At the core of the project is an interactive analysis and learning tool that examines online content using methods from natural language processing (NLP) and explainable artificial intelligence (XAI). The system identifies typical patterns and characteristics of manipulative content and visualizes them in a transparent and understandable way for users. This approach not only enables the classification of misleading content but also helps users understand the mechanisms behind disinformation.
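The analysis step described above can be illustrated with a deliberately minimal sketch: a log-odds token scorer trained on a tiny hypothetical labeled corpus, whose per-token contributions form an additive, inspectable explanation in the spirit of XAI feature attributions. All data, function names, and the scoring scheme here are illustrative assumptions, not the project's actual method.

```python
import math
from collections import Counter

# Hypothetical toy corpus: 1 = manipulative, 0 = neutral.
docs = [
    ("shocking secret they dont want you to know", 1),
    ("you wont believe this miracle cure", 1),
    ("share before they delete this shocking truth", 1),
    ("the city council approved the new budget", 0),
    ("researchers published a peer reviewed study", 0),
    ("the weather service issued a routine forecast", 0),
]

def tokenize(text):
    return text.lower().split()

# Token frequencies per class, with add-one smoothing applied later.
counts = {0: Counter(), 1: Counter()}
for text, label in docs:
    counts[label].update(tokenize(text))

vocab = set(counts[0]) | set(counts[1])
totals = {c: sum(counts[c].values()) for c in counts}

def token_weight(tok):
    """Smoothed log-odds of a token in manipulative vs. neutral text."""
    p1 = (counts[1][tok] + 1) / (totals[1] + len(vocab))
    p0 = (counts[0][tok] + 1) / (totals[0] + len(vocab))
    return math.log(p1 / p0)

def explain(text):
    """Per-token additive contributions; positive pushes toward 'manipulative'."""
    return {tok: round(token_weight(tok), 3) for tok in tokenize(text)}

def classify(text):
    """Classify as manipulative if the summed token evidence is positive."""
    return sum(token_weight(t) for t in tokenize(text)) > 0
```

Because the score is a plain sum over tokens, `explain` can be rendered directly as a highlighted text view, which is the kind of transparent, feature-level presentation the dashboard aims for, albeit with far richer models in practice.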
A key element of the project is a participatory development process. Teachers, educational multipliers, and students are actively involved in workshops and co-design activities to identify requirements and develop suitable concepts for a digital learning tool. The resulting prototype will be tested in educational settings and iteratively improved. In addition, guidelines and workshop formats will be developed to support teachers in integrating the dashboard into classroom teaching.
FZI is responsible for the conceptual design and development of the AI-powered dashboard. This includes developing the NLP- and XAI-based methods for analyzing disinformation, as well as the technical implementation of the system. In addition, FZI organizes the participatory development process involving schools, teachers, and students: through workshops and co-design activities, requirements are identified and prototypes are jointly developed and tested. FZI also conducts scientific evaluations and supports the iterative improvement of the system in educational practice.
Drawing on our expertise in applied AI, computational social science, and medical technology, we develop solutions for politics, business, and civil society.


