Our research: understanding AI systems, using them responsibly, making them better
Artificial intelligence is transforming the economy and society, yet the capabilities and limits of these applications are often hard to assess. At the KI-Akademie OWL, we investigate how AI systems can be made safer, more efficient, and more transparent. The European Union's AI Act regulates AI methods according to their potential risk. But what exactly does this mean, and what does responsible human use of AI require?
We examine the risks of highly complex AI models, help companies and users assess them, and develop alternative, tailor-made solutions that require less computing power and data. Our interdisciplinary approach combines engineering with the social sciences and humanities to make AI methods understandable, inclusive, and sustainable.
Our focus is on user-friendly AI technologies, explainability, and trust. Our research ranges from fundamental algorithms to practical applications that feed directly into industry, education, and society. In doing so, we concentrate on the domains of AI safety and AI methods that work with small amounts of data in the field of inclusion.
Our goal: AI systems that are not only powerful but also comprehensible and can be deployed responsibly.
Our Publications
2026
Kuhl, Ulrike; Bush, Annika
When Bias Backfires: The Modulatory Role of Counterfactual Explanations on the Adoption of Algorithmic Bias in XAI-Supported Human Decision-Making Proceedings Article
In: Guidotti, Riccardo; Schmid, Ute; Longo, Luca (Ed.): Explainable Artificial Intelligence, pp. 249–273, Springer Nature Switzerland, Cham, 2026, ISBN: 978-3-032-08333-3.
@inproceedings{10.1007/978-3-032-08333-3_12,
title = {When Bias Backfires: The Modulatory Role of Counterfactual Explanations on the Adoption of Algorithmic Bias in XAI-Supported Human Decision-Making},
author = {Ulrike Kuhl and Annika Bush},
editor = {Riccardo Guidotti and Ute Schmid and Luca Longo},
isbn = {978-3-032-08333-3},
year = {2026},
date = {2026-01-01},
booktitle = {Explainable Artificial Intelligence},
pages = {249–273},
publisher = {Springer Nature Switzerland},
address = {Cham},
abstract = {Although the integration of artificial intelligence (AI) into everyday tasks improves efficiency and objectivity, it also risks transmitting bias to human decision-making. In this study, we conducted a controlled experiment that simulated hiring decisions to examine how biased AI recommendations - augmented with or without counterfactual explanations - influence human judgment over time. Participants, acting as hiring managers, completed 60 decision trials divided into a baseline phase without AI, followed by a phase with biased (X)AI recommendations (favoring either male or female candidates), and a final post-interaction phase without AI. Our results indicate that the participants followed the AI recommendations 70% of the time when the qualifications of the given candidates were comparable. Yet, only a fraction of participants detected the gender bias (8 out of 294). Crucially, exposure to biased AI altered participants' inherent preferences: in the post-interaction phase, participants' independent decisions aligned with the bias when no counterfactual explanations were provided before, but reversed the bias when explanations were given. Reported trust did not differ significantly across conditions. Confidence varied throughout the study phases after exposure to male-biased AI, indicating nuanced effects of AI bias on decision certainty. Our findings point to the importance of calibrating XAI to avoid unintended behavioral shifts in order to safeguard equitable decision-making and prevent the adoption of algorithmic bias. In the interest of reproducible research, study data is available at: https://github.com/ukuhl/BiasBackfiresXAI2025.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gusmita, Ria Hari; Firmansyah, Asep Fajar; Zahera, Hamada M.; Ngomo, Axel-Cyrille Ngonga
ELEVATE-ID: Extending Large Language Models for End-to-End Entity Linking Evaluation in Indonesian Journal Article
In: Data & Knowledge Engineering, vol. 161, pp. 102504, 2026, ISSN: 0169-023X.
@article{GUSMITA2026102504,
title = {ELEVATE-ID: Extending Large Language Models for End-to-End Entity Linking Evaluation in Indonesian},
author = {Ria Hari Gusmita and Asep Fajar Firmansyah and Hamada M. Zahera and Axel-Cyrille Ngonga Ngomo},
url = {https://www.sciencedirect.com/science/article/pii/S0169023X25000990},
doi = {10.1016/j.datak.2025.102504},
issn = {0169-023X},
year = {2026},
date = {2026-01-01},
urldate = {2026-01-01},
journal = {Data & Knowledge Engineering},
volume = {161},
pages = {102504},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2025
Ngoli, Tatiana Moteu; Kouagou, N'Dah Jean; Zahera, Hamada M.; Ngomo, Axel-Cyrille Ngonga
Benchmarking Knowledge Editing using Logical Rules Proceedings Article Forthcoming
In: The Semantic Web – ISWC 2025, Forthcoming.
@inproceedings{moteu2025benchmarkingke,
title = {Benchmarking Knowledge Editing using Logical Rules},
author = {Tatiana Moteu Ngoli and N'Dah Jean Kouagou and Hamada M. Zahera and Axel-Cyrille Ngonga Ngomo},
url = {https://papers.dice-research.org/2025/ISWC_Benchmarking-KE/public.pdf},
year = {2025},
date = {2025-12-31},
urldate = {2025-01-01},
booktitle = {The Semantic Web – ISWC 2025},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
Bakschik, Robert; Holst, Christoph-Alexander; Lohweg, Volker
A New Approach to Time Series Anomaly Detection using the wavKAN Architecture Proceedings Article Forthcoming
In: IEEE International Conference on Emerging Technologies and Factory Automation - ETFA 2025, pp. 0, Porto, Portugal, Forthcoming.
@inproceedings{3077,
title = {A New Approach to Time Series Anomaly Detection using the wavKAN Architecture},
author = {Robert Bakschik and Christoph-Alexander Holst and Volker Lohweg},
year = {2025},
date = {2025-12-31},
urldate = {2025-01-01},
booktitle = {IEEE International Conference on Emerging Technologies and Factory Automation - ETFA 2025},
volume = {30},
pages = {0},
address = {Porto, Portugal},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
Lammers, Kathrin; Vaquet, Valerie; Vaquet, Jonas; Hammer, Barbara
Realistic Benchmarks for Fair Stream Learning Proceedings Article Forthcoming
In: 32nd International Conference on Neural Information Processing, ICONIP 2025, Forthcoming.
@inproceedings{lammers2025realistic,
title = {Realistic Benchmarks for Fair Stream Learning},
author = {Kathrin Lammers and Valerie Vaquet and Jonas Vaquet and Barbara Hammer},
year = {2025},
date = {2025-12-31},
booktitle = {32nd International Conference on Neural Information Processing, ICONIP 2025},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
Lammers, Kathrin; Vaquet, Valerie; Hammer, Barbara
Continuous Fair SMOTE – Fairness-Aware Stream Learning from Imbalanced Data Proceedings Article Forthcoming
In: 34th International Conference on Artificial Neural Networks, ICANN 2025, Forthcoming.
@inproceedings{lammers2025continuous,
title = {Continuous Fair SMOTE – Fairness-Aware Stream Learning from Imbalanced Data},
author = {Kathrin Lammers and Valerie Vaquet and Barbara Hammer},
year = {2025},
date = {2025-12-31},
booktitle = {34th International Conference on Artificial Neural Networks, ICANN 2025},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
Kuhl, Ulrike; Bush, Annika
Contextualizing Counterfactuals: Gender Differences in Alignment with Biased (X)AI Proceedings Article Forthcoming
In: 3rd TRR 318 Conference: Contextualizing Explanations (ContEx25), Forthcoming.
@inproceedings{nokey,
title = {Contextualizing Counterfactuals: Gender Differences in Alignment with Biased (X)AI},
author = {Ulrike Kuhl and Annika Bush},
year = {2025},
date = {2025-12-31},
urldate = {2025-01-01},
booktitle = {3rd TRR 318 Conference: Contextualizing Explanations (ContEx25)},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
Sharma, Arnab; Kouagou, N'Dah Jean; Ngomo, Axel-Cyrille Ngonga
Resilience in Knowledge Graph Embeddings Journal Article
In: Transactions on Graph Data and Knowledge, vol. 3, no. 2, pp. 1:1–1:38, 2025, ISSN: 2942-7517.
@article{sharma_et_al:TGDK.3.2.1,
title = {Resilience in Knowledge Graph Embeddings},
author = {Arnab Sharma and N'Dah Jean Kouagou and Axel-Cyrille Ngonga Ngomo},
url = {https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.1},
doi = {10.4230/TGDK.3.2.1},
issn = {2942-7517},
year = {2025},
date = {2025-10-15},
urldate = {2025-01-01},
journal = {Transactions on Graph Data and Knowledge},
volume = {3},
number = {2},
pages = {1:1–1:38},
publisher = {Schloss Dagstuhl – Leibniz-Zentrum für Informatik},
address = {Dagstuhl, Germany},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bäumer, Frederik; Brandt-Pook, Hans; Huppert, Christian; Šir, Yves; Stahlkopf, Alexander; Sang, Florian; Ullrich, Nadja
Für eine inklusive digitale Zukunft: Ableismus-sensible Chatbots gestalten Proceedings Article
In: KI2025@HSBI Zukunft im Fokus, Bielefeld, 2025.
@inproceedings{6182,
title = {Für eine inklusive digitale Zukunft: Ableismus-sensible Chatbots gestalten},
author = {Frederik Bäumer and Hans Brandt-Pook and Christian Huppert and Yves Šir and Alexander Stahlkopf and Florian Sang and Nadja Ullrich},
doi = {10.60802/sidas.2025.2},
year = {2025},
date = {2025-09-10},
urldate = {2025-01-01},
booktitle = {KI2025@HSBI Zukunft im Fokus},
volume = {2},
number = {2025},
address = {Bielefeld},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Ali, Manzoor; Speck, René; Zahera, Hamada M.; Saleem, Muhammad; Moussallem, Diego; Ngomo, Axel-Cyrille Ngonga
Multilingual Relation Extraction: A Survey Journal Article
In: IEEE Access, vol. 13, pp. 151907-151933, 2025.
@article{11145032,
title = {Multilingual Relation Extraction: A Survey},
author = {Manzoor Ali and René Speck and Hamada M. Zahera and Muhammad Saleem and Diego Moussallem and Axel-Cyrille Ngonga Ngomo},
doi = {10.1109/ACCESS.2025.3604258},
year = {2025},
date = {2025-08-29},
urldate = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {151907-151933},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Herzig, T; Marschner, C; Ostrau, C; Held, S; Rickermann, J; Schenck, W; Uphaus, A; Amelung, R
Softwaregestützte Analyse geriatrischer Entlassbriefe Journal Article
In: Z. Gerontol. Geriatr., 2025.
@article{Herzig2025-au,
title = {Softwaregestützte Analyse geriatrischer Entlassbriefe},
author = {T Herzig and C Marschner and C Ostrau and S Held and J Rickermann and W Schenck and A Uphaus and R Amelung},
doi = {10.1007/s00391-025-02478-6},
year = {2025},
date = {2025-08-18},
urldate = {2025-08-01},
journal = {Z. Gerontol. Geriatr.},
publisher = {Springer Science and Business Media LLC},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Palomino, Alonso; Buschhüter, David; Roller, Roland; Pinkwart, Niels; Paassen, Benjamin
EdTec-ItemGen: Enhancing Retrieval-Augmented Item Generation Through Key Point Extraction Proceedings Article
In: Zhang, Yuji; Chen, Canyu; Li, Sha; Geva, Mor; Han, Chi; Wang, Xiaozhi; Feng, Shangbin; Gao, Silin; Augenstein, Isabelle; Bansal, Mohit; Li, Manling; Ji, Heng (Ed.): Proceedings of the 3rd Workshop on Towards Knowledgeable Foundation Models (KnowFM), pp. 14–25, Association for Computational Linguistics, Vienna, Austria, 2025, ISBN: 979-8-89176-283-1.
@inproceedings{palomino-etal-2025-edtec,
title = {EdTec-ItemGen: Enhancing Retrieval-Augmented Item Generation Through Key Point Extraction},
author = {Alonso Palomino and David Buschhüter and Roland Roller and Niels Pinkwart and Benjamin Paassen},
editor = {Yuji Zhang and Canyu Chen and Sha Li and Mor Geva and Chi Han and Xiaozhi Wang and Shangbin Feng and Silin Gao and Isabelle Augenstein and Mohit Bansal and Manling Li and Heng Ji},
url = {https://aclanthology.org/2025.knowllm-1.2/},
doi = {10.18653/v1/2025.knowllm-1.2},
isbn = {979-8-89176-283-1},
year = {2025},
date = {2025-08-01},
urldate = {2025-08-01},
booktitle = {Proceedings of the 3rd Workshop on Towards Knowledgeable Foundation Models (KnowFM)},
pages = {14–25},
publisher = {Association for Computational Linguistics},
address = {Vienna, Austria},
abstract = {A major bottleneck in exam construction involves designing test items (i.e., questions) that accurately reflect key content from domain-aligned curricular materials. For instance, during formative assessments in vocational education and training (VET), exam designers must generate updated test items that assess student learning progress while covering the full breadth of topics in the curriculum. Large language models (LLMs) can partially support this process, but effective use requires careful prompting and task-specific understanding. We propose a new key point extraction method for retrieval-augmented item generation that enhances the process of generating test items with LLMs. We exhaustively evaluated our method using a TREC-RAG approach, finding that prompting LLMs with key content rather than directly using full curricular text passages significantly improves item quality regarding key information coverage by 8%. To demonstrate these findings, we release EdTec-ItemGen, a retrieval-augmented item generation demo tool to support item generation in education.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Hinder, Fabian; Vaquet, Valerie; Hammer, Barbara
Adversarial Attacks for Drift Detection Proceedings Article
In: ESANN 2025 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2025, ISBN: 9782875870933.
@inproceedings{Hinder2025Adversarial,
title = {Adversarial Attacks for Drift Detection},
author = {Fabian Hinder and Valerie Vaquet and Barbara Hammer},
doi = {10.14428/esann/2025.ES2025-82},
isbn = {9782875870933},
year = {2025},
date = {2025-04-25},
booktitle = {ESANN 2025 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Roberts, Isaac; Hinder, Fabian; Vaquet, Valerie; Schulz, Alexander; Hammer, Barbara
Conceptualizing Concept Drift Proceedings Article
In: ESANN 2025 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2025, ISBN: 9782875870933.
@inproceedings{roberts2025conceptualizing,
title = {Conceptualizing Concept Drift},
author = {Isaac Roberts and Fabian Hinder and Valerie Vaquet and Alexander Schulz and Barbara Hammer},
doi = {10.14428/esann/2025.ES2025-117},
isbn = {9782875870933},
year = {2025},
date = {2025-04-25},
urldate = {2025-01-01},
booktitle = {ESANN 2025 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Vaquet, Valerie; Vaquet, Jonas; Hinder, Fabian; Hammer, Barbara
Compression-based kNN for Class Incremental Continual Learning Proceedings Article
In: ESANN 2025 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2025, ISBN: 9782875870933.
@inproceedings{vaquet2025compression,
title = {Compression-based kNN for Class Incremental Continual Learning},
author = {Valerie Vaquet and Jonas Vaquet and Fabian Hinder and Barbara Hammer},
doi = {10.14428/esann/2025.ES2025-75},
isbn = {9782875870933},
year = {2025},
date = {2025-04-25},
urldate = {2025-01-01},
booktitle = {ESANN 2025 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Sanaullah; Honda, H.; Schneider, A.; Waßmuth, J.; Jungeblut, T.
Automating Neural Model Selection in Spiking Neural Networks Using AutoML Techniques Proceedings Article
In: 2025 22nd International Learning and Technology Conference (L&T), pp. 274-279, 2025.
@inproceedings{sanaullah_2025_automating,
title = {Automating Neural Model Selection in Spiking Neural Networks Using AutoML Techniques},
author = {Sanaullah and H. Honda and A. Schneider and J. Waßmuth and T. Jungeblut},
doi = {10.1109/LT64002.2025.10941536},
year = {2025},
date = {2025-04-01},
urldate = {2025-01-01},
booktitle = {2025 22nd International Learning and Technology Conference (L&T)},
pages = {274-279},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Srivastava, Nikit; Kuchelev, Denis; Ngoli, Tatiana Moteu; Shetty, Kshitij; Roeder, Michael; Zahera, Hamada; Moussallem, Diego; Ngomo, Axel-Cyrille Ngonga
LOLA – An Open-Source Massively Multilingual Large Language Model Proceedings Article
In: Proceedings of the 31st International Conference on Computational Linguistics, pp. 6420–6446, 2025.
@inproceedings{srivastava2025lola,
title = {LOLA – An Open-Source Massively Multilingual Large Language Model},
author = {Nikit Srivastava and Denis Kuchelev and Tatiana Moteu Ngoli and Kshitij Shetty and Michael Roeder and Hamada Zahera and Diego Moussallem and Axel-Cyrille Ngonga Ngomo},
url = {https://aclanthology.org/2025.coling-main.428/},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
pages = {6420–6446},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}