Dear all,
Starting January 2025, 9 doctoral positions are available within our DFG Research Training Group KEMAI (Knowledge Infusion and Extraction for Explainable Medical AI) at Ulm University in Germany.
The KEMAI team aims to combine the benefits of knowledge- and learning-based systems, not only to achieve state-of-the-art accuracy in medical diagnosis, but also to clearly communicate the resulting predictions to physicians, taking the ethical implications of the medical decision process into account. KEMAI’s main purpose is to provide interdisciplinary training to PhD students from computer science, medicine, and ethics in the area of explainable medical AI. The RTG offers a structured doctoral program that creates an environment in which young scientists can conduct research at the highest level in the field of medical AI. We invite highly motivated candidates with a passion for research and a desire to contribute to an interdisciplinary academic environment to apply for these positions. The positions are fully funded for 3+1 years and come with an E13 salary.
Projects include:
Data Exploitation

• A1 – Harvesting Medical Guidelines using Pre-trained Language Models
Project Leads: Prof. Scherp (Computer Science), Prof. Braun (Computer Science), Dr. Vernikouskaya (Medicine)
This project focuses on researching multimodal pre-trained language models (LMs) that extract symbolic knowledge on medical diagnoses and treatments from input documents. The models will incorporate structured knowledge and represent extracted information using an extended process ontology. The project applies these models to COVID-19-related imaging and treatments, contributing to OpenClinical’s COVID-19 Knowledge reference model and adapting to various benchmarks.

• A2 – Stability Improved Learning with External Knowledge through Contrastive Pre-training
Project Leads: Jun.-Prof. Götz (Medicine), Prof. Scherp (Computer Science)
This project aims to improve machine learning reliability in small-data settings by learning from disconnected datasets using contrastive learning. It investigates whether contrastive learning can reduce classifier susceptibility to confounders, reverse confounding effects, and identify out-of-distribution test samples, and seeks approaches that address these technical challenges.
Knowledge Infusion

• B2 – Semantic Design Patterns for High-Dimensional Diagnostics
Project Leads: Prof. Kestler (Medicine), Prof. M. Beer (Medicine)
This project defines semantic design patterns for incorporating SemDK in ML algorithms to improve clinical predictions and tumor characterization. The patterns will be categorized by their mechanisms and knowledge representation, providing guidelines for application. The project evaluates these patterns in image analysis and in molecular diagnostics based on high-dimensional data.
Knowledge Extraction

• C2 – Learning Search and Decision Mechanisms in Medical Diagnoses
Project Leads: Prof. Neumann (Computer Science), Jun.-Prof. Götz (Medicine)
This project studies principles of human attentive search and object attention for vision-based medical diagnosis. It investigates mechanisms of object-based attention and visual routines for task execution. The goal is to formalize human visual search strategies and integrate them into deep neural networks (DNNs) for improved medical diagnosis.
Model Explanation

• D1 – Accountability of AI-based Medical Diagnoses
Project Leads: Prof. Steger (Ethics), Prof. Ropinski (Computer Science)
This project addresses the ethical analysis of AI system designs for medical diagnoses. It focuses on determining which AI-supported processes need to be explainable and transparent, and on generating comprehensive information that helps users understand AI-driven medical decisions.

• D2 – Explainability, Understanding, and Acceptance Requirements
Project Leads: Prof. Hufendiek (Philosophy), Prof. Glimm (Computer Science), Dr. Lisson (Medicine)
This project applies philosophical insights on understanding and explanation to the use of AI in medical diagnosis. It clarifies the roles of understanding and abductive reasoning in medical diagnosis, identifies conflicts between stakeholders, and suggests ways to develop AI explanations and integrate them with human experts' reasoning processes.
Medical PhD Projects (10 months)

The PhD projects outlined above are complemented by medical PhD projects, which support the technical and ethical projects and are targeted at medical researchers.
For further information on KEMAI and the application process, please visit https://kemai.uni-ulm.de/
Best regards,
Christiane Böhm - Coordinator -
RTG KEMAI
Ulm University
James-Franck-Ring - O27, Room 321
D-89081 Ulm, Germany
phone: +49 731 50 31321
christiane.boehm@uni-ulm.de
kemai@uni-ulm.de