We, the Institute of AI in Management at LMU München, are excited about AI and the dynamic developments in this field. That is why we want to offer first-hand insights into the current research of distinguished scholars from around the world. Every semester, we are delighted to attract outstanding guest speakers for our lecture series.
All talks take place online and are open via Zoom to anyone interested. Our goal is to give an overview of current trends in AI research. The talks are always held on Thursdays and consist of an approximately 45-60 minute presentation, followed by discussion, feedback, and Q&A. We look forward to welcoming you.
All information on dates, guest speakers, and their topics, including Zoom links, will be published on this event page over time.
You are warmly invited to sign up for our newsletter, through which we announce all upcoming events in this series. You can find our registration page here.
The series is a joint initiative with partners from leading national universities, led by Prof. Stefan Feuerriegel, LMU München:
- Prof. Markus Weinmann, University of Cologne
- Prof. Stefan Lessmann, Humboldt University Berlin
- Prof. Mathias Kraus, Friedrich-Alexander University Erlangen-Nuremberg
- Prof. Niklas Kühl, University of Bayreuth
- Dr. Michael Vössing, Karlsruhe Institute of Technology
- Prof. Oliver Müller, University of Paderborn
- Prof. Nicolas Pröllochs, Justus-Liebig-University Gießen
- Prof. Christian Janiesch, TU Dortmund
- Prof. Gunther Gust, University of Würzburg
- Prof. Tobias Brandt, University of Münster
- Prof. Yash Raj Shrestha, University of Lausanne
- Prof. Burkhardt Funk, Leuphana University Lüneburg
- Prof. Nadja Klein, TU Dortmund
- Prof. Martin Spindler, University of Hamburg
- Prof. Niki Kilbertus, TU Munich
- Prof. Stefan Bauer, TU Munich
- Prof. Henner Gimpel, University of Hohenheim
- Prof. Alexander Benlian, TU Darmstadt
- Prof. Oliver Hinz, Goethe University, Frankfurt
- Prof. Ekaterina Jussupow, TU Darmstadt
- Prof. Anne-Sophie Mayer, Vrije Universiteit Amsterdam
Our past series
Thu, August 8, 2024
Guest speaker: Prof. Michael Oberst, Johns Hopkins University
Presentation: Auditing Fairness under Unobserved Confounding
Abstract: Inequity in resource allocation has been well-documented in many domains, such as healthcare. Causal measures of equity / fairness seek to isolate biases in allocation that are not explained by other factors, such as underlying need. However, these fairness measures require the (strong) assumption that we observe all relevant indicators of need, an assumption that rarely holds in practice. For instance, if resources are allocated based on indicators of need that are not recorded in our data ("unobserved confounders"), we may understate (or overstate) the amount of inequity. In this talk, I will present work demonstrating that we can still give informative bounds on certain causal measures of fairness, even while relaxing (or even eliminating) the assumption that all relevant indicators of need are observed. We use the fact that in many real-world settings (e.g., the release of a new treatment) we have data from prior to any allocation, which can be used to derive unbiased estimates of need. This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner. For instance, in a real-world study of Paxlovid allocation, we show that observed racial inequity cannot be explained by unobserved confounders of the same strength as important observed covariates. (https://arxiv.org/abs/2403.14713)
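The core pitfall the abstract describes, hidden indicators of need masquerading as inequity, can be illustrated with a small simulation. This is a purely hypothetical sketch, not the paper's method or data; all variable names, effect sizes, and thresholds are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: a binary group attribute, a recorded severity
# measure (need_obs), and an unrecorded indicator of need (need_hidden)
# that happens to correlate with group membership.
group = rng.integers(0, 2, n)
need_obs = rng.normal(0, 1, n)
need_hidden = rng.normal(0, 1, n) + 0.5 * group

# Allocation depends only on total need plus noise; it never uses `group`.
alloc = (need_obs + need_hidden + rng.normal(0, 0.5, n)) > 1.0

# Naive audit: compare allocation rates adjusting only for observed need.
high_obs = need_obs > 0
rate0 = alloc[(group == 0) & high_obs].mean()
rate1 = alloc[(group == 1) & high_obs].mean()

# The gap is nonzero even though the allocation rule is group-blind:
# the unobserved confounder shows up as apparent inequity.
print(f"apparent disparity among high-observed-need patients: {rate1 - rate0:.3f}")
```

The talk's contribution goes in the opposite direction: rather than simulating the confounder, it derives informative bounds on causal fairness measures even when such hidden indicators of need cannot be observed.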
About the speaker: Michael Oberst is an Assistant Professor of Computer Science at Johns Hopkins. His research focuses on making sure that machine learning in healthcare is safe and effective, using tools from causal inference and statistics. His work has been published at a range of machine learning venues (NeurIPS / ICML / AISTATS / KDD), including work with clinical collaborators from Mass General Brigham, NYU Langone, and Beth Israel Deaconess Medical Center. He has also worked on clinical applications of machine learning, including work on learning effective antibiotic treatment policies (published in Science Translational Medicine). He earned his undergraduate degree in Statistics at Harvard, and his PhD in Computer Science at MIT. Prior to joining Johns Hopkins, he was a postdoctoral associate in the Machine Learning Department at Carnegie Mellon University.
Thu, July 18, 2024
Guest speaker: Prof. Yixin Wang, University of Michigan
Presentation: Representation Learning: A Causal Perspective
Abstract: Representation learning constructs low-dimensional representations to summarize essential features of high-dimensional data like images and texts. Ideally, such a representation should efficiently capture non-spurious features of the data. It should also be disentangled so that we can interpret what feature each of its dimensions captures. However, these desiderata are often intuitively defined and challenging to quantify or enforce.
In this talk, we take on a causal perspective of representation learning. We show how desiderata of representation learning can be formalized using counterfactual notions, enabling metrics and algorithms that target efficient, non-spurious, and disentangled representations of data. We discuss the theoretical underpinnings of the algorithm and illustrate its empirical performance in both supervised and unsupervised representation learning.
This is joint work with Kartik Ahuja, Yoshua Bengio, Michael Jordan, Divyat Mahajan, and Amin Mansouri.
[1] https://arxiv.org/abs/2109.03795
[2] https://arxiv.org/abs/2209.11924
[3] https://arxiv.org/abs/2310.02854
About the speaker: Yixin Wang is an assistant professor of statistics at the University of Michigan. She works in the fields of Bayesian statistics, machine learning, and causal inference. Previously, she was a postdoctoral researcher with Professor Michael Jordan at the University of California, Berkeley. She completed her PhD in statistics at Columbia, advised by Professor David Blei, and her undergraduate studies in mathematics and computer science at the Hong Kong University of Science and Technology. Her research has been recognized by the j-ISBA Blackwell-Rosenbluth Award, ICSA Conference Young Researcher Award, ISBA Savage Award Honorable Mention, ACIC Tom Ten Have Award Honorable Mention, and INFORMS data mining and COPA best paper awards.
Thu, June 6, 2024
Guest speaker: Prof. Fredrik Johansson, Chalmers University of Technology
Presentation: Interpretable Prediction with Missing Values
Abstract: Missing values plague many application domains of machine learning, both in training data and in deployment. Healthcare is just one example—patient records are notorious for omissions of important variables, and collecting them during clinical practice can be costly and time-consuming. Healthcare also tends to demand interpretability so that predictions can be quickly calculated and justified, often using rule-based risk scores. Surprisingly, prediction with missing values and interpretability are largely incompatible using classical methods. Imputation obfuscates predictions, and algorithms designed for interpretability typically have no native handling of missing values. In this talk, I will introduce two solutions to this problem, suitable under different conditions, and propose directions for future work.
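The tension the abstract points to can be made concrete with a toy example. The sketch below is purely illustrative and is not one of the speaker's proposed solutions; the rules, thresholds, and point values are invented. It shows one naive way a rule-based risk score can handle a missing value natively, so the prediction stays traceable to the rules that actually fired:

```python
# A tiny rule-based risk score with native missing-value handling:
# a missing input simply contributes nothing, and the explanation
# lists only the rules that could be evaluated.
def risk_score(age=None, sbp=None, creatinine=None):
    score, fired = 0, []
    if age is not None and age >= 65:
        score += 2
        fired.append("age >= 65 (+2)")
    if sbp is not None and sbp < 90:
        score += 3
        fired.append("SBP < 90 (+3)")
    if creatinine is not None and creatinine > 1.5:
        score += 1
        fired.append("creatinine > 1.5 (+1)")
    return score, fired

# A record with a missing lab value still yields a transparent score.
score, fired = risk_score(age=70, sbp=85, creatinine=None)
print(score, fired)
```

By contrast, imputing the missing lab value first would make the resulting score depend on the imputation model, obscuring which rule actually drove the prediction.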
Thu, May 23, 2024
Guest speaker: Prof. Qian Yang, Cornell University
Presentation: Innovating AI Products for Social Good in the Age of Foundational Models
Abstract: Accounting for AI's unintended consequences—whether misinformation on social media or issues of fairness and social justice—increasingly requires AI systems designers to go beyond immediate user experiences of the system and consider human-AI interactions at a societal scale. The increasing ubiquity of large pre-trained language models (LLMs) further exacerbates this trend. So, how do LLMs change the way we, AI product designers and human-AI interaction researchers, work? How might we innovate LLM applications for social good? In this talk, Professor Qian Yang draws upon her lab's research on LLMs for education and mental healthcare to explore these questions.