We, the Institute of AI in Management at LMU Munich, are excited about AI in management and the dynamic developments in this field. That is why we offer first-hand insights into the latest research, presented by high-profile scientists from all over the world. We are very honoured to welcome outstanding guest speakers for the keynotes every semester.
All sessions will be available via Zoom to everyone who is interested. We aim to provide an overview of current trends in AI research. The sessions take place on Thursdays and consist of a 45- to 60-minute presentation, followed by discussion, feedback, and Q&A. We look forward to seeing you there.
All information on dates, times, speakers, and topics, including Zoom links, will be published on this event page as the dates approach.
You are invited to sign up for our newsletter, through which we announce all upcoming events in this series. Please follow the link to our registration page.
The series is a joint initiative, led by LMU Munich (Prof. Stefan Feuerriegel) together with co-hosts from other leading universities:
- Prof. Markus Weinmann, University of Cologne
- Prof. Stefan Lessmann, Humboldt University Berlin
- Prof. Mathias Kraus, Friedrich-Alexander University Erlangen-Nuremberg
- Prof. Niklas Kühl, University of Bayreuth
- Dr. Michael Vössing, Karlsruhe Institute of Technology
- Prof. Oliver Müller, University of Paderborn
- Prof. Nicolas Pröllochs, Justus-Liebig-University Gießen
- Prof. Christian Janiesch, TU Dortmund
- Prof. Gunther Gust, University of Würzburg
- Prof. Tobias Brandt, University of Münster
- Prof. Yash Raj Shrestha, University of Lausanne
- Prof. Burkhardt Funk, Leuphana University Lüneburg
- Prof. Nadja Klein, TU Dortmund
- Prof. Martin Spindler, University of Hamburg
- Prof. Niki Kilbertus, TU Munich
- Prof. Stefan Bauer, TU Munich
- Prof. Henner Gimpel, University of Hohenheim
Click below to see our past speakers and topics:
Thursday 08.08.2024
Guest Speaker: Prof. Michael Oberst, Johns Hopkins University
Presentation: Auditing Fairness under Unobserved Confounding
Time: 16:00 CEST
Language: English
Abstract: Inequity in resource allocation has been well-documented in many domains, such as healthcare. Causal measures of equity / fairness seek to isolate biases in allocation that are not explained by other factors, such as underlying need. However, these fairness measures require the (strong) assumption that we observe all relevant indicators of need, an assumption that rarely holds in practice. For instance, if resources are allocated based on indicators of need that are not recorded in our data ("unobserved confounders"), we may understate (or overstate) the amount of inequity. In this talk, I will present work demonstrating that we can still give informative bounds on certain causal measures of fairness, even while relaxing (or even eliminating) the assumption that all relevant indicators of need are observed. We use the fact that in many real-world settings (e.g., the release of a new treatment) we have data from prior to any allocation, which can be used to derive unbiased estimates of need. This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner. For instance, in a real-world study of Paxlovid allocation, we show that observed racial inequity cannot be explained by unobserved confounders of the same strength as important observed covariates. (https://arxiv.org/abs/2403.14713)
About the speaker: Michael Oberst is an Assistant Professor of Computer Science at Johns Hopkins. His research focuses on making sure that machine learning in healthcare is safe and effective, using tools from causal inference and statistics. His work has been published at a range of machine learning venues (NeurIPS / ICML / AISTATS / KDD), including work with clinical collaborators from Mass General Brigham, NYU Langone, and Beth Israel Deaconess Medical Center. He has also worked on clinical applications of machine learning, including work on learning effective antibiotic treatment policies (published in Science Translational Medicine). He earned his undergraduate degree in Statistics at Harvard, and his PhD in Computer Science at MIT. Prior to joining Johns Hopkins, he was a postdoctoral associate in the Machine Learning Department at Carnegie Mellon University.
Thursday 18.07.2024
Guest Speaker: Prof. Yixin Wang, University of Michigan
Presentation: Representation Learning: A Causal Perspective
Abstract: Representation learning constructs low-dimensional representations to summarize essential features of high-dimensional data like images and texts. Ideally, such a representation should efficiently capture non-spurious features of the data. It shall also be disentangled so that we can interpret what feature each of its dimensions captures. However, these desiderata are often intuitively defined and challenging to quantify or enforce.
In this talk, we take on a causal perspective of representation learning. We show how desiderata of representation learning can be formalized using counterfactual notions, enabling metrics and algorithms that target efficient, non-spurious, and disentangled representations of data. We discuss the theoretical underpinnings of the algorithm and illustrate its empirical performance in both supervised and unsupervised representation learning.
This is joint work with Kartik Ahuja, Yoshua Bengio, Michael Jordan, Divyat Mahajan, and Amin Mansouri.
[1] https://arxiv.org/abs/2109.03795
[2] https://arxiv.org/abs/2209.11924
[3] https://arxiv.org/abs/2310.02854
About the speaker: Yixin Wang is an assistant professor of statistics at the University of Michigan. She works in the fields of Bayesian statistics, machine learning, and causal inference. Previously, she was a postdoctoral researcher with Professor Michael Jordan at the University of California, Berkeley. She completed her PhD in statistics at Columbia, advised by Professor David Blei, and her undergraduate studies in mathematics and computer science at the Hong Kong University of Science and Technology. Her research has been recognized by the j-ISBA Blackwell-Rosenbluth Award, ICSA Conference Young Researcher Award, ISBA Savage Award Honorable Mention, ACIC Tom Ten Have Award Honorable Mention, and INFORMS data mining and COPA best paper awards.
Thursday 06.06.2024
Guest Speaker: Prof. Fredrik Johansson, Chalmers University of Technology
Presentation: Interpretable Prediction with Missing Values
Abstract: Missing values plague many application domains of machine learning, both in training data and in deployment. Healthcare is just one example: patient records are notorious for omissions of important variables, and collecting them during clinical practice can be costly and time-consuming. Healthcare also tends to demand interpretability so that predictions can be quickly calculated and justified, often using rule-based risk scores. Surprisingly, prediction with missing values and interpretability are largely incompatible using classical methods: imputation obfuscates predictions, and algorithms designed for interpretability typically have no native handling of prediction with missing values. In this talk, I will introduce two solutions to this problem, suitable under different conditions, and propose directions for future work.
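To make the tension in the abstract concrete, here is a minimal illustrative sketch (not the methods presented in the talk) contrasting the classical impute-then-predict pipeline with a tree ensemble that handles missing values natively. The dataset, feature count, and missingness rate are invented for illustration.

```python
# Illustrative sketch only: contrasts imputation-based prediction with a
# model that handles missing values natively. Synthetic data; not the
# approach presented in the talk.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))             # hypothetical clinical features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical binary outcome
X[rng.random(X.shape) < 0.3] = np.nan     # ~30% of entries go missing

# Classical route: impute, then fit an interpretable linear model.
# Predictions now depend on imputed values no clinician ever observed,
# which obscures how any individual risk score was produced.
imputed_model = make_pipeline(SimpleImputer(strategy="mean"),
                              LogisticRegression())
imputed_model.fit(X, y)

# Native route: gradient-boosted trees send missing values down a learned
# branch, so no imputation is needed; but the resulting ensemble is no
# longer a compact, rule-based risk score.
native_model = HistGradientBoostingClassifier().fit(X, y)

x_new = np.array([[0.5, np.nan, 1.2, np.nan]])  # new patient with gaps
print(imputed_model.predict_proba(x_new))
print(native_model.predict_proba(x_new))
```

Note that neither route resolves the dilemma the talk addresses: the imputed pipeline hides how individual predictions arise, while the natively missing-aware ensemble sacrifices the compact interpretability of a risk score.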
Thursday 23.05.2024
Guest Speaker: Prof. Qian Yang, Cornell University
Presentation: Innovating AI Products for Social Good in the Age of Foundational Models
Abstract: Accounting for AI's unintended consequences, whether misinformation on social media or issues of fairness and social justice, increasingly requires AI systems designers to go beyond immediate user experiences of the system and consider human-AI interactions at a societal scale. The growing ubiquity of large pre-trained language models (LLMs) further accelerates this trend. So, how do LLMs change the way we, AI product designers and human-AI interaction researchers, work? How might we innovate LLM applications for social good? In this talk, Professor Qian Yang draws upon her lab's research on LLMs for education and for mental healthcare to explore these questions.