Formalising causal inference as prediction on a target population.
A different way of modelling causal inference.
How to understand probability.
We should model ML without assuming true distributions, at least in social settings.
ML research oversimplifies race, and we need to learn how to move beyond categories.
Doh, M., Höltgen, B., Riccio, P., Oliver, N.: "Position: The categorization of race in ML is a flawed premise." ICML. 2025.
Using protected attributes can increase disparate impact without increasing accuracy.
Höltgen, B., Oliver, N.: "Reconsidering fairness through unawareness from the perspective of model multiplicity." ACM EAAMO. 2025.
Exploring the concept of calibration and different ways of measuring it.
Höltgen, B., Williamson, R.C.: "On the richness of calibration." ACM FAccT. 2023.
An online batch selection algorithm for time-efficient training of large ML models.
Mindermann, S., Brauner, J., Razzak, M., Sharma, M., Kirsch, A., Xu, W., Höltgen, B., Gomez, A.N., Morisot, A., Farquhar, S., Gal, Y.: "Prioritized training on points that are learnable, worth learning, and not yet learned." ICML. 2022.
An algorithm for generating counterfactual explanations. My second master's thesis.
Höltgen, B., Schut, L., Brauner, J., Gal, Y.: "DeDUCE: Generating counterfactual explanations efficiently." NeurIPS Workshop: eXplainable AI approaches for debugging and diagnosis. 2021.
An algorithm for causal representation learning and a reflection on causal variables. My first master's thesis.
Höltgen, B.: "Encoding causal macrovariables." NeurIPS Workshop: Causal Inference & Machine Learning: Why now?. 2021.
The title says it all.
Heinzelmann, N., Höltgen, B., Tran, V.: "Moral discourse boosts confidence in moral judgments." Philosophical Psychology 34:8, 1192-1216. 2021.
Listening to scientific rockstars is not good for science.
Höltgen, B.: "Structure-sensitive testimonial norms." European Journal for Philosophy of Science 11:80. 2021.