We should avoid the assumption of data-generating probability distributions in social settings
In Progress.
Abstract:
Machine Learning research, including work promoting fair or equitable algorithms, relies heavily on the concept of a data-generating probability distribution. The standard presumption is that, since data points are 'sampled from' such a distribution, one can learn about this distribution from observed data and thus predict future data points that are also drawn from it. We argue, however, that such true probability distributions do not exist and that they should not be invoked uncritically. We show that alternative frameworks, which focus directly on relevant populations rather than abstract distributions, are available and leave classical learning theory almost unchanged. Furthermore, we argue that the assumption of true probabilities or data-generating distributions can be misleading, obscuring both the choices made and the goals pursued in machine learning practice. Based on these considerations, this position paper argues that, at least in social settings, machine learning work should avoid assuming data-generating probability distributions.
Höltgen, B. and Williamson, R.C.: "Five reasons against assuming a data-generating distribution in Machine Learning." ICML Workshop on Humans, Algorithmic Decision-Making and Society, 2024.