A taxonomy of bias: sensemaking, heretical physics, and the Tom Hanks/Bill Murray multiverse
005 | 2025-11-25
Bias has a well-known definition in the world of AI/ML programming. But today’s paper asks us to expand that definition and consider how cultural, organizational, and human forces intersect with development. The authors of “Identifying and Categorizing Bias in AI/ML for Earth Sciences” argue that considering bias in the abstract isn’t enough: before developers can mitigate bias, they need to be able to identify it in all its modes & forms. Their taxonomy of bias is helpful, and it inspired us to create an actionable checklist of our own.
Paper
Identifying and Categorizing Bias in AI/ML for Earth Sciences, McGovern et al.
Chapters
- 00:00:03 - Intro
- 00:02:39 - Abstract
- 00:03:33 - Discussion of framing
- 00:09:03 - A taxonomy of bias
- 00:47:52 - Our proposed checklist for mitigating bias
- 01:08:14 - Reading recs
A checklist for mitigating bias (in AI development, or really any kind of technological development)
- Use the language of “bias”, acknowledging both social and technical bias
- Define your specific user and use case early
- Establish a clear baseline for comparison (see the sketch after this list)
- Ensure diversity of perspectives on your team
- Practice reflexivity (question your assumptions, and keep questioning them throughout the development process)
- Examine your incentives vs. end-user incentives
- Prioritize transparency (in your decision-making and in your incentives & goals)
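Most of these items are about process rather than code, but “establish a clear baseline” is the one that shows up most concretely in an Earth-science workflow. Below is a minimal Python sketch of that idea, with all data and region labels invented for illustration: an ML forecast is compared against a naive persistence baseline, and the error is also disaggregated by group so a good average score can’t hide a biased one.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Hypothetical example: temperature forecasts for two made-up regions.
rng = np.random.default_rng(0)
y_true = rng.normal(15, 5, size=200)              # observed values
y_model = y_true + rng.normal(0, 1.5, size=200)   # ML model predictions
y_persistence = np.roll(y_true, 1)                # naive baseline: "same as last step"
region = np.where(np.arange(200) < 100, "coastal", "inland")

# 1. Establish the baseline: the ML model only adds value if it beats this.
print("baseline MAE:", mean_absolute_error(y_true, y_persistence))
print("model MAE:   ", mean_absolute_error(y_true, y_model))

# 2. Disaggregate by group so an aggregate metric can't mask uneven performance.
for g in np.unique(region):
    mask = region == g
    print(g,
          "baseline:", round(mean_absolute_error(y_true[mask], y_persistence[mask]), 2),
          "model:",    round(mean_absolute_error(y_true[mask], y_model[mask]), 2))
```

The particular baseline (persistence) and grouping variable (region) are placeholders; the point is simply that both choices should be made explicitly and early, per the checklist above.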
Recommended reading
- AI2ES Newsletter
- Groundhog Day (film AND musical theater adaptation)
- “Science as a Vocation” by Max Weber
- The Book of the New Sun by Gene Wolfe (four-book series)
- Infinite Powers by Steven Strogatz