8:00 - 9:00
Registration & Badge Pickup
Conference desk
9:00 - 10:00
Keynote by Juliane Jarke: "Reassembling the Black Box(es) of Algorithmic Fairness: Of Monsters and the Outer Limits of Knowledge Production"
Blauwe Zaal
Research on algorithmic fairness promotes a view of algorithmic systems as black boxes that need to be “opened” and “unpacked” to make their inner workings accessible. Understanding the black box as a mode of inquiry and a knowledge-making practice (rather than a thing), I will explore what exactly researchers and practitioners aim to unpack when they examine algorithmic black boxes, what they consider to be constitutive elements of these black boxes, and what is othered or perceived as “monstrous”. Following scholarship in the social studies of science and technology (STS), the notion of the monster captures what is considered irrelevant to the constitution and inner workings of an algorithmic black box. It is what is excluded and escapes analysis. I will argue, however, that attention to the outer limits of algorithmic black boxes allows us to explore how social actors, temporalities, places, imaginaries, practices, and values matter for knowledge-making about algorithmic fairness. In my talk, I will review three modes of assembling black boxes of machine learning (ML)-based systems: (1) the black box of ML data, (2) the black box of ML algorithms and trained models, and (3) the black box of ML-based systems in practice. In reassembling these three distinct ML black boxes, I demonstrate how generative engagements with algorithmic black boxes and their monsters are for the critical inquiry of algorithmic fairness.
10:00 - 10:30
Coffee break
Voorhof/Senaatszaal
10:30 - 11:30
Lightning round 4
Blauwe Zaal
Equality insights in the development of fairer high-risk AI systems and the control of its discriminatory impacts
Anna Capellà i Ricart
Optimizing Social Network Interventions via Hypergradient-Based Recommender System Design
Giulia De Pasquale
When and Why are Algorithmic Generalizations Morally Wrong? AI, Negligence and (Dis)Respect for Persons
Hugo Cossette-Lefebvre
It will be what we want it to be: sociotechnical and contested systemic risk at the core of the EU’s regulation of platforms’ AI systems
Mateus Correia de Carvalho
Auditing recommender systems under the Digital Services Act: emerging evidence and paths forward
Matteo Fabbri
Toward developing a Social Impact Assessment for AI in the public sector (SIA4AI) framework: key design considerations
Vanessa Dirksen
Fares on Fairness: Using a Total Error Framework to Examine the Role of Measurement and Representation in Training Data on Model Fairness and Bias
Christoph Kern
Why am I Still Seeing This: Measuring the Effectiveness of Ad Controls and Explanations in AI-Mediated Ad Targeting Systems
Jane Castleman
Uncertainty as a Primary Barrier for Trustworthy AI Under the EU AI Act: German SME Perspectives
Simon Jarvers & Chiara Ullstein
11:30 - 12:30
In-depth session 3
Lecture Hall 4 and Lecture Hall 5
Lecture Hall 4
Towards a system-theoretic approach to algorithmic (un)fairness
Eva de Winkel
A Five-Phase Framework for Fair Insurance: Reviewing Strategies for Digital Price Differentiation
Rijk Mercuur
Lecture Hall 5
Predictions, Performativity, and Potential Outcomes: Communicative Rationality in Prediction-Allocation Problems
Sebastian Zezulka
Standardising Equality in the Algorithmic Society? A Research Agenda
Raphaële Xenidis and Miriam Fahimi
12:30 - 13:30
Lunch
Voorhof/Senaatszaal
13:30 - 14:30
Keynote by Johannes Himmelreich: "Algorithmic Fairness, Intersectionality, and Uncertainty"
Blauwe Zaal
In this talk I examine a fundamental dilemma facing intersectional algorithmic fairness in practice. As regulatory frameworks like the EU AI Act require intersectional approaches to fairness, we confront two interrelated problems that threaten meaningful implementation.
First, the problem of statistical uncertainty: When considering intersectional groups—as opposed to groups defined by a single (demographic) attribute—the number of groups increases exponentially while per-group sample sizes decrease dramatically. This creates statistical uncertainty that renders standard fairness metrics meaningless at best or morally problematic at worst. Second, the problem of ontological uncertainty: The question of which groups warrant fairness consideration remains theoretically underdetermined. The needed ontological theory would yield 𝒢, the set of all groups that are to be included in a fairness audit. The options for such a theory include that 𝒢 consist of (a) groups generated from all possible combinations of protected attributes, (b) some relevant subset of those combinations, or (c) some relevant groups, such as those with a history of disadvantage. This theoretical ambiguity enables “fairness gerrymandering”, that is, strategically defining 𝒢 to achieve desired outcomes.
These problems generate a dilemma. If we include all intersectional groups, statistical uncertainty makes meaningful fairness audits impossible. If we restrict attention to select groups, we face a tension between accommodating ontological uncertainty on the one hand and preventing fairness gerrymandering on the other. Rather than viewing this as grounds for abandoning intersectional fairness, I hypothesize that more work is needed to identify: (1) new approaches to fairness auditing that explicitly account for statistical uncertainty about small groups, and (2) clearer theoretical principles for determining relevant group ontologies.
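The statistical side of the dilemma can be made concrete with a minimal sketch (an illustration, not from the talk) assuming k binary protected attributes and a total audit sample of n = 10,000 individuals spread uniformly across groups; the group count 2**k doubles, and the expected per-group sample halves, with each added attribute:

    # Minimal sketch (illustrative assumptions): k binary protected
    # attributes, option (a) for the group set G, uniform sample of n.
    n = 10_000  # hypothetical total audit sample size

    for k in range(1, 9):
        num_groups = 2 ** k          # |G| under "all combinations"
        per_group = n / num_groups   # expected sample size per group
        print(f"k={k}: |G| = {num_groups:3d}, ~{per_group:7.1f} samples per group")

With eight binary attributes the expected per-group count already falls to roughly 39, and since real samples are rarely uniform, many intersectional groups will be far smaller still.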
14:30 - 15:30
Poster session 3
Voorhof/Senaatszaal
Optimizing Social Network Interventions via Hypergradient-Based Recommender System Design
Giulia De Pasquale
How is the Socio-Demographic Background of Researchers in AI & ML Related to the Values Reflected in their Research?
Paula Nauta
When Algorithms Play Favorites: Lookism in the Generation and Perception of Faces
Miriam Doh
Re-evaluating the role of refugee integration factors for building more equitable allocation algorithms
Clara Strasser Ceballos
Fairness-Regulated Dense Subgraph Discovery
Emmanouil Kariotakis
Situating and Understanding Machine Unlearning, Ethically
Iqra Aslam
Why am I Still Seeing This: Measuring the Effectiveness of Ad Controls and Explanations in AI-Mediated Ad Targeting Systems
Jane Castleman
Algorithmic Fairness over the Years - A Scoping Review of Research in Computer Science and Law
Anne Oloo & Daphne Lenders
From Implicit to Explicit Assumptions: Why There is No Fairness Without Bias-Awareness
Marco Favier
The Competing Interests Shaping Article 40
Tim Booker
Uncertainty as a Primary Barrier for Trustworthy AI Under the EU AI Act: German SME Perspectives
Simon Jarvers & Chiara Ullstein
The Disability Gap: Examining Representational Shortcomings in Bias Benchmarking
Arjita Mital
Unmasking Style Sensitivity: A Causal Analysis of Bias Evaluation Instability in Large Language Models
Jiaxu Zhao
Neurosymbolic Models for Trustworthy ADM
Leonhard Kestel
QueerGen: How LLMs Reflect Societal Norms on Gender and Sexuality in Sentence Completion Task
Mae Sosto
Advancing Equal Opportunity Fairness and Group Robustness through Group-Level Cost-Sensitive Deep Learning
Modar Sulaiman
Reliability of the Equal Opportunity Fairness Evaluation under Selective Labeling Bias
Niels Scholten
Towards Demographically Diverse Audio DeepFake Detection
Isabella Manolaki-Sempagios
15:30 - 17:00
Interactive session 3
Auditorium 13 and Auditorium 14
Auditorium 13
Beyond the Buzzwords: Co-Creating Accountability through Legal and Technological Perspectives on Algorithmic Transparency
Maria Lorena Flórez Rojas, Eveline van Beem, Matthia Sabatelli, H.M. Veluwenkamp
Auditorium 14
Designing a 'fair' human-in-the-loop
Isabella Banks, Jacqueline Kernahan
17:00 - 18:00
Closing & Town Hall
Blauwe Zaal