We are thrilled to announce the keynote speakers of EWAF'25!
Juliane Jarke, Professor of Digital Societies at the University of Graz
Research on algorithmic fairness promotes a view of algorithmic systems as black boxes that need to be “opened” and “unpacked” to make their inner workings accessible. Understanding the black box as a mode of inquiry and knowledge-making practice (rather than a thing), I will explore what exactly researchers and practitioners aim to unpack when they examine algorithmic black boxes, what they consider to be constitutive elements of these black boxes, and what is othered or perceived as “monstrous”. Following scholarship in the social studies of science and technology (STS), the notion of the monster captures what is considered irrelevant to the constitution and inner workings of an algorithmic black box: it is what is excluded and escapes analysis. I will argue, however, that attention to the outer limits of algorithmic black boxes allows us to explore how social actors, temporalities, places, imaginaries, practices, and values matter for knowledge making about algorithmic fairness. In my talk, I will review three modes of assembling black boxes of machine learning (ML)-based systems: (1) the black box of ML data, (2) the black box of ML algorithms and trained models, and (3) the black box of ML-based systems in practice. In reassembling these three distinct ML black boxes, I demonstrate how generative engagements with algorithmic black boxes and their monsters can be for the critical inquiry of algorithmic fairness.
Juliane Jarke is Professor of Digital Societies at the University of Graz. Her research attends to the transformative power of algorithmic systems in the public sector, in education, and for ageing populations. It sits at the intersection of critical data & algorithm studies, participatory (design) research, and feminist STS.
Juliane received her PhD from Lancaster University and has a background in Computer Science, Philosophy, and STS. She has recently co-edited a special issue on Care-ful Data Studies (Information, Communication and Society). Her latest co-edited books include Algorithmic Regimes: Methods, Interactions and Politics (Amsterdam University Press) and Dialogues in Data Power: Shifting Response-abilities in a Datafied World (Bristol University Press).
Juliane is also co-organiser of the Data Power Conference series and Co-PI in the research unit Communicative AI: The Automation of Societal Communication. More at www.sociodigitalfutures.info.
Julia Stoyanovich, Associate Professor and Director of R/AI at New York University
In this talk, I will examine the widespread use of AI systems in hiring and employment, highlighting where these tools can be helpful and where they raise concerns around discrimination and validity. I will focus on three lines of work:
First, I will provide an overview of fairness in ranking, offering a perspective that connects formal definitions and algorithmic techniques to the value frameworks that motivate fairness interventions, and to the technical choices that shape the behavior and outcomes of these methods.
Second, I will discuss algorithmic recourse, which aims to help individuals reverse negative decisions—such as being screened out by an automated hiring system. I will highlight recent work exploring how resource constraints (i.e., a limited number of favorable outcomes) and competition influence the reliability and fairness of recourse over time.
Third, I will present findings from an audit of two commercial systems—Humantic AI and Crystal—which claim to infer job-seeker personality traits from resumes and social media data. I will describe our audit methodology and show that both systems exhibit instability in key measurement facets, rendering them unsuitable as valid instruments for pre-hire assessment.
I will conclude with a discussion of emerging legal and regulatory developments in the U.S. aimed at curbing the unaccountable use of AI in hiring, and reflect on what it would take to ensure these systems are safe, fair, and socially sustainable in this critical domain.
Julia is an Institute Associate Professor of Computer Science and Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI. Julia’s goal is to make “Responsible AI” synonymous with “AI”. She works towards this goal by engaging in academic research, education and technology policy, and by speaking about the benefits and harms of AI to practitioners and members of the public.
Julia’s research interests include AI ethics and legal compliance, and data management and AI systems. In addition to academic publications, she has written for the New York Times, the Wall Street Journal, and Le Monde. Julia has been teaching courses on responsible data science and AI to students, practitioners and the general public. She is a co-author of “Data, Responsibly”, an award-winning comic book series for data science enthusiasts, and “We are AI”, a comic book series for the general audience.
Julia is engaged in technology policy and regulation in the US and internationally, having served on the New York City Automated Decision Systems Task Force, by mayoral appointment, among other roles.
Julia received her M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst. She is a recipient of the NSF CAREER Award and a Senior Member of the ACM.
Raphaële Xenidis, Assistant Professor at Sciences Po Law School
This talk examines the rise of the digital welfare state in Europe and its implications for social justice, focusing particularly on the discriminatory mechanisms of algorithmic governance. While presented as tools for modernizing public administration and optimizing resource allocation, automated decision-making systems used in welfare programs often institutionalize exclusion, amplify existing inequalities, and erode fundamental rights. Drawing on cases from Europe, the talk highlights how semi-automated fraud detection systems operate opaquely and ineffectively, disproportionately targeting marginalized populations under the guise of efficiency and fraud prevention. Central to the argument is the application of an intersectional lens — rooted in Black feminist thought — which reveals how these systems perpetuate complex forms of discrimination along axes such as gender, race, disability, and economic status. The analysis critiques existing legal frameworks for failing to address systemic and intersectional forms of algorithmic discrimination and calls for a rethinking of legal interventions.
Raphaële Xenidis is assistant professor in European law at Sciences Po Law School in Paris, France. Her current research focuses on algorithmic bias and discrimination in automated decision-making systems in the context of European equality law.
Raphaële holds a PhD in law from the European University Institute and is an Honorary Fellow at the University of Edinburgh Law School and a Global Fellow at iCourts, University of Copenhagen.
Johannes Himmelreich, Assistant Professor at Syracuse University
In this talk, I examine a fundamental dilemma facing intersectional algorithmic fairness in practice. As regulatory frameworks like the EU AI Act require intersectional approaches to fairness, we confront two interrelated problems that threaten meaningful implementation.
First, the problem of statistical uncertainty: when considering intersectional groups, as opposed to groups defined by a single (demographic) attribute, the number of groups increases exponentially while per-group sample sizes decrease dramatically. This creates statistical uncertainty that renders standard fairness metrics meaningless at best and morally problematic at worst. Second, the problem of ontological uncertainty: the question of which groups warrant fairness consideration remains theoretically underdetermined. The needed ontological theory would yield 𝒢, the set of all groups to be included in a fairness audit. Candidate theories hold that 𝒢 consists of (a) the groups generated from all possible combinations of protected attributes, (b) some relevant combination of those attributes, or (c) some relevant groups, such as those with a history of disadvantage. This theoretical ambiguity enables “fairness gerrymandering”, that is, strategically defining 𝒢 to achieve desired outcomes.
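To make the statistical side of this dilemma concrete, here is a minimal Python sketch, added for illustration rather than taken from the talk: the protected attributes, their value counts, the audit sample size, and the selection rate are all assumptions. It shows how quickly the number of intersectional groups grows and how wide the confidence interval around a group-level selection rate becomes as group sizes shrink.

```python
from math import prod, sqrt

# Hypothetical protected attributes and their number of values (assumed for illustration).
attributes = {
    "gender": 3,
    "race_ethnicity": 5,
    "disability": 2,
    "age_band": 4,
}

n_groups = prod(attributes.values())   # 3 * 5 * 2 * 4 = 120 intersectional groups
n_samples = 10_000                     # assumed audit sample size
avg_per_group = n_samples / n_groups   # ~83 people per group, if evenly spread

# Width of a 95% confidence interval around an estimated selection rate p
# for groups of shrinking size: with small n, group-level fairness metrics
# carry error bars too wide to support meaningful comparisons.
p = 0.2
for n in (int(avg_per_group), 30, 10):
    half_width = 1.96 * sqrt(p * (1 - p) / n)
    print(f"group size {n:>3}: selection rate {p:.2f} +/- {half_width:.2f}")

print(f"{len(attributes)} attributes -> {n_groups} groups, "
      f"~{avg_per_group:.0f} samples per group on average")
```

Even in this toy setting, four modest attributes already yield 120 group-level rates to estimate, and for a group of 10 people the error bar on a 0.2 selection rate is roughly +/- 0.25, larger than the rate itself and than most disparities an audit would aim to detect.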
These problems generate a dilemma. If we include all intersectional groups, statistical uncertainty makes meaningful fairness audits impossible. If we restrict attention to select groups, we face a tension between accommodating ontological uncertainty on the one hand and preventing fairness gerrymandering on the other. Rather than viewing this as grounds for abandoning intersectional fairness, I hypothesize that more work is needed to identify: (1) new approaches to fairness auditing that explicitly account for statistical uncertainty about small groups, and (2) clearer theoretical principles for determining relevant group ontologies.
Johannes Himmelreich is a philosopher who teaches and works in a policy school. He is an Assistant Professor in Public Administration and International Affairs in the Maxwell School at Syracuse University. He works in the areas of political philosophy, applied ethics, and philosophy of science. Currently, he researches the ethical quandaries that data scientists face, how the government should use AI, and how to check for algorithmic fairness under uncertainty.
He has published papers on “Responsibility for Killer Robots,” the trolley problem and the ethics of self-driving cars, as well as on the role of embodiment in virtual reality.
He holds a PhD in Philosophy from the London School of Economics (LSE). Prior to joining Syracuse, he was a post-doctoral fellow at Humboldt University in Berlin and at the McCoy Family Center for Ethics in Society at Stanford University. During his time in Silicon Valley, he consulted on tech ethics for Fortune 500 companies and taught ethics at Apple.