Panel: Algorithmic citizen profiling and risk-scoring in the welfare state: civil society perspectives and litigation
Moderators: Doris Allhutter, Karolina Sztandar-Sztanderska
Panelists:
Hajira Maryam (Amnesty International)
Soizic Penicaud (Observatory of Public Algorithms - Odap.fr)
Tijmen Wisman (Stichting Platform Bescherming Burgerrechten, Vrije Universiteit Amsterdam)
Mateusz Wrotny (Panoptykon Foundation)
The introduction of algorithmic citizen profiling and risk-scoring for purposes such as welfare fraud detection and the allocation of social services has become widespread across Europe. While governments introduce laws that, for instance, oblige social security bodies to prevent social fraud, civil society organizations, data journalists and researchers raise concerns that biased, non-transparent automated risk assessment and profiling undermine citizens' privacy and social rights. NGOs demand insight into algorithmic practices used to segment access to welfare or to flag citizens in vulnerable life situations as potential fraudsters. In the Netherlands and France, NGOs and coalitions of data rights organizations, organizations representing disadvantaged groups, and unions have taken legal action to defend equal access to social security and citizens' right to social protection.
This panel brings together representatives from different organizations, interest groups and journalism to discuss how algorithmic systems affect data protection and social rights, and to deliberate on strategies that have proved successful in gaining transparency and organizing resistance. After the panel discussion, the audience is invited to join the exchange on cases and experiences from different national contexts. The aim of this interactive session is to facilitate a dialogue between the algorithmic fairness community, civil society groups and data journalists, and to explore possibilities for organizing within and across national contexts.
Redefining AI Fairness Through an Indigenous Lens
Myra Colis
What does it mean for an AI system to be “fair”, and who gets to decide? What are we really looking for when we talk about fairness, and who is included in that “we”? This interactive session invites participants to critically examine dominant fairness frameworks in AI, which are often shaped by individualistic, technocratic, and context-detached values. In doing so, we ask: Whose fairness are we coding for? To expand the conversation, we turn to Indigenous ways of knowing, which offer alternative ways of understanding fairness: not as abstract metrics, but as part of a lived, collective responsibility. How might Indigenous concepts of reciprocity, responsibility, and collective well-being help us address the harms caused by AI when it is misused or poorly governed? By engaging with real-world cases, participants will explore these questions and co-create insights that not only broaden the discourse on fairness in AI, but also push for systems that are more culturally grounded, socially accountable, and ethically sound. In essence, this session invites participants to redefine algorithmic fairness beyond metrics.
Write Your Own Standard (WYOS): A mock standardization process on human oversight (AI Act, Article 14)
Laura Muntjewerf, Willy Tadema
How are harmonized standards of the AI Act developed? How can you, as a researcher, contribute to them with your knowledge? This interactive workshop gives a hands-on introduction to the process of standardization. We simulate the drafting of a harmonized standard based on Article 14 of the EU AI Act (Human Oversight). The participants will explore how to turn legal text into testable technical specifications, while upholding the principles of consensus-building and considering practical implementability.
Participants will leave with a clearer understanding of the standardization process in the context of the AI Act, its significance in the AI regulatory landscape, the factors that make AI standardization complex and time-consuming, and how academic and professional expertise can contribute to the development of harmonized standards.
Designing a ‘fair’ human-in-the-loop
Isabella Banks, Jacqueline Kernahan
This interactive workshop is an opportunity to reflect on what constitutes a ‘fair’ and ‘effective’ automated decision support system (ADSS) – and the role of the human-in-the-loop (HITL) – from different stakeholder perspectives. Through a comparative analysis of case studies in the medical and welfare domains, participants will be invited to think together about what it would mean for the users (HITLs) of two systems to behave ethically and optimally. The session will conclude with a discussion about what HITLs can (and cannot) be expected to achieve in practice, and whose knowledge and experiences are (and should be) reflected in the way ADSS are designed, used, and overseen.
Beyond the Buzzwords: Co-Creating Accountability through Legal and Technological Perspectives on Algorithmic Transparency
Maria Lorena Flórez Rojas
Buzzwords around AI—like transparency, accountability, and fairness—surround both research and practice, often treated as isolated principles. Yet in reality, these concepts are deeply interconnected, and separating them risks weakening their impact. This interactive session invites participants to move past surface-level discussions and critically engage with how transparency and accountability operate in practice—particularly in public sector contexts where AI is deployed through systems that are procured rather than built (e.g. SaaS models). Through a combination of lightning talks and a hands-on game, The Accountability Puzzle, participants will collaborate in small groups to reconstruct accountability chains across the AI lifecycle. Scenarios will simulate real-life constraints such as documentation gaps, proprietary barriers, and legal ambiguity.
Fairness for whom? On the need for co-creative pathways in AI policy and research
Ilina Georgieva, Paul Verhagen, Courtney Ford
By advocating for co-creation pathways, this panel discussion draws attention to the predominant practices of observational fairness in AI policy and research, which answer the question of ‘Fairness for whom?’ by evaluating AI impact from a distance. Our session provides evidence on the necessity for meaningful community participation by offering insights from two diverse use cases. The use cases point to some common challenges of participatory/collaborative work in and for AI, and offer guidelines on how to overcome them. They further show the need for systemic change in how we think about individual and community self-determination, and the policy and research tools that cater to them in the context of AI. The session addresses gaps in the translation of participatory theories to AI research, policy and practice, and aims to foster transdisciplinary dialogue between the different stakeholders in those communities.
"My documents, check them out" – a serious game about migration infrastructures
Lorenzo Olivieri
"My documents, check them out" is a collaborative, role-playing game simulating the bureaucratic mechanisms shaping migration control. Its main goal is to interest players in the problematization and re-design of the semantic categories and values used in databases for migration management. During the game, players impersonate fictional characters and they must create, from scratch, an application form containing information about their characters and their migration journey. Applications are then uploaded into a software which will take a decision about them. My documents, check them out was developed in the context of the Processing Citizenship ERC project (#714463) and was tested with asylum seekers, lawyers, students, civil servants.
From the other side – be the regulator and enforce fairness in algorithms
Uri Shimron, Alany Reyes Pichardo, Marie Beth van Egmond, Christiaan Duijst
Analysing an algorithm as a regulator isn’t quite as straightforward as it seems. To show this, the Dutch DPA (Data Protection Authority) will present a hypothetical case of an algorithm that might be unfair. But is it? Is it acceptable for the company to use this algorithm, or should it be fined? What do you think? This will be an interactive session where academia and regulation intersect. Be aware that audience participation is mandatory!
Data Access under the Digital Services Act
Emilie Sundorph, Joao Vinagre, Sophia Dietrich
The Digital Services Act introduces a set of common rules, directly applicable in all EU Member States, under which online platforms and search engines operate in the EU. Given their size and potential societal impact, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), i.e. those with more than 45 million active recipients (roughly 10% of the EU population), have additional obligations. These relate to the systemic risks identified in the DSA, which fall into four categories: 1) the dissemination of illegal content; 2) risks to the exercise of fundamental rights (as defined in the EU Charter of Fundamental Rights); 3) negative effects on civic discourse and electoral processes, as well as public security; and 4) negative effects related to gender-based violence, the protection of minors and public health, and negative effects on the physical or mental well-being of individuals.
Given the complexity of such issues, and the lack of consolidated scientific knowledge about the impacts of VLOPs and VLOSEs on society, the EU will rely on researchers to detect, identify and understand such risks. This opens up a wide range of research questions that, owing to the unavailability of data, have so far been covered only superficially. However, obtaining access to platform data requires researchers to (1) clearly connect their research questions to relevant systemic risks and (2) satisfy a strict set of legal and technical requirements. This tutorial addresses both of these requirements.
During the session we will introduce the DSA; explain how VLOP and VLOSE data can be obtained through DSA transparency mechanisms, including an introduction to the concept of ‘systemic risks’; and give participants the opportunity to brainstorm ideas for how data access may benefit their research, now or in the future.
Practices of resistance against algorithmic risk scoring and automated decision-making in welfare
Doris Allhutter, Karolina Sztandar-Sztanderska
[Note: this is a closed, by-invitation-only workshop]
This workshop centres on the experiences and strategies of civil rights organizations, advocacy groups, data journalists and researchers contesting algorithmic risk scoring and automated decision-making on grounds of privacy violations and non-transparent, biased outcomes. Efforts to reconfigure the welfare state in times of increasing austerity involve different normative claims, depending on whose interests the actors have in mind. This is also reflected in the values and norms that are mobilised in the development, deployment, use and contestation of 'intelligent' systems and data-driven decision-making in the administration of welfare. Governments, public agencies, caseworkers, system developers, advocacy groups, civil rights organisations and data journalists argue for goals that all seem to serve citizens: the promotion of efficient administration, the effective and/or lawful delivery of welfare, or the equitable realisation of social rights.
We invite civil rights advocates, representatives of advocacy groups, data journalists and researchers to exchange views on the multi-layered values, expectations and normative assumptions that motivate their resistance and their claim to co-shape digital welfare provision. Using the deconstructive method of mind scripting, participants will collectively explore practices of resistance and the ambiguities of normative claims.