The Critical AI Seminar series continues in 2025 and 2026 with another four lectures that critically address Artificial Intelligence (AI) from various perspectives – across different contexts of application and through different lenses of critique. With these lectures we hope to once again bring together scholars from around the world in engaging discussions and further contribute to Critical AI Studies as a continuing ‘field in formation’ (Raley and Rhee, 2023).
The seminars are online and open to everyone. For each seminar, one or two prominent invited speakers give a talk that engages theoretically or empirically with AI.
The seminar series is organised by Anna Schjøtt Hansen, Dieuwertje Luitse and Tobias Blanke, who are part of the Critical Data & AI Research Group at the University of Amsterdam. It is supported by the University of Amsterdam’s Research Priority area Human(e) AI, the ERC Advanced Grant-funded Deep Culture Project, and the Amsterdam School for Cultural Analysis, and is hosted by Creative Amsterdam (CREATE).
Upcoming seminars
Registration here
One of the strengths of Critical AI studies has been the rapid development of methods for addressing the different social and political objects that make up AI. We now have outstanding studies of datasets, material infrastructures, ecologies, histories, and the political economy of platforms. Rather than naively “reading” model outputs, these studies account for their conditions of possibility. They cut through words—ideologies ranging from hype to doom—to grasp the interplay of interests, materiality, and power that constitutes AI. In this talk, we will reflect on the characteristics of this literature, its distinctive tropes, style, and conventions. We then propose critical reading strategies for scholars in the interpretive social sciences and humanities, who, in their own way, face the problem of reading texts for which they are not the intended audience.
Louise Amoore is Professor of Political Geography and Director of the Leverhulme Centre for Algorithmic Life, Durham University. Her work addresses the politics of machine learning algorithms and the epistemologies of contemporary AI. She is the author of Cloud Ethics: Algorithms and the Attributes of Ourselves and Others (2020) and The Politics of Possibility (2013), both with Duke University Press. Louise is a Fellow of the British Academy.
Alexander Campolo is a postdoctoral researcher on the “Algorithmic Societies” project in the Department of Geography at Durham University. His work draws from the history of science and technology and social theory to explore the epistemological and political implications of machine learning. He received his PhD from New York University and has previously worked as a postdoctoral fellow at the Institute on the Formation of Knowledge at the University of Chicago and the AI Now Institute.
January 13, 5:30-7 PM (CET): Invited talk by Fabian Offert on ‘Vector Media’
Registration here
This talk presents a new history and theory of the vector space in contemporary artificial intelligence systems. I will argue that the inevitable bias of such systems lies not only in what they represent, but in the logic of representation itself. Their internal ideologies are often not directly visible in their generated outputs or even their training data, the focus of almost all existing work. Instead, they emerge from how the model organizes and transforms information within itself. While previous media technologies created new formats or imitated existing ones, deep neural networks instead seek to dissolve prior media into a universal space of commensurability: the vector space. Cultural objects, once specific to a medium, are rendered fungible; commodities in a new neural economy, expressed only in terms of their neural exchange value.
Fabian Offert is Assistant Professor for the History and Theory of the Digital Humanities and Director of the Center for the Humanities and Machine Learning (HUML) at the University of California, Santa Barbara. His research focuses on the epistemology, aesthetics, and politics of artificial intelligence. His most recent book project, Vector Media (Meson Press/University of Minnesota Press), writes a new historical epistemology of artificial intelligence, asking how machine learning models represent culture and what is at stake when they do. Before joining the faculty at UCSB, Fabian was Postdoctoral Researcher in the German Research Foundation’s Priority Program ‘The Digital Image’ and Assistant Curator at ZKM | Center for Art and Media Karlsruhe.
Registration here
This talk develops the concept of “evaluation ecologies” to theorize how machine learning (ML) systems are assessed in public sector contexts. Through a conceptual analysis supported by case studies of ML deployments in Danish higher secondary education and Dutch psychiatric clinics, we demonstrate how evaluation practices extend beyond technical assessment to encompass complex negotiations of power, expertise, and accountability. Drawing on theoretical perspectives from Science and Technology Studies (STS) and building upon Halpern and Mitchell’s (2023) work on experimental governance and Amoore’s (2020) analysis of cloud ethics, we advance “evaluation ecologies” as a framework for understanding how ML assessments unfold through multiple, often contradictory registers.
Helene Friis Ratner is a full professor of organization studies and technology at the Technical University of Denmark, DTU Management. Combining science and technology studies with organization studies, she researches how digital data infrastructures, data visualizations, and AI applications transform welfare organizations. Her research is published in journals such as Big Data & Society, AI & Society, and Organization Studies. She is currently co-PI of the Algorithms, Data and Democracy project (VELUX Foundations) and REPAI – Responsible AI for Value Creation (Grundfos), as well as chief scientist in Denmark’s National Centre for AI in Society (CAISA).
Nanna Bonde Thylstrup is an Associate Professor on the Promotion Programme in Modern and Digital Culture. She is PI of Data Loss: The Politics of Disappearance, Destruction and Dispossession in Digital Societies (DALOSS), funded by the European Research Council. The project is premised on the idea that datafication is inherently conditioned by loss, and that this loss can also be generative. Rather than framing loss retroactively as something that can be ‘fixed, patched or recovered’, DALOSS investigates loss as actively constituted through social, political, and aesthetic relations.
May 20, 12-1:30 PM (CEST): Invited talk by Thao Phan on ‘Testing-in-the-wild’
Registration here
This presentation analyses the phenomenon of the AI testbed and practices of “testing-in-the-wild.” It combines historical and sociological approaches to understand how places like Australia have come to be treated as ideal test sites for new AI systems, using commercial drone delivery company Wing Aviation as a case study. It connects the figuration of Australia as a contemporary testbed with histories of the nation as a colonial experiment. I argue that this historical frame has been consistently deployed to justify the treatment of lands and peoples as experimental subjects across a range of domains: techniques of penal management in the nineteenth century, military weapons in the early twentieth century, and AI-driven systems like drone delivery in the twenty-first century. By connecting this history to the present moment, I show how Australia has been variously treated as a test site and Australians as test subjects based on changing imaginaries of the nation and its people, from proxies for whiteness and Empire in the colonial period, to multiculturalism and ethnic diversity in the contemporary era.
Thao Phan is a feminist science and technology studies (STS) researcher who specialises in the study of gender and race in algorithmic culture. She is a Lecturer in Sociology (STS) at the Research School for Social Sciences at the Australian National University (ANU). Thao has published on topics including whiteness and the aesthetics of AI, big-data-driven techniques of racial classification, and the commercial capture of AI ethics research. She is an elected Council Member of the Society for Social Studies of Science (4S), a member of the Australian Academy of Science’s National Committee for the History and Philosophy of Science, and is the co-founder and current President of AusSTS—Australia’s largest network of STS scholars.