For the Zoom link, please send an email to firstname.lastname@example.org.
In the past few years, many governments and supranational organisations have published strategy papers presenting their visions of the future development, application and regulation of AI. Nation states articulate the opportunity (and urgency) to shape the future of AI according to their political cultures, values, traditions and national pathways. Moreover, they portray themselves as participants in a global AI race, competing over economic and geopolitical power. With its National AI Strategy, launched in 2018, the German Federal Government presented its vision under the title "AI made in Germany".
Most of the scholarship on national AI strategies has approached the subject by conducting (comparative) assessments, evaluating them from an ethical perspective or against criteria of competitiveness, and providing policy recommendations accordingly. However, AI strategies do not merely define policy agendas and determine rules and measures to advance or restrict the integration of AI into society. They also articulate future visions of these societies based on the promises and fears associated with AI. They connect the appeal of national S&T projects to the purpose of a state that provides for the flourishing of the nation's innovative abilities and the (economic) well-being of its citizens. By articulating visions of how technoscientific promises can serve national interests and the common good, they not only legitimize governmental measures that aim to fulfill these promises but also (re)imagine and (re)perform liberal statehood in a technological society.
To unpack this amalgamation of technoscientific promises, techno-fears, imagined futures and concepts of statehood, this talk will present an analysis of key aspects of the vision of an "AI made in Germany", such as human-centric AI, technological sovereignty, ethics by design and criticality.
Jens Hälterlein is coordinator of the interdisciplinary research project "Meaningful Human Control. Autonomous Weapon Systems between Regulation and Reflection". His current research addresses the relation between national imaginaries of security and statehood on the one hand and the R&D of AI-based technologies on the other. He has worked in numerous projects focusing on the societal implications of the use of technologies in contexts such as policing and crisis response. On a conceptual level, he has been particularly interested in the performativity of security technologies and the ways in which they contribute to processes of social control, social sorting, and subjectivation. For more information: https://zenmem.academia.edu/JensHälterlein or follow him @JensHalterlein.
Zazie van Dorp works for PEPT as a student assistant. She studies Philosophy and Law at the University of Amsterdam, where she focuses on the ethics and regulation of technology. Within these fields, she is especially interested in questions of agency, responsibility and solidarity raised by the introduction of health and lifestyle apps into society.