Shaping 21st Century AI – Controversies and Closure in Media, Policy and Research
- Automation and Datafication of Communication
- Completed
- Research project
- Duration: 2021 – 2024
- Project lead: Prof. Dr. Christian Katzenbach
- Team: Prof. Dr. Christian Katzenbach, Dr. Anna Jobin, Laura Liebig, Licina Güttel
- Partners: Medialab at Sciences Po, Paris; Centre for Interdisciplinary Methods (CIM) at the University of Warwick; NENIC Lab at INRS Montreal; and the Algorithmic Media Observatory at Concordia University
- Funding: Open Research Area (ORA) funding line from DFG, ANR, ESRC, SSHRC
“Artificial intelligence” (AI) is currently becoming established across society. Politicians, experts and start-up founders tell us that AI will fundamentally change how we live, communicate, work and travel. From autonomous vehicles and disease detection to energy and climate protection and the automatic filtering of misinformation and hate speech, AI is set to solve the major problems of our time. At the same time, however, it is becoming clear that increasing automation could deepen social and economic inequality, make decision-making processes even more opaque and ultimately call human autonomy into question. The further development of the technology is also contested within the scientific field itself, where different paths and approaches vie for importance and resources; after all, AI was not always synonymous with deep learning, as seems to be taken for granted today.
This combination of dynamic technological developments, fundamental controversies and massive investments became the starting point for the “Shaping AI” project. This multinational collaboration of partners in Germany, France, the UK and Canada conducted a comparative longitudinal study of how AI is being integrated into our societies as a socio-technical institution.
The project employed a range of methods, including historical, ethnographic and computational approaches as well as the Medialab’s cartographie des controverses (controversy mapping), to examine the discourse and developments around the ‘deep learning’ revolution in AI over the ten formative years from 2012 to 2021, in the four partner countries and in three key arenas. The media analysis examined AI debates in major news media, on niche websites and in social media conversations. The policy analysis mapped and analyzed existing policy initiatives, white papers and regulatory approaches in each country. The research analysis traced publications, co-citations and the emergence of sub-disciplines and sub-communities in the research field, drawing on quantitative methods as well as ethnographic participation in relevant workshops and conferences. In addition, the project investigated and initiated public engagement formats, including workshops that enabled stakeholders and members of the public to debate and negotiate AI pathways.
This research design enabled the project to understand how AI is currently being institutionalized while, at the same time, contributing to its further shaping. AI is a very vague concept and therefore remains socially and technically open to development: it could also be conceived of differently, or even be considered unacceptable in certain areas. In the international three-year project Shaping AI, we uncovered the creeping institutionalization of AI beyond the hype. On this basis, we as a society can better ensure that AI is developed for the common good.
The project built on longer-term work at the HIIG on the discursive and political construction of AI and the development of the digital society.
Publications and other academic contributions
- Bareis, J., & Katzenbach, C. (2021). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007
- Bory, P., Natale, S., & Katzenbach, C. (2024). Strong and weak AI narratives: An analytical framework. AI & SOCIETY. https://doi.org/10.1007/s00146-024-02087-8
- Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., Malaka, R., Pfadenhauer, M. P., Puschmann, C., & Schulz, W. (2023). ChatGPT, LaMDA, and the Hype Around Communicative AI: The Automation of Communication as a Field of Research in Media and Communication Studies. Human-Machine Communication, 6, 41–63. https://doi.org/10.30658/hmc.6.4
- Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., Malaka, R., Pfadenhauer, M., Puschmann, C., & Schulz, W. (2022). Von der Mensch-Maschine-Interaktion zur kommunikativen KI. Publizistik, 67(4), 449–474. https://doi.org/10.1007/s11616-022-00758-4
- Jobin, A., & Katzenbach, C. (2023). The becoming of AI: A critical perspective on the contingent formation of AI. In Handbook of Critical Studies of Artificial Intelligence (pp. 43–55). Edward Elgar Publishing. https://www.elgaronline.com/edcollchap/book/9781803928562/book-part-9781803928562-9.xml
- Jobin, A., Liebig, L., & Katzenbach, C. (2022, May 30). Communicating AI Policy: How Technology Comes to Matter in Media. 72nd Annual ICA Conference, Paris.
- Katzenbach, C. (2021). “AI will fix this” – The Technical, Discursive, and Political Turn to AI in Governing Communication. Big Data & Society, 8(2). https://doi.org/10.1177/20539517211046182
- Katzenbach, C. (2022a). Der „Algorithmic turn“ in der Plattform-Governance. KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie, 74(1), 283–305. https://doi.org/10.1007/s11577-022-00837-4
- Katzenbach, C. (2022b, June 16). “KI-Widersprüchlichkeiten in Policy und Wissenschaft” [Invited lecture]. Workshop “Kommunikation im digitalen Wandel”, Zentrum Informationsarbeit der Bundeswehr in Kooperation mit der Universität Trier, Berlin.
- Katzenbach, C. (2023, June 14). Mehr als Technologie: Die kommunikative und politische Konstruktion von KI. Konferenz der Plattform Lernende Systeme 2023, acatech – Akademie der Wissenschaften, Berlin. https://www.plattform-lernende-systeme.de/plattformkonferenz-juni-2023.html
- Katzenbach, C., & Pentzold, C. (2024). Automating communication in the digital society: Editorial to the special issue. New Media & Society, 26(9), 4925–4937. https://doi.org/10.1177/14614448241265655
- Katzenbach, C., Pentzold, C., & Otero, P. V. (2024). Smoothing Out Smart Tech’s Rough Edges: Imperfect Automation and the Human Fix. Human-Machine Communication, 7(1). https://doi.org/10.30658/hmc.7.2
- Katzenbach, C., Richter, V., Jobin, A., & Liebig, L. (2022, May 12). Shaping AI – Imaginaries and Controversies of AI in Media and Policy. Artificial Intelligence and the Human: Cross-Cultural Perspectives on Science and Fiction, Japanese-German Center Berlin (JDZB).
- Liebig, L. (2023, May 20). Agenda-Setting und Discursive Power im deutschen KI-Diskurs [68. Jahrestagung der Deutschen Gesellschaft für Publizistik- und Kommunikationswissenschaft]. Aneignung, Folgen und Wirkungen automatisierter Medienkommunikation II, Universität Bremen.
- Liebig, L., Güttel, L., Jobin, A., & Katzenbach, C. (2022). Subnational AI policy: Shaping AI in a multi-level governance system. AI & Society. https://doi.org/10.1007/s00146-022-01561-5
- Liebig, L., Güttel, L., Jobin, A., & Katzenbach, C. (2024). Situating AI policy: Controversies Covered and the Normalisation of AI. Big Data & Society. https://doi.org/10.1177/20539517241299725
- Liebig, L., Jobin, A., Guettel, L., & Katzenbach, C. (2022, July). AI Federalism: How Subnational Policy Tackles a ‘Global’ Technology. IAMCR – Communication Research in the Era of Neo-Globalisation: Reorientations, Challenges and Changing Contexts, Beijing.
- Pentzold, C., & Katzenbach, C. (2022, October 19). Smoothing out smart tech’s rough edges: Imperfect automation and the human fix. ECREA 2022 9th European Communication Conference, Aarhus.
- Richter, V., Katzenbach, C., Dergacheva, D., Kuznetsova, V., Brause, S. R., Schäfer, M. S., & Zeng, J. (2023, May 22). Who is Shaping AI Debates and Trajectories? Stakeholders and their Imaginaries of AI in US- and German Social and News Media. (Un)stable Diffusions Symposium, Montreal.
