
Re-evaluating Governing Principles: Navigating the Integration of Generative AI in Everyday Communication Systems
- Automation and Datafication of Communication
- Active
- Research project
- Duration: 2024–2027
- Project lead: Dr. Rebecca Scharlach
- Funded by: Zentrale Forschungsförderung (ZF), Universität Bremen
Elevator Pitch
This postdoctoral project examines the socio-technical impact of generative AI integration into communication ecosystems. Through policy analysis, expert interviews, and user surveys, I will reassess how platform values evolve amid ongoing technological and regulatory changes.
Project Overview
This research investigates the principles shaping generative AI regulation and how AI companies adapt to technological and policy shifts. By analyzing policy developments, interviewing experts, and conducting focus groups, I aim to explore user perspectives on platform values and their evolving understanding of generative AI. The project seeks to contribute to the broader discourse on the interplay between emerging technologies, platform governance, and shifts in societal values.
The rapid proliferation of generative AI tools—such as chatbots and image-generation software—has positioned major technology companies as key actors in shaping algorithmic society. Amid high-profile controversies and regulatory shifts, particularly in the European Union, AI companies play a crucial role in determining the future landscape of digital communication. While AI-driven recommendations can enhance user engagement and content moderation (Gillespie, 2018), concerns persist regarding discrimination (Sandvig et al., 2014), disinformation (Carlson, 2020; Mourão et al., 2019), harm to vulnerable groups (DeCook et al., 2022; Gillett et al., 2022), and addictive behaviors (Bhargava & Velasquez, 2021). Research on generative AI is still emerging, yet its influence on communication and culture is already raising critical questions (e.g., Hepp et al., 2023). Large Language Models (LLMs) like GPT-4 (OpenAI, 2023) and audiovisual content generators such as Midjourney and DALL-E (Ramesh et al., 2021) have made generative AI widely accessible, but their impact on content production and platform governance remains uncertain.
Empirical studies have shown that major tech platforms strategically deploy values to direct content production (Hallinan, 2023), justify moderation practices (Gillespie, 2018), redefine social concepts like hate speech (Siapera & Viejo-Otero, 2021), and shift regulatory responsibilities (Scharlach et al., 2023; Scharlach & Hallinan, 2023). Platform values, which I define as the underlying principles expressed through social media governance (Scharlach, 2024), are often leveraged to mitigate conflicts and regulatory scrutiny. However, with rapid technological advancements and new regulatory measures (e.g., Katzenbach, 2021), platform governance is at a pivotal moment. The integration of generative AI introduces new challenges, including regulatory compliance, ethical considerations, and the broader implications of AI-driven content moderation (Floridi, 2023; Helberger, 2023; Katzenbach, 2021; Hendrix, 2024).
Recent developments highlight the evolving regulatory landscape. Platforms such as TikTok, YouTube, and Meta have introduced requirements for creators to label AI-generated content as “synthetic content” (Whitney, 2024). Additionally, efforts are underway to establish industry standards and computational methods for detecting AI-generated media (Clegg, 2024; TikTok, 2019). However, these measures are in their infancy and raise concerns about potential shortcomings and unintended consequences. Given these ongoing transformations, a critical examination of the role of values in regulating and developing generative AI tools is essential.
Research Questions
This project seeks to address the following key questions:
- How do major tech companies adapt their value systems in response to technological and regulatory shifts?
- What principles currently guide—and should guide—the integration and regulation of generative AI in communication ecosystems?
- How can platform values inform current and future AI governance practices?
To answer these questions, I will examine three main realms: policies concerning the integration of generative AI tools, industry and policy experts, and everyday users. Investigating these realms will allow me to compare the opinions and expectations of different stakeholders and to identify key priorities for the future of tech regulation.
Dissemination of Findings
Preliminary results will be presented at the AlgoSoc conference in Amsterdam, NL, and at the ICA conference in Denver, USA.
References:
Bhargava, V. R., & Velasquez, M. (2021). Ethics of the Attention Economy: The Problem of Social Media Addiction. Business Ethics Quarterly, 31(3), 321–359. https://doi.org/10.1017/beq.2020.32
Burgess, J. (2021). Platform Studies. In S. Cunningham & D. Craig (Eds.), Creator Culture: An Introduction to Global Social Media (pp. 21–38). New York University Press. https://doi.org/10.18574/nyu/9781479890118.003.0005
Carlson, M. (2020). Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Information, Communication & Society, 23(3), 374–388. https://doi.org/10.1080/1369118X.2018.1505934
Clegg, N. (2024, February 6). Labeling AI-Generated Images on Facebook, Instagram and Threads. Meta. https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/
DeCook, J. R., Cotter, K., Kanthawala, S., & Foyle, K. (2022). Safe from “harm”: The governance of violence by platforms. Policy & Internet, 14(1), 63–78. https://doi.org/10.1002/poi3.290
Floridi, L. (2023). The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press.
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.
Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S. T., Sinnreich, A., & Myers West, S. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512
Gillett, R., Stardust, Z., & Burgess, J. (2022). Safety for Whom? Investigating How Platforms Frame and Perform Safety and Harm Interventions. Social Media + Society, 8(4), 20563051221144315. https://doi.org/10.1177/20563051221144315
Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914
Hallinan, B. (2023). No judgment: Value optimization and the reinvention of reviewing on YouTube. Journal of Computer-Mediated Communication, 28(5), zmad034. https://doi.org/10.1093/jcmc/zmad034
Hallinan, B., & Brubaker, J. R. (2021). Living With Everyday Evaluations on Social Media Platforms. International Journal of Communication, 15.
Hallinan, B., Scharlach, R., & Shifman, L. (2021). Beyond Neutrality: Conceptualizing Platform Values. Communication Theory.
Helberger, N. (2023, July 18). Generative AI in media & journalism: Think big, but read the small print first. Medium. https://generative-ai-newsroom.com/generative-ai-in-media-journalism-think-big-but-read-the-small-print-first-375f2ecb1256
Hendrix, J. (2024, January 28). How to assess AI governance tools. Tech Policy Press. https://techpolicy.press/how-to-assess-ai-governance-tools
Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., Malaka, R., Pfadenhauer, M., Puschmann, C., & Schulz, W. (2023). ChatGPT, LaMDA, and the Hype Around Communicative AI: The Automation of Communication as a Field of Research in Media and Communication Studies. Human-Machine Communication, 6(1). https://doi.org/10.30658/hmc.6.4
Katzenbach, C. (2021). “AI will fix this” – The Technical, Discursive, and Political Turn to AI in Governing Communication. Big Data & Society, 8(2), 205395172110461. https://doi.org/10.1177/20539517211046182
Mourão, R. R., & Robertson, C. T. (2019). Fake News as Discursive Integration: An Analysis of Sites That Publish False, Misleading, Hyperpartisan and Sensational Information. Journalism Studies, 1–19. https://doi.org/10.1080/1461670X.2019.1566871
Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms. Data and Discrimination: Converting Critical Concerns into Productive Inquiry. International Communication Association, Seattle, WA.
Scharlach, R., & Hallinan, B. (2023). The value affordances of social media engagement features. Journal of Computer-Mediated Communication, 28(6). https://doi.org/10.1093/jcmc/zmad040
Scharlach, R., Hallinan, B., & Shifman, L. (2023). Governing principles: Articulating values in social media platform policies. New Media & Society. https://doi.org/10.1177/14614448231156580
Scharlach, R. (2024). How to Spark Joy: Strategies of Depoliticization in Platform’s Corporate Social Initiatives. Social Media + Society, 10(3). https://doi.org/10.1177/20563051241277601
Siapera, E., & Viejo-Otero, P. (2021). Governing Hate: Facebook and Digital Racism. Television & New Media, 22(2), 112–130. https://doi.org/10.1177/1527476420982232
TikTok. (2019, August 16). New labels for disclosing AI-generated content. Newsroom | TikTok. https://newsroom.tiktok.com/en-us/new-labels-for-disclosing-ai-generated-content
Whitney, L. (2024). YouTube creators will now have to label certain AI-generated videos upon upload. ZDNET. https://www.zdnet.com/article/youtube-creators-will-now-have-to-label-certain-ai-generated-videos-upon-upload/