Christian Katzenbach on ChatGPT and rule-based control of AI
30. November 2025
On the occasion of ChatGPT’s third anniversary, ZeMKI member Christian Katzenbach explains how AI can be controlled with a rule-based system.
ChatGPT in everyday life: shifting usage habits
Since ChatGPT was launched three years ago, chatbots have rapidly become part of everyday media use. Many of us use these services daily for a variety of personal and professional matters. This has also led to a shift in usage habits: information is increasingly sought in chatbots rather than search engines; interactions are simulated in chat rather than initiated on social media.
The power of tech companies: who decides on content?
This shift also raises a new question: who actually decides on content and its relevance? How do AI models deal with controversial topics, and to what extent do they reproduce and spread misinformation? And how much are they influenced by the political views of their founders and developers? Unlike social media and search engines, AI models generate only one reality, only one answer, and it is difficult for users to determine how reliable that answer is. It is virtually impossible for researchers to systematically collect data on possible misinformation and bias. The existing lack of transparency in the field of digital media continues to grow, and with it, the power of tech companies.
Tech companies respond to political developments and public criticism
At the same time, we know from social media that platforms respond to political developments, public criticism, and regulatory pressure. So society and politics are not powerless. In the 2010s, Facebook, Twitter, and YouTube repeatedly changed their content rules and established content moderation structures in response to criticism of the spread of misinformation and hate speech.
The current backlash requires robust (EU) regulation
With Musk’s takeover of Twitter, Trump’s second term in office, and the ominous alliance between US tech companies and the US government, a powerful reversal is currently underway, turning the techlash into a backlash. From this perspective, content moderation is (once again) seen as censorship, and fact-checking is being discontinued. For companies, this is not only an ideological issue but above all an economic one: making content moderation truly systematic and responsible at scale and globally is very expensive. This means that the implementation of the new EU digital regulations comes at a time when they are needed more than ever – and more than was probably anticipated during the legislative process. We are now seeing a confrontation between EU and US players, in which regulations that are clearly oriented toward the common good must prove themselves against the backdrop of new geopolitical conflicts.
