Back to the Roots: Interpretable Machine Learning in Communication Research
- PhD project
This doctoral project investigates how interpretable machine learning can advance communication science — from the transparent identification of protest-related news and the recognition of AI-generated images to the analysis of political TikTok communication in the 2024 European election campaign.
Machine learning is increasingly used in communication studies, especially for tasks that previously could only be performed by humans. This often involves methods that, like large language models, are referred to as “black boxes” because their complex structure makes their decisions opaque. Such methods can be used to great benefit in many areas, but they also carry disadvantages, from high resource consumption and hard-to-detect biases to missed opportunities to gain insight into the research subject by analyzing the model structure itself.
My dissertation, with the working title “Methods of Interpretable Machine Learning in the Empirical Research Process”, aims to demonstrate both this problem and several alternatives through three examples, all highly relevant to political communication and each covering a different modality. In the first example, various interpretable models perform binary text classification of local news articles to identify protest-related coverage. This is relevant as a transparent way to automate an otherwise very time-consuming and costly activity, and it offers the opportunity to examine the patterns a model constructs from the given binary labels and to ask to what extent theoretical concepts from protest (news) research are reflected in those patterns.
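As a rough illustration of this interpretable-by-design approach, the sketch below pairs TF-IDF features with a sparse logistic regression whose weights can be read off directly. The example articles, labels, and parameter choices are placeholders for illustration, not the study's actual data or setup.

```python
# Minimal sketch: transparent binary classification of news articles
# via TF-IDF features and an L1-regularized (sparse) logistic regression.
# All data and settings here are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Hundreds marched downtown demanding higher wages.",
    "The city council approved the new zoning plan.",
    "Activists blocked the highway in a climate protest.",
    "Local bakery celebrates its 50th anniversary.",
]
labels = [1, 0, 1, 0]  # 1 = protest-related, 0 = other

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model = make_pipeline(vectorizer, clf)
model.fit(articles, labels)

# Because the model is linear, every feature weight is directly readable:
# positive weights push toward "protest-related", negative ones away.
feature_names = vectorizer.get_feature_names_out()
weights = clf.coef_[0]
top = sorted(zip(feature_names, weights), key=lambda t: abs(t[1]), reverse=True)
for word, weight in top[:10]:
    print(f"{word:15s} {weight:+.3f}")
```

Inspecting the weighted vocabulary in this way is what allows the comparison with theoretical concepts of protest coverage: the model's learned cues are visible rather than hidden in an opaque representation.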
The second part is also a binary classification task, but for the recognition of AI-generated images (AIGI). This problem is becoming increasingly important, for example for the detection of deepfakes, but also for building a tool that can be used generally in research contexts requiring reliable automated AIGI recognition. The task poses its own challenges: machine-based image classification decisions are already difficult to interpret, and automated AIGI detection is hard even with black-box methods.
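To make the idea of a transparent image classifier concrete, here is a minimal, hypothetical sketch: hand-crafted frequency-band features (frequency-domain artifacts are one commonly discussed cue for generated images) feeding a linear model, so each weight maps to a human-readable property. The feature set and the random placeholder data are assumptions for illustration, not the detector developed in the project.

```python
# Illustrative sketch only: a transparent AIGI detector built from
# hand-crafted frequency features and a linear model. Data and features
# are placeholders, not the project's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_features(img: np.ndarray) -> np.ndarray:
    """Summarize an image's 2D power spectrum in a few radial bands.

    Each band's mean energy is a feature a human can reason about,
    unlike the internal activations of a deep network.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    bands = np.linspace(0, r.max(), 5)
    return np.array([
        spectrum[(r >= lo) & (r < hi)].mean()
        for lo, hi in zip(bands[:-1], bands[1:])
    ])

# Placeholder data: random "images" standing in for real/generated samples.
rng = np.random.default_rng(0)
X = np.array([spectral_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 1 = AI-generated, 0 = authentic

clf = LogisticRegression(max_iter=1000).fit(np.log(X), y)
print(dict(zip(["low", "mid-low", "mid-high", "high"], clf.coef_[0].round(3))))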
In the third project, interpretable ML methods are used to analyze TikTok videos posted by candidates from right-wing populist and green parties in Germany, France, and Sweden during the 2024 election campaign for the European Parliament.
