Between Artificial Intelligence and Human Encounter
A PhD Workshop on AI and Democracy
By Niklas Bub
After two years of pandemic, the interdisciplinary PhD workshop ›AI and Democracy‹ was supposed to be different: far away from the pixelated faces, the ›trying-to-reconnect‹ remarks, and the frustration with people who, after two years of online meetings, still had not managed to get themselves a decent microphone. While ›growing tired of online meetings‹ would have accurately described many people’s state of mind in 2021, the predominant mood of 2022 had shifted towards gloomy resignation.
Many missed the exchange of in-person conversation, the forging of real connections, and the inspiration that comes from chance encounters. Yet this free flow of ideas is not only a foundation of democratic discourse but also immensely important when it comes to discussing the future of artificial intelligence, says the workshop’s main organizer, Larissa Höfling:
»There is currently too little talk about how dominant the big tech companies are in digitizing the world, making it AI-readable. University researchers are struggling to keep up with the pace of algorithm development in the industry, with new AI applications hitting the market virtually every minute. We should, however, think much more about how we want to use these technologies.«
To have a meaningful conversation about the »how«, it is sometimes necessary to leave the fast-paced digital world behind. That is why, on April 2nd, 23 AI researchers from a range of disciplines made their way to Tübingen, a tranquil town on the edge of the Swabian Alb, to discuss the impact of AI on democracy.
Dual-use Technology
For some of the researchers, it was to be the first in-person conference of their scientific careers. Neele Falk, for instance, has attended only one other in-person conference since she started her PhD on Natural Language Processing (NLP) more than two years ago, »and that conference was by far not as exciting as this one«, she shares at one of the evening social events: »At that conference, I had only five minutes for my presentation and no real discussion ensued, just one or two boring questions from the audience. Here, on the other hand, we have 30 minutes for our presentations, and the discussions afterwards are really lively.«
Neele’s presentation is about ›using AI to investigate argumentation in deliberative discussions‹, which, in plain English, means figuring out: what is a good discourse, and what constitutes a good argument? Two important questions for a democratic discourse that may soon be increasingly moderated by artificial intelligence. Together with Christopher Klamm, a fellow NLP researcher, she explains to the workshop participants how artificial intelligence can process and respond to text or speech in ways that approximate human language use.
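To give a sense of what such work looks like technically, here is a minimal sketch of an argument classifier: a hypothetical bag-of-words baseline, not Neele’s or Christopher’s actual method, with invented training comments and labels.

```python
# A minimal, hypothetical sketch of argument-quality classification;
# NOT the researchers' actual pipeline. All examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled sample: 1 = reasoned argument, 0 = no argument.
comments = [
    "Expanding bus routes cuts emissions, as the city's 2019 audit shows.",
    "This plan is stupid and so is everyone who supports it.",
    "A congestion charge worked in Stockholm; traffic fell measurably.",
    "Nobody asked for your opinion.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a linear classifier: a common, simple baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Estimate the probability that an unseen comment contains a reasoned argument.
new_comment = ["Cycling lanes reduce accidents, according to federal statistics."]
print(model.predict_proba(new_comment)[0][1])
```

Real systems swap the bag-of-words baseline for large language models trained on thousands of annotated contributions, but the basic recipe, labelled examples in, argument scores out, stays the same.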
Both Neele and Christopher come from a technical background, where the Silicon Valley mentality of ›move fast and break things‹ still prevails, although there is a growing focus on responsible AI, Christopher says, adding that »dual-use cases can easily arise«. By dual-use, Christopher means applications that are later put to a harmful purpose other than their original one, for instance using AI to create misinformation or propaganda.
Harmless AI applications can therefore quickly become harmful in the wrong hands, says Žilvinas Švedkauskas, one of the political scientists at the workshop: »When German engineers are sent to Middle Eastern countries like Egypt to develop AI-enabled applications, which could potentially be used for censorship or surveillance of civil society actors, government-affiliated enterprises usually make sure that engineers have a good time and don’t think about the potential misuse of their algorithms too much. That is why it is of particular importance that we train these engineers beforehand to be aware of the context in which they are deploying these AI-enabled applications; so that they do not release them naively, without appropriate privacy safeguards.«
The Balancing Act
Like Žilvinas, more than half of the workshop participants have a non-technical background. This is the balancing act the workshop organizers must manage: on the one hand, enabling a truly interdisciplinary exchange; on the other, not losing the common thread. After all, the topic of AI is so broad that scientists at an interdisciplinary workshop like this would have a hard time even agreeing on a definition. That makes it all the more important, Žilvinas believes, that such exchanges take place:
»What is still missing in the present-day scholarly debate is an interdisciplinary perspective on the usage of artificial intelligence. I think that social scientists should speak more to people from computer science departments, cognitive sciences, and vice versa. Events like this are very important for bringing great minds together and furthering the democratic benefits that artificial intelligence can bring to our societies.«
Michael Geers also feels the balancing act of interdisciplinary discourse: »As a psychologist who deals with misinformation on social media, many talks here are just completely outside my scope. But«, he notes with amazement, »as soon as we find just one commonality in our research, we can communicate«. In Michael’s case, that commonality is social media. After all, it is on these platforms that a large proportion of today’s AI applications are already running: they personalize ads, suggest ›relevant‹ YouTube videos, and show you tweets designed to increase your engagement. Engagement on social media, however, also increases when you see misinformation, because it usually provokes very emotional, and therefore strong, reactions, Michael explains.
A workshop participant who understands Michael’s concerns is Dr. Viktoria Spaiser. She researches how artificial intelligence can be used to analyze misinformation and propaganda from fossil fuel companies on social networks. In a second step, Viktoria explains, »we can then use the AI that we have trained to intervene when misinformation about the climate crisis is spread. Instead of receiving misinformation, citizens would then be informed about the impact of the climate crisis.«
Seen in this light, Viktoria’s and Michael’s projects could even complement each other in places. In fact, ideas for joint interdisciplinary projects come up frequently during the workshop. But it’s not just collaborations that emerge from the conversations. NLP researcher Christopher Klamm notes that »outside of my discipline, people just use different methods. And it has happened more than once that I’ve thought to myself: ›Why don’t I do it that way?‹«
»Fight Big Data with Data«
One of the people Christopher refers to is Dr. Paul C. Bauer. At the Mannheim Centre for European Social Research, Paul uses artificial intelligence to research trust, and in particular whether people interpret survey measures in the same way: »Concretely, we hand out surveys to people and afterwards ask them what they had in mind when answering.« A typical survey question reads as follows: ›Imagine meeting a total stranger for the first time. Please identify how much you would trust this person to repay a loan of one thousand dollars.‹
What sounds like a strangely contrived question at first glance reveals its importance for democracy on closer inspection. After all, trust, especially in strangers, is the lubricant of our democracy and, as the legal philosopher Ernst-Wolfgang Böckenförde put it in his famous 1964 dictum, »a prerequisite for the liberal-secular state, which it cannot produce itself«. Surprising, however, are the precise images that arise in the minds of some test subjects when they answer the survey question. One participant imagined the stranger asking him for the thousand-dollar loan as ›white, male, around 60, good-looking, widower‹. Well. But what does all this have to do with artificial intelligence?
»AI«, Paul explains, »comes into play when classifying these open-ended responses. Because once we have trained an algorithm well enough to classify these responses appropriately, we save ourselves an immense amount of working time for the rest of the project.« For Paul, artificial intelligence is thus first and foremost a tool that facilitates his research. At the same time, he is also aware of the wider uses of AI:
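The time-saving workflow Paul describes, hand-coding a small sample and letting a model code the rest, might look roughly like the sketch below. This is an assumed simplification, not his actual setup; the category names and answers are invented.

```python
# Assumed sketch of the described workflow: hand-code a few open-ended survey
# answers, train a classifier, then let it code the remaining thousands.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical hand-coded answers to the question »What did you have in mind?«.
hand_coded = [
    ("I pictured a specific older man from my neighbourhood", "specific_person"),
    ("Just some random stranger, no face in particular", "abstract_stranger"),
    ("I thought of my cousin, who once borrowed money from me", "specific_person"),
    ("An anonymous person on the street", "abstract_stranger"),
]
texts, codes = zip(*hand_coded)

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(texts, codes)

# The trained model now codes new answers automatically, saving manual labour.
print(classifier.predict(["I imagined a man of about 60, a well-dressed widower"]))
```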
»I think what we sometimes forget when talking about AI is the wide range of applications in the weapons industry; at least, there are not many social scientists who study this topic. Nonetheless, this interdisciplinary exchange already tackles a lot of important questions. Yesterday, for example, someone presented on the use of AI in surveillance software. For me, that was an important insight, especially because a lot of this technology is developed in Europe.«
The ethical and political implications of artificial intelligence are not unknown to Chonlawit Sirikupt, another workshop participant. Chonlawit used AI to analyze the Thai army’s information operations on Twitter in 2020: »The Thai army«, he explains, »used Twitter to frame the political opposition as unworthy and pro-regime leaders as praiseworthy.« In Chonlawit’s view, AI is a tool that can be used for good or ill. »Fight big data with data«, he quips during a break. For just like Chonlawit and Viktoria, anyone who understands programming can develop their own AI and use it to expose the propaganda of political actors or (big data) companies. Nevertheless, Chonlawit is also realistic about the use of artificial intelligence: »The more power you have, the more you can do with it. That’s why we need oversight and regulations to ensure that the technology is not abused.«
Impacts on Democracy
The PhD students at the workshop agree that the use of artificial intelligence depends heavily on the system in which it is used. Since we live in a capitalist system, AI is used on a large scale to maximize profits. In autocratic systems, AI plays an increasingly important role in monitoring the population. And in democracies? Well, this is where we have the greatest scope to use AI for good, says Dr. Viktoria Spaiser: »We have to work with actors that have the public good in mind. These are mainly democratic social movements, because they don’t work for profit and want to create positive social change.«
Viktoria’s idea sounds simple but shows foresight, because artificial intelligence will inevitably affect our democracy; the only question is: how? After a packed weekend of presentations, conversations, and discussions about the interactions of AI and democracy, Prof. Andreas Jungherr of the University of Bamberg takes a shot at an answer:
- AI impacts self-rule: AI is often used to analyze and moderate speech, especially in the digital sphere. This, however, can limit our ability to express our opinions freely, even if the moderating AI has only the ›best intentions‹, such as deleting harmful or offensive speech. Moreover, AI increases the power of experts and expert knowledge, which runs counter to the idea of democracy because citizens are left with less direct power.
- AI impacts equality: Data is conservative because there is only ever data about what has happened, not about what will happen. Because AI learns from this conservative data and predicts outcomes based on it, existing biases risk being perpetuated into the future, such as the underrepresentation of women in leadership positions (a toy code sketch after this list illustrates the mechanism).
- AI impacts elections: Since Cambridge Analytica, the term ›personalized political advertising‹ is no longer a foreign concept to most: political movements advertise with targeted clips on social media, ideally seen only by those on whom they have an effect. Another danger for elections, however, is that artificial intelligence could become ever better at predicting election winners, predictions that could turn into self-fulfilling prophecies because we trust them.
- AI impacts competitiveness with autocracies: After the Cold War, it was often said that (Western) democracies had an information advantage over (Eastern) autocracies. AI could shift this information advantage in favor of autocracies, as they can work much more smoothly with large technology companies and worry less about privacy.
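A deliberately crude toy model makes the equality point concrete. All data below is fabricated, and the single gender feature is an illustrative assumption, not anyone’s real system: a model fitted on skewed historical decisions simply projects that skew forward.

```python
# Toy illustration of bias perpetuation; all data is fabricated.
from sklearn.linear_model import LogisticRegression

# Historical promotion records: the single feature encodes gender
# (0 = male, 1 = female); women were rarely promoted in the past.
X = [[0], [0], [0], [0], [1], [1], [1], [1]]
y = [1, 1, 1, 0, 0, 0, 0, 1]  # past decisions: 1 = promoted

model = LogisticRegression().fit(X, y)

# Trained on yesterday's bias, the model predicts lower promotion
# probability for female candidates tomorrow.
print(model.predict_proba([[0]])[0][1])  # male candidate: higher probability
print(model.predict_proba([[1]])[0][1])  # female candidate: lower probability
```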
Against this background, Žilvinas Švedkauskas’ critical attitude towards AI is not surprising. In his view, »artificial intelligence is an object of technopolitical clash between democratic countries and autocratic countries vying for global dominance. Therefore«, he says, »the position of AI-enabled technologies is still to be determined by all of us: scholars, politicians, and activists.« In a democracy, however, these decisions should be made not only by experts but by the people. It is therefore crucial to educate society about the (dual) use of AI and the associated risks it poses to democratic integrity and human rights. But how exactly do you convey this knowledge to the public?
Raising Awareness
Workshop organizers Larissa Höfling and Ilja Mirsky asked themselves this question long before the workshop even began. Together with Johannes Freyer, the managing director of the event centre Westspitze in Tübingen, they decided to host a public panel discussion with the workshop’s keynote speakers, followed by a World Café, a discussion format in which interested citizens of Tübingen could debate the topic of AI with the workshop participants. Three questions framed the discussion:
- Realistic: Where do AI applications already influence our political decisions?
- Ethical: Where should we limit the influence of AI on political processes?
- Utopian: How can AI applications influence democracies in the long term?
What surprises Johannes is the lively participation in the World Café: »Two years ago, people would have thrown tomatoes at the discussants here.« According to him, the diffuse fear of artificial intelligence has grown even stronger since the Cyber Valley, Europe’s largest AI research consortium, was established in Tübingen. »That is why public discussions like this are so important«, says Johannes, »to ensure that this fear is used constructively; for example, to regulate AI.«
Political regulation can in fact be an immensely beneficial driver for the AI industry, says Prof. Ulrike von Luxburg, one of the panelists: »The new policy requirement that artificial intelligence must be able to explain why it makes its predictions the way it does has led to a whole new branch of research, ›explainable AI‹.«
Most World Café participants are students who, in one way or another, have something to do with AI themselves. But there are also a few older guests who are »positively surprised at how thoroughly young people deal with this topic«. »After the World Café«, one guest remarks, »I have a less gloomy attitude toward artificial intelligence. But I also find it problematic that almost exclusively people who work directly with AI engage with the topic. After all, the consequences affect us all.« Another guest would like to see the World Café format offered more regularly: »I like it a lot better than lengthy panel discussions. The World Café discussions have given me a better sense of where AI is already being used today, and that it has nothing to do with science-fiction-like robots.«
Human Encounter
Most World Café participants did not fundamentally change their opinion of AI after the event, but all of them spoke positively about their discussions with the workshop participants and would welcome further events of this kind. »The idea for the World Café and the PhD workshop was to put AI in a societal and political context«, says Larissa Höfling after the event, »but at the same time not to be the umpteenth workshop on ethics guidelines for AI. I was amazed that the interdisciplinary dialogue worked so well in the end.« What particularly surprised Larissa, however, was how quickly colleagues became friends: »It feels like you’ve been studying with these people since your first semester.«
An interesting tension unfolds over the weekend. On the one hand, artificial intelligence operates largely in the digital realm: in misinformation campaigns on Twitter, in government surveillance software, in automated text recognition on the internet. On the other hand, it becomes clear that the digital realm can never replace human encounter. »All this«, Larissa is certain, »would never have worked so well if we had done it online«, because the atmosphere in Tübingen stimulated the participants’ creativity as no online workshop ever could.
The psychiatrist Prof. Thomas Fuchs writes in his paper ›The Virtual Other‹: »Only the [embodied] other frees me from the cage of my imaginings and projections in which I can only ever encounter myself.« And it is this creative freedom that is essential when it comes to shaping the future of artificial intelligence, Larissa concludes:
»Artificial intelligence is certainly not a panacea. But there are many pressing problems it can help us with. To do that, however, it is crucial to focus public discourse on the problems that matter. Ultimately, we are the stakeholders who need to use AI for the right purposes.«