AI and democracy: Maybe “not true” is bad too


You are reading the AI newsletter “Naturally Intelligent” from September 5, 2024. To receive the newsletter every second Thursday by email, sign up here.

Is what I see in the picture real? Or a fake?

This question is not new. People have been asking it since photography was invented. Since Photoshop was invented. And even more so since artificial intelligence made it possible to create photo-like images in a matter of seconds.

In recent weeks, it has become clear that the concern that AI is increasingly blurring the lines between reality and fakery is entirely justified. Deepfakes, generated with the AI of the Freiburg-based start-up Black Forest Labs, flooded the internet in mid-August: Joe Biden in diapers, Elon Musk in a Nazi uniform, a pregnant Kamala Harris.

Then Google launched the Pixel 9, a smartphone whose AI features let anyone dramatically alter their own photos with just a few clicks: place a pile of white powder next to a person, make any building explode.

What's impressive is that the lighting and proportions in the AI-generated images are usually correct, and the images are sharp. It has also become rarer for a sixth finger to sprout from someone's hand. Distinguishing fake images from reality could become increasingly difficult.

Experts have long warned that generative AI will make it easier to create mass disinformation, and that it could be used to influence voters and discredit politicians and private individuals alike.

US presidential candidate Donald Trump recently shared images intended to suggest that singer Taylor Swift and her fans support him. It is not difficult to imagine what it means for a democracy when even political actors stoop to spreading such images.

But the matter is not quite that simple.

Individuals, groups and states are trying to influence election campaigns with AI – examples from Indonesia to Slovakia and now the USA show this. But studies suggest that these efforts have so far not been particularly successful.

In an investigation of more than a hundred national elections held this year and last, researchers at the Alan Turing Institute in the UK found signs of AI interference in only 19 of them. Moreover, the researchers say, there were no clear deviations between the actual election results and the results expected on the basis of polling data. Their conclusion: “The impact of AI on election results is currently limited.”

But what happens if almost everything we see on the Internet could, even theoretically, be fake?

Current advances in AI could mark a turning point that fundamentally changes what and who we trust.

Because in a world where everything can be fake – every image, every video, every voice – it also becomes easier to deny reality. The mere possibility that AI content could be in circulation leads people to dismiss real images, videos and audio files as inauthentic. Anyone will be able to say, “This was created by AI, this is fake” – and thus more easily evade responsibility. Scientists call this phenomenon the “liar's dividend”. The more normal deepfakes and other AI images become, the more uncertain a society becomes about its information and facts. Everything loses its authenticity.

An example from mid-August shows that this is not just theory. At a campaign rally at the airport in Detroit, thousands of cheering people welcomed US presidential candidate Kamala Harris. Participants and photographers captured the crowd from many angles.

That didn't stop her rival Donald Trump from claiming that the crowd seen in a photo of the rally in front of Harris' plane had been created using AI. “There was no one at the plane,” Trump wrote on his platform Truth Social; Harris, he claimed, had generated the image with AI.

Individuals and states with malicious intent will use AI for their own ends, so it is important to be aware of the direct threats the technology poses to democracy and elections.

But how we talk about the technology matters, too. Blind fear of AI can be dangerous: it feeds exactly the doubt that the liar's dividend exploits. A democracy without facts, without a common reality that everyone believes in, is difficult to imagine.

Links for further reading

  • To what extent could AI influence the US elections? Researchers at the US Brookings Institution weigh the arguments for and against: Misunderstood Mechanisms: How AI, TikTok and the Liar's Dividend Could Impact the 2024 Election (Brookings)
  • Atlantic author Matteo Wong examines the influence that incorrect answers from chatbots can have on elections and on our perception: Chatbots are primed to distort reality (The Atlantic)
  • Generative AI is not like Photoshop – it's worse, argues journalist Jess Weatherbed: Hi, you're here because you said AI image editing is just like Photoshop (The Verge)

About AI

  • Columnist Kevin Roose tries to change what chatbots think about him – and in doing so discovers a new industry: How do you change a chatbot's mind? (The New York Times)
  • AI could change internet searches. Companies like OpenAI, Perplexity and Google itself are working on this. This puts media companies in a dilemma: Google's AI search presents websites with a difficult choice: share data or die (Bloomberg)
  • How exactly large language models arrive at their answers is unclear even to their developers. But there is progress: Researchers discover how large language models work (The Economist)

Playing around with AI

  • At Fal.ai you can test the AI image generator Flux from Black Forest Labs, which is considered the best in the world.
  • Have conversations with Google – now possible with Gemini Live on an Android device and Google's AI subscription.
  • Few have tried OpenAI's new AI search engine, which could pose a threat to Google. The Washington Post spoke to testers and offers initial insights: Few have tried OpenAI's Google killer. Here's what they think. (The Washington Post)