Artificial Intelligence Poses a Huge Threat. It's Time to Put a Seatbelt on It
The 2024 presidential election is rapidly approaching, and we aren't prepared for how new artificial intelligence models will affect our elections. Fake, AI-generated images of former President Trump resisting arrest provided a fresh example of the firehose of lies, deception, and false information that AI threatens to unleash on our electoral process. The potential for AI-generated deepfake videos, text-based chatbots, and voice-synthesis technologies to disrupt elections in 2024 is nearly limitless, especially as the code for powerful AI large language models leaks to the worst parts of the internet, like 4chan.
Even AI knows it: We recently asked OpenAI's text generator, ChatGPT, about the risk that AI technology could be used to carry out political sabotage. Here's how ChatGPT responded:
"Yes, political sabotage is a real risk of AI applications. As AI becomes more advanced, it could be used to manipulate public opinion, spread propaganda, or launch cyber-attacks on critical infrastructure. AI-powered social media bots can be used to amplify certain messages or opinions, creating the illusion of popular support or opposition to a particular issue. AI algorithms can also be used to create and spread fake news or disinformation, which can influence public opinion and sway elections…”
To be sure, campaign lies, mass data gathering, and biased algorithms are nothing new. What is new is the scale at which these tools can now be deployed to further polarize our society.
These risks have led some to call for an outright moratorium on AI development, but to us that seems extreme. Instead, our focus should be on making sure we control AI, and not the other way around. We need to focus especially on how to protect our political system.
One would think the developers of these technologies would be concerned about bringing a new Frankenstein monster into the world, and would take every possible step to protect these tools from abuse. It's not a heavy lift; just ask ChatGPT, as we did. We posed the question of whether OpenAI could label its output so that people would know content was generated by AI rather than by a real person. ChatGPT immediately responded:
"Yes! OpenAI and other AI companies could add digital watermarks and metadata to label content as generated by AI, and make the labels nearly-indelible through encryption…"