Opinion: AI doesn’t have all the answers — especially this election season

With primaries underway and voters returning in the fall for a high-stakes presidential election, many people will likely be using artificial intelligence platforms, knowingly or unknowingly, to answer questions about where, when and how to vote. In a recent study, we found that misleading information about elections abounds on these AI platforms. It's up to tech companies to rein in these errors, but we also need government regulation to hold them accountable.

Voters may use chatbots like ChatGPT, search engines that incorporate AI, or the vast array of new AI-based apps and services such as Microsoft Copilot, which is integrated into office software such as Word and Excel and was found last year to be spewing election lies.

In January, we gathered about 50 experts — local and state elections officials, researchers, journalists, civil society advocates and tech industry veterans — to test five of the leading closed and open AI models’ responses to common election queries. Among the election officials were two from Los Angeles County, who helped evaluate L.A.-specific responses.

We tested the AI models by connecting to the back-end interfaces, or APIs, that the companies make available to developers. These interfaces don't always give the same answers as the chatbot web interfaces, but they are the underlying infrastructure on which the chatbots and other AI services are built.
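
For readers curious what such a back-end query looks like in practice, here is a minimal sketch using OpenAI's Python client as one example. The model name, question and settings are illustrative assumptions, not our study's exact setup.

```python
# Minimal sketch of querying a model's developer API with an election question.
# The model name, prompt and settings are illustrative assumptions, not the
# exact configuration used in the study described above.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model ID would work here
    messages=[
        {"role": "user", "content": "Where do I vote if I live in ZIP Code 19121?"}
    ],
    temperature=0,  # reduce randomness so answers are easier to compare
)

print(response.choices[0].message.content)
```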

The results were dismal: Half of the AI models’ responses to questions voters might ask were rated as inaccurate by our experts.

The models made all sorts of errors and made stuff up. Meta's Llama 2 declared that voters in California could vote by text message (untrue) and even dreamed up a fictional service called "Vote by Text," adding a wealth of credible-sounding detail.

A Meta spokesperson said that “Llama 2 is a model for developers,” and isn’t an outlet the public would use to ask election-related questions. Yet Llama 2 is used by easily accessible web-based chatbots such as Perplexity Labs and Poe.

Mixtral, a French AI model, correctly stated that voting by text is not allowed. But when our tester persisted, asking how to vote by text in California, it responded with an enthusiastic and bizarre "¡Hablo español!" ("I speak Spanish!"). Mixtral's maker did not respond to requests for comment.

Meanwhile, Google said in December it would prevent its AI model, Gemini, from responding to some election-related queries. We found Gemini to be quite chatty, producing lengthy, definitive-sounding and often inaccurate answers, including links to nonexistent websites and references to imaginary polling places.

Asked where to vote in ZIP Code 19121, a majority Black neighborhood in North Philadelphia, Gemini insisted that no such voting precinct exists, though of course it does. Such an answer raises concerns about voter suppression. A Google representative told us the company is regularly making technical improvements.

In January, OpenAI also pledged not to misrepresent voting processes and to direct ChatGPT users to CanIVote.org, a legitimate source of voting information run by the National Assn. of Secretaries of State. In our testing, however, ChatGPT never once pointed to CanIVote.org, and it was inaccurate about 19% of the time, such as when it asserted that Texas voters could wear a MAGA hat at the polls (not true). An OpenAI spokesperson said in response that the company is committed to elevating accurate voting information.

According to our expert testers, there was only one query that all the AI models got right: All of them accurately stated that the 2020 election was not stolen, probably because the companies have set up content filters to ensure that their software doesn't repeat conspiracy theories.
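
How might such a filter work? As a toy sketch only (real guardrails use trained classifiers and refusal-tuned models; the phrase list and function below are our illustrative assumptions, not any company's actual system), a crude version might scan outputs for known false claims:

```python
# Toy sketch of a content filter. Real systems rely on trained classifiers
# and model-level refusal tuning, not simple phrase lists; everything here
# is illustrative only.
BLOCKED_PHRASES = [
    "the 2020 election was stolen",
    "vote by text",
]

def violates_policy(model_output: str) -> bool:
    """Flag outputs that repeat a known false claim."""
    lowered = model_output.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

if violates_policy("Good news: you can vote by text in California!"):
    print("Blocked: response repeats a known false claim.")
```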

Many states are trying to address the problem by passing laws to criminalize the dissemination of disinformation or use of deepfakes in election contexts. The Federal Communications Commission also recently banned AI-generated robocalls. But those laws are hard to enforce, because it’s difficult to identify AI-generated content and even more difficult to track down who made it. And these bans would be aimed at intentional deception, not the routine inaccuracies we discovered.

The European Union recently passed the AI Act, which requires companies to label AI-generated content and to develop tools for detecting synthetic media. But it doesn't appear to require accuracy in election information.

Federal and state regulations should require companies to ensure their products provide accurate information. Our study suggests that regulators and lawmakers should also scrutinize whether AI platforms are fulfilling their intended uses in critical areas like voter information.

From tech companies, we need more than just pledges to keep chatbot hallucinations away from our elections. Companies should be more transparent, publicly disclosing known vulnerabilities in their products and sharing the results of regular testing that shows how they are addressing them.

Until then, our limited review suggests that voters should probably steer clear of AI models for voting information. Voters should instead turn to local and state elections offices for reliable information about how and where they can cast their ballots. Elections officials should follow the model of Michigan Secretary of State Jocelyn Benson, who, ahead of that state's Democratic primary election, warned that "misinformation and the ability for voters to be confused or lied to or fooled" was the paramount threat this year.

With hundreds of AI companies sprouting up, let’s make them compete on the accuracy of their products, rather than just on hype. Our democracy depends on it.

Alondra Nelson is a professor at the Institute for Advanced Study and a distinguished senior fellow at the Center for American Progress. She served as deputy assistant to the president and acting director of the White House Office of Science and Technology Policy.

Julia Angwin is an award-winning investigative journalist, bestselling author and founder of Proof News, a new nonprofit journalism studio.
