
How improperly using AI could deter you from voting, or hurt your health

A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Thursday, Jan. 5, 2023. (AP)

By Loreben Tuquero, July 17, 2023

Generative AI has exploded onto the scene, raising questions about its role in the creation and spread of misinformation.

Generative AI is a broad term that describes when computers create new content — such as text, photos, videos, music, code, audio and art — by identifying patterns in existing data. The most popular platforms include ChatGPT, Dall-E and Midjourney.

We’ve heard and read about the resulting fake college essays, misleading photos of prominent people and even supposed proof of events that never happened. In May, users circulated a photo they said showed an explosion near the Pentagon, claiming the United States was under attack. Though the story was a hoax that relied on an AI-generated photo, the fallout was real. The photo was shared by social media accounts with large followings, and the stock market briefly dipped.

There are other ways generative AI can cause harm when misused. Some, such as AI-generated political ads, have made headlines; others, such as using AI to self-diagnose medical issues, have received less attention.

In any form, its misuse can erode trust in institutions and civic processes like voting, experts said.

"I think that uncalibrated trust is an issue," said Chenhao Tan, University of Chicago assistant professor of computer science. "People can over rely on AI without proper understanding of what they can or can’t do with AI, or without knowing how to check their results, or giving up their human agency." 

Here are three scenarios in which AI can mislead or cause harm.

Using AI to self-diagnose medical issues

Using online tools to assess symptoms isn’t a new phenomenon, and the practice accelerated during the COVID-19 pandemic, when people were encouraged to self-assess and self-triage.

But with chatbots such as ChatGPT, people can ask more targeted questions, said Jason Fries, a Stanford University research scientist. One downside is that they might not know how to interpret the results they receive.

"A lot of people are not super AI-savvy yet, and they haven't really been primed to think skeptically about what a model is spitting out," said Fries. "So if you sit down in front of ChatGPT and ask questions, there are people that are really surprised that it can just invent information." Fries called it a "real danger mode" to not understand that AI can generate untruthful information. 

Medical care also involves more than communicating information or diagnostic labels, said Maha Farhat, a Harvard Medical School professor of biomedical informatics. The information also needs to be contextualized, she said.

In some cases, though, the interactive nature of AI may help patients develop a better understanding of a diagnosis, Farhat said. 

A research assistant and two professors from Harvard tested ChatGPT on 45 clinical vignettes of varying severity that they had previously tested with online symptom checkers. ChatGPT "listed the correct diagnosis within the top three options in 39 of the 45 vignettes," or 87% of the time, compared with 51% for the symptom checkers.

But the researchers noted several caveats: The vignettes were the kind typically used to test medical students, "which may not reflect how the average person might describe their symptoms in the real world," and "ChatGPT’s results are sensitive to how information is presented and what questions are being asked." They concluded that "more rigorous testing is needed."

Farhat agreed, saying ChatGPT’s reliability has not been well measured against the "gold standard" of speaking with a medical professional. Patients can look for information and learn about their symptoms through tools such as ChatGPT, she said, but they should not self-medicate or rely on a diagnosis without medical consultation.

"No medical decisions should be made based on (generative AI) in its current state without additional evidence," she said. 

Farhat said using generative AI for self-diagnosis could lead to misdiagnosis and delays in seeking care, which could worsen health problems. 

2024 elections: AI fakes can make candidates look ‘nefarious’

AI is already changing the campaign landscape in the lead-up to the 2024 elections. Politicians such as Florida Gov. Ron DeSantis, a GOP presidential candidate, and groups such as the Republican National Committee have released AI-generated images and videos.

In June, DeSantis’ campaign released a video that included three images of former President Donald Trump embracing Dr. Anthony Fauci; the images appeared genuine but had been generated by artificial intelligence.

Deepfakes — machine-generated images or videos that change faces, bodies or voices, making people appear to do and say things that they never did or said — already circulate regularly on social media, but the recent rollout of more advanced generative AI tools means it’s easier for people to create them. 

Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution, a Washington, D.C., think tank, said he expects more AI-generated videos and audio that make candidates look "nefarious."

The danger, he said, is that "voters may take such claims at face value and make their voting decision based on that information."

West added, "In a close race, anything that moves a few thousand votes could be decisive." 

How should voters prepare? Rely upon multiple media sources, West advises. And "if something sounds beyond the pale, voters should examine the source and see if it is a credible source of information." 

Eroding ‘trust in the information environment’

Mekela Panditharatne, counsel for the democracy program at the Brennan Center for Justice at New York University School of Law, said generative AI could increase the scale of content aimed at preventing or deterring people from voting. AI makes it possible to automate the creation of convincing, false information about how, when and where to vote, she said.

In 2016, for example, Russia conducted an influence campaign to interfere with the U.S. presidential election. Today, using AI, a similar effort could be executed with fewer resources.

Panditharatne said generative AI platforms draw from existing online data, which contains false claims about the 2020 election’s integrity, mail voting, drop boxes and widespread voter fraud. She said there are concerns that generative AI tools could be exploited to amplify mis- and disinformation online that "seeks to undercut faith in election processes."

"We’re still seeing exactly how AI may impact the elections space, but the presence and the potential for the proliferation of AI-generated content could possibly decrease trust in the information environment overall and make it harder for voters to distinguish between what’s true and false," she said.

Russia’s propaganda model relies on a "fog of confusion" that makes it hard to tell truth from falsehood, according to an article Panditharatne co-wrote in June. That fog could make voters lose trust in accurate, authoritative sources of election information, the article said.

Experts say correcting misinformation isn’t easy once people have seen it.

"People (don’t) work like a computer. It's not like, flip a switch, and showing you that the situation is false, then that information is actually false," said Tan, of the University of Chicago. "The exposure to the initial misinformation is hard to overcome once it happens."

RELATED: What is generative AI and why is it suddenly everywhere? Here’s how it works

Our Sources

PolitiFact, What is generative AI and why is it suddenly everywhere? Here’s how it works, June 19, 2023

PolitiFact, Fake photo used in claim that the Pentagon was attacked. Pants on Fire!, May 25, 2023

Insider, An apparently AI-generated hoax of an explosion at the Pentagon went viral online — and markets briefly dipped, May 22, 2023

Email interview, Chenhao Tan, assistant professor of computer science at the University of Chicago, June 29, 2023

Email interview, Harith Alani, Director of The Open University Knowledge Media Institute, July 5, 2023

The Conversation, The rise of ‘Dr. Google’: The risks of self-diagnosis and searching symptoms online, Aug. 15, 2022

Scientific American, AI chatbots can diagnose medical conditions at home. How good are they?, March 31, 2023

STAT News, ChatGPT-assisted diagnosis: Is the future suddenly here?, Feb. 13, 2023

Email interview, Jason Fries, research scientist at Stanford University, July 11, 2023

Email interview, Maha Farhat, Gilbert S. Omenn Associate Professor of Biomedical Informatics, Harvard Medical School, July 13, 2023

Email interview, Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution, July 11, 2023

Email interview, Mekela Panditharatne, counsel for the Brennan Center’s democracy program, July 13, 2023

PolitiFact, How to detect deepfake videos like a fact-checker, April 19, 2023

Brookings Institution, How AI will transform the 2024 elections, May 3, 2023

The New York Times, In an anti-Biden ad, Republicans use A.I. to depict a dystopian future., April 25, 2023

AFP Fact Check, Ron DeSantis ad uses AI-generated photos of Trump, Fauci, June 7, 2023

The New York Times, A.I.’s use in elections sets off a scramble for guardrails, June 25, 2023

PolitiFact, Four things to know about Russia's 2016 misinformation campaign, April 4, 2017

PolitiFact, 2017 Lie of the Year: Russian election interference is a 'made-up story', Dec. 12, 2017

Brennan Center for Justice, How AI puts elections at risk — and the needed safeguards, June 13, 2023

RAND Corporation, The Russian "firehose of falsehood" propaganda model: why it might work and options to counter it, 2016

PolitiFact, Claims that the 2020 election was stolen are still false, May 4, 2022

PolitiFact, Trump’s cascade of falsehoods about voting by mail, Nov. 1, 2020

PolitiFact, Ballot drop boxes were popular in 2020. Then they became a GOP target, May 19, 2021
