Political News Is in Its ‘Double-Checking Era’
Aug. 11
“Just get off social media, and get your news from real news sources,” said Jonathan Mehta Stein of California Common Cause at a July 12 Ethnic Media Services briefing on A.I.-generated disinformation. “The answer to fake news is real news.” (Screenshot captured by Danielle Parenteau-Decker / The CC Pulse)
By Danielle Parenteau-Decker
Artificial intelligence can create increasingly realistic pictures, videos and audio that are wrong or dangerously misleading.
Ethnic Media Services on July 12 hosted a trio of experts to discuss the impact this disinformation could have on voters.
“We’re now entering the first A.I. election in which A.I. deepfakes and A.I. disinformation has the power to inundate our political discourse and create images, audio and video that voters don’t know if they can trust,” said Jonathan Mehta Stein, executive director of California Common Cause.
So it’s up to people to be skeptical and verify whether what they’re seeing or hearing is actually real.
>>>From Our Archives: Visual Disinformation Can Be Especially Persuasive, Expert Warns<<<
“We are in a double-checking era of political news,” he said. “If you see an image that is too good to be true, if you see a video that helps one political party or one political candidate too much, or something that just doesn’t pass the smell test, I think you have to go offline or get out of the social media environment and Google it, see if it’s being reported in other places.”
However, panel moderator and EMS associate editor Pilar Marrero pointed out that even Google’s top results are now A.I.-generated, making it all the more important to scroll down and look carefully for legitimate results.
Mehta Stein said many people could avoid the issue by changing how they get their news in the first place.
“Just get off social media, and get your news from real news sources,” he said. “The answer to fake news is real news.”
>>>From Our Archives: Social Media Spreads Fake News, but Ethnic Media Can Fight It<<<
The panelists said disinformation is hard to fight, in part, because there is so much of it and too few resources to counter it.
“One of the very goals of disinformation campaigns is to — what Steve Bannon referred to as ‘flooding the zone,’ which is creating so much untrustworthy content that citizens and voters and communities don’t know what to trust at all,” said Brandon Silverman, co-founder and former CEO of CrowdTangle.
CrowdTangle is a tool that lets users track misinformation on Facebook and Instagram. But not for much longer. Meta, which owns the tool and both platforms, announced in March that CrowdTangle would no longer be available after Aug. 14.
>>>Read: As Tech Turns to A.I. to Fight Hate, Experts Warn Human Input Is Critical<<<
And that may be part of a larger pattern.
“As the problem is peaking, many social media platforms are walking away from their responsibility to address it,” Mehta Stein said.
But when platforms do try to stop disinformation, Silverman said it’s hard because a lot of it doesn’t break any rules and “falls into a gray area of misleading but not technically untrue.”
>>>Read: Disinformation Spreads Like a Disease, Expert Says<<<
“It is the difference between saying, ‘The moon is made of cheese’ and saying, ‘Some people are saying the moon is made of cheese,’ ” he explained. “And when you get into the misleading gray area, it gets very hard for platforms to enforce with the degree of clarity and speed and action they can on the directly false ones.”
Mehta Stein said disinformation is a “particular threat to voters of color” as well as immigrants and low-income people.
“For centuries, people have tried to deceive our voters and disenfranchise them and make it harder for them to exercise their right to vote,” he said. “Now, today, with A.I., there are new and crafty ways of doing this.”
>>>From Our Archives: Attacks on Voting Rights Evoke Jim Crow Era<<<
Jingxia Niu, manager of the first Chinese-language fact-checking site in the U.S., said a lot of the disinformation she sees has been “directly translated from English social media.”
The same isn’t true once that disinformation is debunked.
Sometimes you can find out in English that a claim has been disproven, “but the translated ones are staying unchecked,” she said. “No one is doing the garbage cleaning work when it comes to in-language (material).”
Niu said she’s asked why she and her colleagues aren’t doing that work themselves.
“We don’t have the capacity,” she said, adding that her site simply doesn’t have enough staff, especially to deal with A.I.-generated images.
One such image, she said, widely shared by Chinese right-wing social media influencers, was a fake photo of Donald Trump surrounded by supposed Black supporters.
>>>From Our Archives: Vaccine Disinformation Preys on Black Community’s Well-Earned Distrust<<<
Two other examples Mehta Stein discussed were a deepfake video that appeared to show President Biden saying the U.S. was reactivating the military draft and sending young men and women to fight, and fake photos of Trump with Dr. Anthony Fauci, which were shared by Florida Gov. Ron DeSantis’ presidential campaign.
He said there are signs that a visual is fraudulent. In a deepfake, for example, the words being “spoken” — he made air quotes as he said this — may not match the movements of the speaker’s mouth.
>>>Read: ‘Amazing and Scary All at Once’: How Teens Feel About A.I.<<<
A.I.-generated images can stand out because of their imperfections (one speaker mentioned people depicted with six fingers) or because they’re too perfect.
“People look sort of cartoony. Everyone has a perfect sheen. Everyone’s skin is perfect. No one has a pore. Every hair is in place,” Mehta Stein said.
But he said these things can be hard to spot if you’re scrolling fast, viewing on a small screen, not looking closely or don’t know what to look for. And “the technology is getting better and more realistic.”
>>>From Our Archives: Social Media’s Influence on the News: Don’t Believe Everything You Read on the Internet<<<
That is why, again, people need to “more finely tune their B.S.-meters,” Mehta Stein said.
“They need to be more skeptical. They need to be double-checking. They need to be fact-checking. They need to be making sure that something is real before they share, repost or believe.”