So, Who Is Casting Your Vote?

Election disinformation takes a big leap as AI is used to deceive worldwide

From Bangladesh to Slovakia, AI-generated deepfakes have been undermining elections around the globe. Experts say their reach and sophistication is a sign of things to come in consequential elections later this year. (March 15)

LONDON (AP) — Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake – but convincing – content aimed at fooling voters.

It marks a quantum leap from a few years ago, when creating phony photos, videos or audio clips required teams of people with time, technical skill and money. Now, using free and low-cost generative artificial intelligence services from companies like Google and OpenAI, anyone can create high-quality “deepfakes” with just a simple text prompt.

Experts warn AI deepfakes will likely play an even bigger role in coming elections.
Here’s how governments and organizations are responding to the threat.


A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year.

“You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” said Henry Ajder, a leading expert in generative AI based in Cambridge, England.

The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Ajder, who runs a consulting firm called Latent Space Advisory.

As the U.S. presidential race heats up, FBI Director Christopher Wray recently warned about the growing threat, saying generative AI makes it easy for “foreign adversaries to engage in malign influence.”

People are reflected in a window of a hotel at the Davos Promenade in Davos, Switzerland, Jan. 15, 2024. (AP Photo/Markus Schreiber, File)

With AI deepfakes, a candidate’s image can be smeared, or softened. Voters can be steered toward or away from candidates — or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that a surge of AI deepfakes could erode the public’s trust in what they see and hear.

Some recent examples of AI deepfakes include:

— A video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia.

— Audio clips of Slovakia’s liberal party leader discussing vote rigging and raising the price of beer.

— A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini.

 

The novelty and sophistication of the technology make it hard to track who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the deluge, nor are they moving fast enough to solve the problem.

As the technology improves, “definitive answers about a lot of the fake content are going to be hard to come by,” Ajder said.

ERODING TRUST

Some AI deepfakes aim to sow doubt about candidates’ allegiances.

In Moldova, an Eastern European country bordering Ukraine, pro-Western President Maia Sandu has been a frequent target. One AI deepfake that circulated shortly before local elections depicted her endorsing a Russian-friendly party and announcing plans to resign.

FILE - Moldova's President Maia Sandu, right, greets Ukraine's President Volodymyr Zelenskyy in Bulboaca, Moldova, June 1, 2023. She has been a frequent target of online disinformation created with artificial intelligence. (AP Photo/Vadim Ghirda, File)

Officials in Moldova believe the Russian government is behind the activity. With presidential elections this year, the deepfakes aim “to erode trust in our electoral process, candidates and institutions — but also to erode trust between people,” said Olga Rosca, an adviser to Sandu. The Russian government declined to comment for this story.

China has also been accused of weaponizing generative AI for political purposes.

In Taiwan, a self-ruled island that China claims as its own, an AI deepfake gained attention earlier this year by stirring concerns about U.S. interference in local politics.

The fake clip circulating on TikTok showed U.S. Rep. Rob Wittman, vice chairman of the U.S. House Armed Services Committee, promising stronger U.S. military support for Taiwan if the incumbent party’s candidates were elected in January.

FILE - Rep. Rob Wittman, R-Va., questions witnesses during a congressional hearing, on Capitol Hill, Tuesday, Feb. 28, 2023, in Washington. A fake clip circulating on TikTok showed Wittman, vice chairman of the U.S. House Armed Services Committee, promising stronger U.S. military support for Taiwan if the incumbent party's candidates were elected in January, 2024. (AP Photo/Alex Brandon, File)

Wittman blamed the Chinese Communist Party for trying to meddle in Taiwanese politics, saying it uses TikTok — a Chinese-owned company — to spread “propaganda.”

A spokesperson for the Chinese foreign ministry, Wang Wenbin, said his government doesn’t comment on fake videos and that it opposes interference in other countries’ internal affairs. The Taiwan election, he stressed, “is a local affair of China.”

BLURRING REALITY

Audio-only deepfakes are especially hard to verify because, unlike photos and videos, they lack telltale signs of manipulated content.

In Slovakia, another country overshadowed by Russian influence, audio clips resembling the voice of the liberal party chief were shared widely on social media just days before parliamentary elections. The clips purportedly captured him talking about hiking beer prices and rigging the vote.

It’s understandable that voters might fall for the deception, Ajder said, because humans are “much more used to judging with our eyes than with our ears.”

In the U.S., robocalls impersonating U.S. President Joe Biden urged voters in New Hampshire to abstain from voting in January’s primary election. The calls were later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes.

FILE - Paul Carpenter describes AI software during an interview in New Orleans, Friday, Feb. 23, 2024. Carpenter says he was hired in January to use AI software to imitate President Joe Biden's voice to convince New Hampshire Democrat voters not to vote in the state's presidential primary. (AP Photo/Matthew Hinton)

In poorer countries, where media literacy lags, even low-quality AI fakes can be effective.

Such was the case last year in Bangladesh, where opposition lawmaker Rumeen Farhana — a vocal critic of the ruling party — was falsely depicted wearing a bikini. The viral video sparked outrage in the conservative, majority-Muslim nation.

“They trust whatever they see on Facebook,” Farhana said.

Rumeen Farhana, a politician from the main opposition Bangladesh Nationalist Party (BNP) sits for a photograph during an interview at her residence in Dhaka, Bangladesh, Thursday, Feb. 15, 2024. Farhana, a vocal critic of the ruling party, was falsely depicted wearing a bikini in a video created using artificial intelligence. The viral video sparked outrage in the conservative, majority-Muslim nation. (AP Photo/Al-emrun Garjon)

Experts are particularly concerned about upcoming elections in India, the world’s largest democracy and where social media platforms are breeding grounds for disinformation.

A CHALLENGE TO DEMOCRACY

Some political campaigns are using generative AI to bolster their candidate’s image.

In Indonesia, the team that ran the presidential campaign of Prabowo Subianto deployed a simple mobile app to build a deeper connection with supporters across the vast island nation. The app enabled voters to upload photos and make AI-generated images of themselves with Subianto.

As the types of AI deepfakes multiply, authorities around the world are scrambling to come up with guardrails.

Noudhy Valdryno, the digital coordinator for the campaign team of Indonesian presidential frontrunner Prabowo Subianto, shows the interface of a web application that allows supporters to upload photos to make AI-generated images of them with Subianto, in Jakarta, Indonesia, Wednesday, Feb. 21, 2024. (AP Photo/Dita Alangkara)

The European Union already requires social media platforms to cut the risk of spreading disinformation or “election manipulation.” It will also mandate special labeling of AI deepfakes starting next year, too late for the EU’s parliamentary elections in June. The rest of the world, meanwhile, is much further behind.

The world’s biggest tech companies recently — and voluntarily — signed a pact to prevent AI tools from disrupting elections. For example, the company that owns Instagram and Facebook has said it will start labeling deepfakes that appear on its platforms.

But deepfakes are harder to rein in on apps like the Telegram chat service, which did not sign the voluntary pact and uses encrypted chats that can be difficult to monitor.

Some experts worry that efforts to rein in AI deepfakes could have unintended consequences.

An advertising banner with a slogan about AI is fixed at a building at the Davos Promenade, alongside the World Economic Forum in Davos, Switzerland, Jan. 18, 2024.  (AP Photo/Markus Schreiber, File)

Well-meaning governments or companies might trample on the sometimes “very thin” line between political commentary and an “illegitimate attempt to smear a candidate,” said Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington.

Major generative AI services have rules to limit political disinformation. But experts say it remains too easy to outwit the platforms’ restrictions or use alternative services that don’t have the same safeguards.

Even without bad intentions, the rising use of AI is problematic. Many popular AI-powered chatbots are still spitting out false and misleading information that threatens to disenfranchise voters.

And software isn’t the only threat. Candidates could try to deceive voters by claiming that real events portraying them in an unfavorable light were manufactured by AI.

“A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that’s really challenging for a flourishing democracy,” said Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia.

Swenson reported from New York. Associated Press writers Julhas Alam in Dhaka, Bangladesh, Krutika Pathi in New Delhi, Huizhong Wu in Bangkok, Edna Tarigan in Jakarta, Indonesia, Dake Kang in Beijing, and Stephen McGrath in Bucharest, Romania, contributed to this report.

from:    https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd

 

The Power of Astroturfing

How Astroturfing and Other Media Manipulation Compromise Your Ability to Get Truthful Information

Analysis by Dr. Joseph Mercola 

STORY AT-A-GLANCE

  • Ninety percent of news media are controlled by six corporations. As a result, the vast majority of what you read, see and hear is part of a carefully orchestrated narrative created and controlled by special interest groups
  • “Astroturf” is the effort on the part of large corporate special interests to surreptitiously sway public opinion by making it appear as though it’s a grassroots effort for or against a particular agenda
  • Wikipedia is astroturf’s dream come true. Many pages are controlled by anonymous Wikipedia editors on behalf of special interests who forbid and reverse edits that go against their agenda

Editor’s Note: This article is a reprint. It was originally published October 28, 2017.

Ninety percent of news media, be it television, radio, print or online, are controlled by six corporations. As a result, the vast majority of what you read, see and hear is part of a carefully orchestrated narrative created and controlled by special interest groups.

When you combine that with other astroturf and public manipulation schemes that hide the identity of these special interests, the end result is, to use investigative journalist Sharyl Attkisson’s term, a Truman-esque fictitious reality, where medical journals, doctors, media and presumably independent consumer groups all seem to be in agreement. The problem is it may all be false.

Attkisson is a five-time Emmy Award-winning anchor, producer and reporter whose television career spans more than three decades. In 2009, she blew the lid off the swine flu media hype, showing the hysteria was manufactured and completely unfounded. At the time, I interviewed her about these findings. I’ve included that fascinating interview below.

In 2014, she left CBS to pursue more independent venues of investigative journalism, and wrote “Stonewalled: My Fight for Truth Against the Forces of Obstruction, Intimidation, and Harassment in Obama’s Washington” — an exposé on what really goes on behind the media curtain.

Why Everyone Must Be Aware of Astroturfing

The featured video is a TEDx Talk Attkisson gave in 2015, in which she discusses the methods employed by special interest groups to manipulate and distort media messages. For example, astroturfing — false-front “grassroots movements” that are in fact funded by political parties or private industries — are now “more important to these interests than traditional lobbying of Congress,” she says. She explains the term “astroturf” thus:

“It’s a perversion of grassroots, as in fake grassroots. Astroturf is when political, corporate or other special interests disguise themselves and publish blogs, start Facebook and Twitter accounts, publish ads and letters to the editor, or simply post comments online, to try to fool you into thinking an independent or grassroots movement is speaking.

The whole point of astroturf is to try to [give] the impression there’s widespread support for or against an agenda when there’s not. Astroturf seeks to manipulate you into changing your opinion by making you feel as if you’re an outlier when you’re not …

Astroturfers seek to controversialize those who disagree with them. They attack news organizations that publish stories they don’t like, whistleblowers who tell the truth, politicians who dare to ask the tough questions and journalists who have the audacity to report on all of it.”

Wikipedia — Astroturf’s Dream Come True

If you’re like most, you probably rely on certain sources more than others when it comes to information. WebMD, for example, dominates for health information, Snopes for checking the latest rumors and Wikipedia for general facts, figures and details.

Attkisson has a great deal to say about Wikipedia, calling it “astroturf’s dream come true.” Wikipedia is advertised as a free encyclopedia, where information is added and edited by the public. Anyone can add to or edit any given Wikipedia page. Or so they say.

“The reality can’t be more different,” Attkisson says, explaining that many pages have been co-opted and are controlled by anonymous Wikipedia editors on behalf of special interests. “They forbid and reverse edits that go against their agenda,” she says. “They skew and delete information, in blatant violation of Wikipedia’s own established policies, with impunity.”

Even the smallest factual inaccuracies are impossible to correct on these agenda-driven pages. As just one example, in 2012, author Philip Roth tried to correct a factual error about the inspiration behind one of his book characters cited on a Wikipedia page. His correction was repeatedly reversed and, ultimately, he was told he was not considered a credible source!

Worse, a study1 comparing medical conditions described on Wikipedia with published research found that Wikipedia contradicted the medical literature an astounding 90% of the time. So, be aware — Wikipedia is NOT the place for accurate and reliable medical information.

Who’s Who and What’s What?

The extent to which information is manipulated is enormous. Let’s say you hear about a new drug for an ailment you have, or your doctor recommends it, and you decide to research it to be on the safe side. Ultimately, you conclude it is safe and effective because everywhere you look, the information seems to support this conclusion. You feel good knowing you’ve done your homework, and fill the prescription. What you don’t know is that:

  • Facebook and Twitter pages speaking highly of the drug are run by individuals on the payroll of the drug company
  • The Wikipedia page for the drug is monitored and controlled by a special-interest editor hired by the drug company
  • Google search engine results have been optimized, ensuring you’ll find all those positive sources while burying contradicting information
  • The nonprofit organization you stumbled across online that recommends the drug was secretly founded and funded by the drug company
  • The positive study you found while searching online was also financed by the drug company
  • The news articles reporting the positive findings of that study sound suspiciously alike for a reason — they’re reiterating information provided by the drug company’s PR department; hence, you will not find any contradictory information there either
  • Doctors promoting the drug and making derogatory comments about those who worry about side effects are actually paid consultants for the drug company
  • The medical lecture your own personal doctor attended, where he became convinced the drug is safe and efficacious, was also sponsored by the drug company

How to Identify Astroturf

Believe it or not, this is just the tip of the iceberg. The extent of the control and manipulation goes even deeper than this. Even the U.S. government, regulatory agencies and public health organizations are colluding with industry in a variety of different ways.

So, what can you do? How can you possibly decipher the truth when the truth is so well-hidden beneath layers of astroturf? As noted by Attkisson, recognizing the telltale signs of astroturf is key. And once you know what to look for, you’ll start to recognize it everywhere you look. Telltale signs and hallmarks of astroturf include the following:

  • Certain key message lines repeatedly crop up. For example, the line “talk to your doctor” is highly suggestive of a PR message for a drug, even if what you’re reading doesn’t look like an advertisement
  • Use of inflammatory and derogatory language. Keywords to look for include crank, quack, nutty, lies, paranoid, pseudo and conspiracy
  • Astroturfers will often claim to debunk “myths” that are not myths at all
  • They will attack people, personalities and organizations rather than address the facts or concerns in question
  • Astroturfers are skeptical of those exposing wrongdoing rather than the wrongdoers. As noted by Attkisson, rather than questioning authority, astroturfers question those who question authority
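One of the telltale signs above, the use of inflammatory keywords, lends itself to a simple automated check. The sketch below is purely illustrative: the term list is taken from Attkisson's examples, but the whole-word matching approach is my own assumption, not a vetted detection tool, and flagged language is only a hint that warrants a closer look, never proof of astroturfing.

```python
# Illustrative sketch: flag text containing the inflammatory terms listed
# above as hallmarks of astroturf. Matching whole words only, so "pseudo"
# will not match inside "pseudonym". A hit is a prompt for scrutiny, not
# a verdict.
import re

LOADED_TERMS = ["crank", "quack", "nutty", "lies", "paranoid", "pseudo", "conspiracy"]

def flag_loaded_language(text: str) -> list[str]:
    """Return the loaded terms that appear as whole words in `text`."""
    lowered = text.lower()
    found = []
    for term in LOADED_TERMS:
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append(term)
    return found

# Example: a comment that attacks a person rather than addressing facts.
sample = "Only a paranoid crank would believe this conspiracy."
print(flag_loaded_language(sample))  # ['crank', 'paranoid', 'conspiracy']
```

A real analysis would of course weigh context as well; the point of the sketch is simply that repeated loaded vocabulary is measurable, which is why Attkisson lists it as a recognizable pattern.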

Astroturfing in Action

A perfect example of astroturfing occurred in 2015, when the American Council for Science and Health (ACSH) — a pro-GMO front group — attacked Dr. Mehmet Oz for reporting on the now scientifically established hazards of glyphosate.

Mainstream media swallowed and regurgitated the vicious propaganda without any critical thought whatsoever. Slate magazine publicized the attack with the headline “Letter from Prominent Doctors Implies Columbia Should Fire Dr. Oz for Being a Quack.”

The letter accuses Oz of repeatedly showing “disdain for science and for evidence-based medicine, as well as baseless and relentless opposition to the genetic engineering of food crops.” The letter was signed by Dr. Henry I. Miller and nine other “distinguished physicians.”

What the media failed to address is that Miller is a well-known shill for the GMO industry. In his capacity as its frontman, he was caught misrepresenting himself during the Anti-Prop 37 campaign in 2012, pretending to be a Stanford professor opposing GMO labeling, when in fact he is not a professor at Stanford.2 The TV ad had to be pulled off the air because of this misrepresentation.

Learn to Identify Shills and Front Groups

Miller also has a long history of defending toxic chemicals such as DDT (dichlorodiphenyltrichloroethane), in addition to defending Big Tobacco. He’s even penned articles suggesting radioactive fallout might be beneficial for health, while claiming “Organic agriculture is to the environment what cigarette smoking is to human health” — apparently momentarily forgetting he’s defended the safety of cigarette smoking.3

Miller’s true colors were also revealed in August 2017, when he was fired by Forbes magazine for submitting articles ghostwritten by Monsanto. The evidence4 against Miller emerged during the court-ordered discovery process of a class-action lawsuit against Monsanto by people who claim they developed Non-Hodgkin lymphoma as a result of glyphosate exposure (the active ingredient in Monsanto’s Roundup herbicide, used by farmers and home gardeners alike).

The documents, more than 700 pages in all, were posted online by the law firm Baum Hedlund Aristei & Goldman (now known as Wisner Baum).5 Faced with evidence they’d published material under Miller’s name that was in fact ghostwritten by Monsanto, Forbes not only fired Miller but also removed all of his work from their site. Some of the other nine physicians who signed the letter against Oz are also less than distinguished. As noted by U.S. Right to Know:6

“One was stripped of his medical license in New York and sent to federal prison camp for Medicaid fraud. Yet Dr. Gilbert Ross plays up his M.D. credentials in his role as acting president of [ACSH]. Ross was joined on the Columbia letter by ACSH board member Dr. Jack Fisher.

So what is ACSH? Though some reporters treat it as an independent science source, the group has been heavily funded by oil, chemical and tobacco companies, and has a long history of making inaccurate statements about science that directly benefit those industries — for example, claiming that secondhand smoke isn’t linked to heart attacks [or that] fracking doesn’t pollute water …

These facts are relevant in stories about scientific integrity. The scientific accuracy and motivations of the accusers matter when they are publicly challenging the scientific accuracy and motivations of somebody they are trying to get fired. We urge reporters and editors to take a closer look at the sources selling them story ideas, and to act as better watchdogs for the public interest.”

In short, the attack on Oz was orchestrated not by “concerned physicians” but rather by industry shills whose job it is to attack anyone who embraces a more natural approach to health or raises damning questions that might hurt the industry’s bottom line.

Corporate Conflicts of Interest Killed Investigative Journalism

In 2015, I interviewed Attkisson about her book, “Stonewalled,” and the decline of true investigative journalism. I’ve included that interview again for your convenience. It delves a lot further into the issues raised in her 10-minute TEDx Talk.

For example, direct-to-consumer drug advertising has created a situation where drug companies in particular wield enormous power over media. The industry spends billions of dollars each year on advertising, and this financial windfall is typically enough for any media outlet to bend to its advertisers’ whims and desires.

Attkisson refers to this as “soft censorship.” It’s when a media outlet’s sponsors wield power at the corporate level over the types of stories and topics journalists are allowed to cover, and the slant they must take when doing so. It’s important to realize that you simply will not get the truth from the media on certain topics for this very reason.

My mission is to arm you with information that is not easily obtainable in the mainstream media about things that influence your health, for better or worse. All of my articles are carefully referenced and I rely on peer-reviewed published science and firsthand interviews with experts in various fields. My advice to you is to develop a keen eye for the hallmarks of astroturfing, and to dig deeper when faced with claims that “the science is settled.” More often than not, it’s anything but.

from:  https://articles.mercola.com/sites/articles/archive/2024/03/29/astroturfing-media-manipulation.aspx?ui=f460707c057231d228aac22d51b97f2a8dcffa7b857ec065e5a5bfbcfab498ac&sd=20211017&cid_source=dnl&cid_medium=email&cid_content=art3HL&cid=20240329_HL2&foDate=true&mid=DM1549869&rid=2082323638