

'Tool for grifters': AI deepfakes push bogus sexual cures
Holding an oversized carrot, a brawny, shirtless man promotes a supplement he claims can enlarge male genitalia -- one of countless AI-generated videos on TikTok peddling unproven sexual treatments.
The rise of generative AI has made it easy -- and financially lucrative -- to mass-produce such videos with minimal human oversight, often featuring fake celebrity endorsements of bogus and potentially harmful products.
In some TikTok videos, carrots are used as a euphemism for male genitalia, apparently to evade content moderation policing sexually explicit language.
"You would notice that your carrot has grown up," the muscled man says in a robotic voice in one video, directing users to an online purchase link.
"This product will change your life," the man adds, claiming without evidence that the herbs used as ingredients boost testosterone and send energy levels "through the roof."
The video appears to be AI-generated, according to a deepfake detection service recently launched by the Bay Area-headquartered firm Resemble AI, which shared its results with AFP.
"As seen in this example, misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers' health at risk," Zohaib Ahmed, Resemble AI's chief executive and co-founder, told AFP.
"We're seeing AI-generated content weaponized to spread false information."
- 'Cheap way' -
The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia, a deception-filled online universe designed to manipulate unsuspecting users into buying dubious products.
The dubious offerings include everything from unverified -- and in some cases potentially harmful -- dietary supplements to weight loss products and sexual remedies.
"AI is a useful tool for grifters looking to create large volumes of content slop for a low cost," misinformation researcher Abbie Richards told AFP.
"It's a cheap way to produce advertisements," she added.
Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, has observed a surge of "AI doctor" avatars and audio tracks on TikTok that promote questionable sexual remedies.
Some of these videos, many with millions of views, peddle testosterone-boosting concoctions made from ingredients such as lemon, ginger and garlic.
More troublingly, rapidly evolving AI tools have enabled the creation of deepfakes impersonating celebrities such as actress Amanda Seyfried and actor Robert De Niro.
"Your husband can't get it up?" Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, appears to ask in a TikTok video promoting a prostate supplement.
But the clip is a deepfake that uses Fauci's likeness to fabricate the endorsement.
- 'Pernicious' -
Many manipulated videos are created from existing ones, modified with AI-generated voices and lip-synced to match what the altered voice says.
"The impersonation videos are particularly pernicious as they further degrade our ability to discern authentic accounts online," Mantzarlis said.
Last year, Mantzarlis discovered hundreds of ads on YouTube featuring deepfakes of celebrities -- including Arnold Schwarzenegger, Sylvester Stallone, and Mike Tyson -- promoting supplements branded as erectile dysfunction cures.
The rapid pace of generating short-form AI videos means that even when tech platforms remove questionable content, near-identical versions quickly reappear -- turning moderation into a game of whack-a-mole.
Researchers say this creates unique challenges for policing AI-generated content, requiring novel solutions and more sophisticated detection tools.
AFP's fact checkers have repeatedly debunked scam ads on Facebook promoting treatments -- including erectile dysfunction cures -- that use fake endorsements by Ben Carson, a neurosurgeon and former US cabinet member.
Yet many users still treat the endorsements as genuine, underscoring how persuasive such deepfakes can be.
"Scammy affiliate marketing schemes and questionable sex supplements have existed for as long as the internet and before," Mantzarlis said.
"As with every other bad thing online, generative AI has made this abuse vector cheaper and quicker to deploy at scale."