Mon. Dec 23rd, 2024
A Woman's Face Is Stolen In An AI Ad Promoting An Erectile Dysfunction Supplement

Michel Janse was on her honeymoon when she found out she had been cloned.

The 27-year-old content creator was with her husband in a rented cabin in snowy Maine when messages from her followers began trickling in, warning that a YouTube commercial was using her likeness to promote an erectile dysfunction supplement.

In the commercial, Janse (a Christian social media influencer who posts about travel, home decor, and wedding planning) appears in her real bedroom, wearing her real clothes, but describing a nonexistent partner with sexual health problems.

“Michael spent many years having great difficulty getting and maintaining an erection,” her doppelganger says in the ad.

Scammers appear to have stolen and manipulated her most popular video (an emotional account of her earlier divorce), likely using a new wave of artificial intelligence tools that make it easy to create realistic deepfakes, an umbrella term for media altered or created with AI.

With just a few seconds of footage, scammers can now combine video and audio using tools from companies like HeyGen and Eleven Labs to generate a synthetic version of a real person's voice, swap out the audio on an existing video, and animate the speaker's lips, making the falsified result more believable.

Because it is simpler and cheaper to base fake videos on real content, bad actors are scooping up videos on social media that match the demographic of a sales pitch, and experts predict an explosion of ads made with stolen identities.

Over the past six months, celebrities including Taylor Swift, Kelly Clarkson, Tom Hanks, and YouTube star MrBeast have had their likenesses used to hawk deceptive diet supplements, dental plan promotions, and iPhone giveaways. But as these tools proliferate, even people with modest social media presences are facing a similar type of identity theft: having their faces and words twisted by AI to push often offensive products and ideas.

Online criminals and state-sponsored disinformation operations are essentially "running a small business, where each attack costs money," said Lucas Hansen, co-founder of CivAI, a nonprofit that raises awareness about the risks of AI. But given cheap promotional tools, "the volume is going to drastically increase."

Ben Colman, co-founder and CEO of Reality Defender, a company that helps businesses and governments detect deepfakes, said the technology requires only a small sample to work.

"Audio, video, and images that exist in the public domain, even for just a few seconds, can be easily cloned, altered, or outright fabricated to make it appear as if something entirely unique happened," Colman wrote in a text message.

Videos are difficult to search for and can spread quickly, meaning victims are often unaware that their likenesses are being used.

By the time Olga Loiek, a 20-year-old University of Pennsylvania student, discovered she had been cloned for an AI video, nearly 5,000 videos had spread across Chinese social media sites. Some of the videos used HeyGen's AI cloning tool, according to recordings of direct messages Loiek shared with The Washington Post.

In December, Loiek saw a video featuring a girl who looked and sounded exactly like her. It was posted on Little Red Book, China's version of Instagram, and the clone spoke Mandarin, a language Loiek does not know.

In one video, Loiek, who was born and raised in Ukraine, saw her clone, named Natasha, positioned in front of the Kremlin, saying "Russia was the best country in the world" and praising President Vladimir Putin. "I felt very violated," Loiek said in an interview. "These are things I would never do in my life."

Here, a fake AI clone of Olga Loiek is seen speaking Mandarin. (Video: Obtained by The Washington Post)

Representatives for HeyGen and Eleven Labs did not respond to requests for comment.

Efforts to prevent this new kind of identity theft have been slow. Cash-strapped police departments are ill-equipped to pay for pricey cybercrime investigations or to train dedicated officers, experts said. No federal deepfake law exists, and while more than three dozen state legislatures are pushing ahead on AI bills, proposals to regulate deepfakes are largely limited to political ads and nonconsensual pornography.

Danielle Citron, a professor at the University of Virginia who began sounding the alarm about deepfakes in 2018, said it is no surprise that the technology's next frontier targets women.

Some state civil rights laws restrict the use of people's faces and likenesses in advertising, but litigation is expensive, and AI fraudsters around the world play "jurisdictional games," Citron said.

Some victims whose social media content has been stolen say they are left feeling helpless, with limited recourse.

YouTube said this month that it is still working on allowing users to request the removal of AI-generated or other synthetic or altered content that "simulates an identifiable individual, including their face and voice," a policy the company first promised in November.

"As in this case, we have made significant investments in our ability to detect and remove deepfake scam ads and the bad actors behind them," spokesperson Nate Fankhauser said in a statement. "Our latest ads policy update allows us to take faster action to suspend the accounts of the perpetrators."

Janse's management company was able to get YouTube to remove the ad quickly.

But tracking deepfake ads or identifying the culprit can be difficult for those with fewer resources.

Janse's fake video directed people to a website copyrighted by an entity called Vigor Wellness Pulse. The site was created this month and registered to an address in Brazil, according to Groove Digital, a Florida-based marketing tools company that offers free websites and was used to create the landing page.

The page redirects to a lengthy video letter that splices together snippets of hardcore porn and cheesy stock footage. The pitch is narrated by an unhappy divorced man who meets a retired urologist turned playboy with a secret fix for erectile dysfunction: Boostaro, a supplement sold in capsule form.

Groove CEO Mike Filsaime said the service does not allow adult content, and that the landing page, which hosted none itself, evaded the company's detectors because it contained nothing inappropriate.

Filsaime, an AI enthusiast and self-proclaimed "Michael Jordan of marketing," suggested that scammers scour social media sites for popular videos to repurpose for their own ends.

But the video stolen from Carrie Williams had fewer than 1,500 likes and was far from her most popular.

Last summer, the 46-year-old human resources executive from North Carolina received a Facebook message out of the blue. An old friend sent her a screenshot and asked, "Is this you?" The friend warned her that it was promoting an erection-enhancing technique.

The audio paired with Carrie Williams's face in the fake AI video was taken from a video ad featuring adult film actress Lana Smalls. (Video: The Washington Post)

Williams recognized the screenshot instantly. It was from a TikTok video she had posted in 2020, offering advice to her teenage son as she faced kidney and liver failure.

She spent hours scouring the site where her friend claimed to have seen the ad, but found nothing.

Williams gave up searching for the ad last year, but The Post identified her from a Reddit post about deepfakes. She watched the ad, which ran on YouTube, for the first time last week in her hotel room on a business trip.

The 30-second spot, which discusses men's penis sizes, is grainy and badly edited. "She may be happy with you, but deep down she's definitely in love with someone bigger," the fake Williams says, in audio taken from a YouTube video of adult film actress Lana Smalls.

After questions from The Post, YouTube suspended the advertiser accounts associated with the deepfake of Williams. A representative for Smalls did not respond to a request for comment.

Williams was shocked. Although the quality was poor, the ad was more explicit than she had feared. She worried about her 19-year-old son: "If he saw it, or if his friends saw it, it would be very mortifying," she said.

“Never in a million years did I think someone would be able to be me,” she said. “I’m just a mom from North Carolina living my life.”

Heather Kelly and Samuel Oakford contributed to this report.