How AI Turned a Ukrainian YouTuber Into a Russian
Image caption: Olga Loyek has seen her face appear in numerous videos on Chinese social media

  • Author: Fan Wang
  • Role: BBC News

“I don’t want anyone to think I’ve ever said such a terrible thing in my life. Using a Ukrainian girl as a face to promote Russia. That’s crazy.”

Olga Loyek has seen her face appear in various videos on Chinese social media thanks to easy-to-use generative AI tools available online.

“I could see faces and hear voices. But it was all so creepy because I saw myself saying things I never said,” says the 21-year-old University of Pennsylvania student.

Accounts featuring her likeness had dozens of different names, including Sophia, Natasha, April, and Stacey. These “girls” spoke Mandarin, a language Olga has never learned. They appeared to be from Russia, talked about Sino-Russian friendship, and promoted Russian products.

“I saw about 90% of the videos talking about China and Russia, Sino-Russian friendship, how we must be strong allies, and food advertising.”

One of the largest accounts was “Natasha Imported Foods”, which had more than 300,000 followers. “Russia is the best country. It’s sad that other countries are starting to turn their backs on Russia, while Russian women want to come to China,” Natasha said, before going on to promote products such as Russian candies.

This personally infuriated Olga, whose family is still in Ukraine.

But on a broader level, her case draws attention to the dangers of rapidly evolving technology and the real challenge of regulating it and protecting people.

From YouTube to Xiaohongshu

Olga’s Chinese-speaking AI lookalikes began appearing in 2023, shortly after she started her YouTube channel, which she does not update very regularly.

About a month later, she started receiving messages from people who claimed to have seen her speaking Mandarin on Chinese social media platforms.

Intrigued, she started searching for herself and found AI-generated likenesses of herself on Xiaohongshu (an Instagram-like platform) and Bilibili (a YouTube-like video site).

“There were many [accounts]. Some of the profiles included things like the Russian flag,” said Olga, who has so far found around 35 accounts using her likeness.

After her fiancé tweeted about these accounts, HeyGen, the company whose tools she says were used to create the AI likenesses, responded.

The company revealed that more than 4,900 videos had been generated using her face, and said it had blocked her image from being used in the future.

A company spokesperson told the BBC that its systems had been hacked and so-called “illicit content” had been created, adding that it immediately updated its security and verification protocols to prevent further abuse of its platform.

But Angela Zhang of the University of Hong Kong said what happened to Olga was “a common occurrence in China”.

She said the country is home to “a vast underground economy that specializes in forging, misappropriating, and creating deepfakes of personal data.”

This is despite China being one of the first countries to attempt to regulate AI and the purposes for which it is used. The Civil Code was also revised to protect portrait rights from digital fabrication.

According to statistics released by the Public Security Bureau in 2023, authorities arrested 515 people for “AI face swapping” activities. Chinese courts have also handled cases in this area.

Image caption: Olga discovered about 35 accounts using her likeness

So how did so many videos of Olga end up online?

One reason may be that they promoted the idea of friendship between China and Russia.

Beijing and Moscow have grown significantly closer in recent years. Chinese leader Xi Jinping and Russian President Vladimir Putin said there are “no limits” to the friendship between their countries. The two men are scheduled to meet in China this week.

“While it is unclear whether these accounts were working together with a common purpose, promoting messages that are in line with government propaganda certainly does not hurt them,” said Emmie Hine, a law and technology researcher at the universities of Bologna and Leuven.

“Even if these accounts are not explicitly linked to the CCP [Chinese Communist Party], promoting an aligned message may make their posts less likely to be taken down.”

But experts warn that this means ordinary people like Olga remain vulnerable, and could even fall foul of Chinese law.

Kayla Blomquist, a technology and geopolitics researcher at the University of Oxford, warns that individuals are at risk of being “framed with artificially generated and politically sensitive content” and of being subjected to “swiftly enacted penalties” without due process.

She added that the Chinese government’s focus in AI and online privacy policy has been on building up consumer rights against predatory private actors, but emphasized that “people’s rights in relation to their government remain extremely weak”.

“The fundamental goal of China’s AI regulation is to balance maintaining social stability with promoting innovation and economic development,” Hine explains.

“While the regulations appear strict on the books, there is evidence that the rules, particularly generative AI licensing rules, are being selectively enforced. This may be aimed at creating a more innovation-friendly environment, with the tacit understanding that the law provides grounds for enforcement if necessary,” she said.

“Not the last victim”

Image caption: As a Ukrainian, Olga was personally infuriated by the AI-generated videos

But the impact of Olga’s case extends far beyond China. This illustrates the difficulty of trying to regulate an industry that seems to be evolving at breakneck speed and where regulators are constantly playing catch-up. But that doesn’t mean they aren’t trying.

In March, the European Parliament approved the AI Act, the world’s first comprehensive framework for curbing the risks of the technology. And last October, US President Joe Biden announced an executive order requiring AI developers to share data with the government.

Regulation at the national and international level is moving slowly compared with the rapid pace of AI development, and requires “a clearer understanding and stronger consensus about the most dangerous threats and how to mitigate them”, Blomquist says.

“However, disagreements within and between countries are preventing concrete action. The United States and China are key players, but building consensus and coordinating the necessary joint action will be difficult,” she added.

Meanwhile, on a personal level, there seems to be little that can be done other than not posting anything online.

“The only thing to do is not give them material to work with. Don’t upload photos, videos, or audio of yourself to public social media,” Hine said. “But bad actors will always have an incentive to imitate others, so even if governments crack down, we can expect this to keep growing in a game of regulatory whack-a-mole.”

Olga is “100% sure” that she will not be the last victim of generative AI. But she is determined not to be driven off the internet.

She shared her experience on her YouTube channel, and some Chinese internet users have helped her by commenting under the videos that use her likeness, pointing out that they are fake.

She added that many of these videos have now been deleted.

“I wanted to share my story. I wanted people to understand that not everything they see online is real,” she says. “I love sharing my ideas with the world, and no scammer can stop me from doing that.”