Google Pixel’s Face-Altering Photo Tool Sparks AI Manipulation Debate

Image caption: It’s never easy to capture everyone properly when taking a group photo (image source: Getty Images)

Cameras never lie. Except, of course, they do, and it seems to be happening more often with each passing day.

In the age of smartphones, on-the-fly digital editing to enhance photos has become commonplace, from boosting colors to fine-tuning light levels.

Now, a new class of smartphone tools powered by artificial intelligence (AI) is further deepening the conversation about what it means to photograph reality.

Google’s latest smartphones, the Pixel 8 and Pixel 8 Pro, released last week, go a step further than other devices: they use AI to help change people’s facial expressions in photos.

We’ve all been there: in a group shot, one person looks away from the camera or doesn’t smile. Google’s phones can now look through your photos and, using machine learning, mix and match expressions, pasting a person’s smile from another shot into the photo. Google calls it Best Take.
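
Google has not published how Best Take works internally, so the sketch below is only a loose illustration of the underlying idea of scoring candidate frames and keeping the best expression, not Google’s method. It uses OpenCV’s bundled Haar smile detector on a set of hypothetical burst frames (frame0.jpg and so on); the real feature goes further and composites the best face for each person into a single base frame.

```python
# Loose illustration of the "pick the best expression" idea behind
# Best Take -- NOT Google's actual method. Assumes hypothetical burst
# frames frame0.jpg..frame3.jpg exist in the working directory.
import cv2

# OpenCV ships with pretrained Haar cascades, including one for smiles.
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml"
)

def smile_score(path: str) -> int:
    """Count smile detections in a frame as a crude expression score."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    smiles = smile_cascade.detectMultiScale(
        gray, scaleFactor=1.7, minNeighbors=20
    )
    return len(smiles)

frames = [f"frame{i}.jpg" for i in range(4)]
best_frame = max(frames, key=smile_score)
print("Best take:", best_frame)
```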

The devices also let users erase, move, and resize unwanted elements in photos (from people to buildings), “filling in” the space left behind with a feature called Magic Editor. It does this using what is known as deep learning: an artificial intelligence algorithm that works out which textures should fill the gap by analyzing the visible surrounding pixels, drawing on knowledge gleaned from millions of other photos.
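
Magic Editor’s model is proprietary, but the fill-in-the-gap operation it performs, known as inpainting, is a long-standing technique. As a minimal sketch (the file names are assumptions), OpenCV’s classical inpainting function fills a masked region from the surrounding pixels; Google’s feature does the same job with deep learning instead.

```python
# Minimal sketch of object removal via inpainting. The file names are
# hypothetical, and cv2.inpaint is a classical algorithm standing in
# for the deep-learning model that Magic Editor actually uses.
import cv2

photo = cv2.imread("group_shot.jpg")
mask = cv2.imread("unwanted_region.png", cv2.IMREAD_GRAYSCALE)

# Non-zero mask pixels mark the element to erase (a person, a sign...).
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Telea's method propagates color and texture inward from the mask
# boundary; the radius sets how far it samples surrounding pixels.
result = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)

cv2.imwrite("group_shot_filled.jpg", result)
```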

It doesn’t have to be a photo taken with the device, either: the Pixel 8 Pro lets you apply Magic Editor or Best Take to any photo in your Google Photos library.

“It’s disgusting and creepy.”

For some observers, this raises new questions about how we take photos.

Andrew Pearsall, a professional photographer and senior lecturer in journalism at the University of South Wales, said there are dangers in AI manipulation.

“One simple manipulation, even for aesthetic reasons, can lead us down a dark path,” he says.

He said the risks are greater for those using AI in a professional context, but there are implications for everyone to consider.

“We have to be very careful about when we cross the line.

“It’s very worrying that you can now take a picture on your phone and instantly delete something from it. I think we’re moving into a kind of realm of fakery.”

Google’s Isaac Reynolds, who leads the team developing the company’s smartphone camera systems, told the BBC that the company takes the ethical considerations of consumer technology seriously.

He was quick to point out that features like Best Take aren’t “fabricating” anything.

Image caption: This photo was edited using Google’s AI Magic Editor to change the position and size of the people in the foreground

Camera quality and software are key for the company to compete with the likes of Samsung and Apple, and these AI features are seen as unique selling points.

And all of the reviewers who expressed concerns about the technology praised the camera system’s photo quality.

“Finally, everyone can take shots that look the way they want to look, and that’s something no smartphone camera or any camera could ever do,” Reynolds said.

“If there was a version [of the photo you’ve taken] where that person was smiling, we’ll show it to you. But if there wasn’t a version where they smiled, then, yeah, you won’t see that,” he explained.

For Reynolds, the final image becomes a “representation of the moment.” In other words, that particular moment may not have actually happened, but it’s the photo you wanted, created from multiple real-life moments.

“People don’t want reality”

Professor Rafal Mantiuk, a graphics and display expert at the University of Cambridge, said it was important to remember that the use of AI in smartphones was not about making photos look like reality.

“People don’t want to capture reality,” he says. “They want to capture beautiful images. The entire image processing pipeline on smartphones is aimed at producing images that look good, not real images.”

Because of their physical limitations, smartphones rely on machine learning to “fill in” information that isn’t present in the photo.

This improves zoom, improves low-light photos and, in the case of Google’s Magic Editor feature, adds elements that were never there or swaps in elements from other photos, such as replacing a frown with a smile.

Photo manipulation isn’t new; it’s as old as the art form itself. But thanks to artificial intelligence, augmenting reality has never been easier.

Earlier this year, Samsung faced criticism over the way deep learning algorithms were used to improve the quality of photos of the moon taken with its smartphones. Tests found that no matter how poor the initial image, it always produced a usable picture.

In other words, your photo of the moon is not necessarily the photo of the moon you were looking at.

The company acknowledged the criticism and said it was working to “reduce any potential confusion that may occur between the act of taking a picture of the real moon and an image of the moon.”

Regarding Google’s new technology, Reynolds said the company adds metadata (the digital footprint of an image) to photos, using industry standards, to flag whenever AI has been used.
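
The article doesn’t name which standard Google uses; IPTC’s photo-metadata vocabulary for algorithmic media is one such industry convention. As a much-simplified sketch of the general idea, the snippet below uses Python’s Pillow library to stamp plain EXIF tags onto a hypothetical edited photo, so any later reader can detect the flag.

```python
# Much-simplified sketch of flagging an AI-edited photo via metadata.
# Real provenance standards (e.g. IPTC fields) are richer; the plain
# EXIF tags and file names here are illustrative assumptions.
from PIL import Image

SOFTWARE = 0x0131     # standard EXIF "Software" tag
DESCRIPTION = 0x010E  # standard EXIF "ImageDescription" tag

img = Image.open("edited_photo.jpg")
exif = img.getexif()
exif[SOFTWARE] = "Example AI Editor 1.0"
exif[DESCRIPTION] = "Edited with generative AI tools"
img.save("edited_photo_tagged.jpg", exif=exif)

# Anyone can later inspect the flag:
tagged = Image.open("edited_photo_tagged.jpg").getexif()
print(tagged.get(DESCRIPTION))  # -> "Edited with generative AI tools"
```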

“This is an issue we’ve talked about internally, and at length, because we’ve been working on these things for years. It’s a conversation, and we listen to our users,” he says.

Google clearly believes users will agree: the AI capabilities of its new phones are at the heart of its advertising campaign.

So, is there a line Google won’t cross when it comes to image manipulation?

Reynolds said the debate over the use of artificial intelligence is too nuanced to simply draw a line in the sand and say it’s gone too far.

“As you get deeper into building features, you start to realize that a line is an oversimplification of what ends up being a very difficult, feature-by-feature decision,” he says.

While these new technologies raise ethical considerations about what is real and what is not, Professor Mantiuk said we also need to consider the limitations of our own eyes.

“We see vivid, colorful images because our brains can reconstruct information and even infer missing information,” he said.

“So while some people may complain that the camera is doing a ‘fake’ thing, the human brain is actually doing the same thing in a different way.”