Google's AI Overview Problems May Be Unfixable, Experts Say

Happy Wednesday! Send your news tips to [email protected].

Google’s AI search problem may never be fully solved.

It’s not unusual for exciting new tech features to debut with bugs, but at least some of the issues with Google’s new generative AI-powered search answers may not be fixable anytime soon, five AI experts told Tech Brief on Tuesday.

Last week, Google’s new “AI Overviews” made headlines for all the wrong reasons. Touted as the future of online search, the feature sees Google’s software answering users’ questions directly rather than just linking them to relevant websites, but it spits out answers that range from ridiculous to dangerous. (No, geologists aren’t recommending eating one tiny rock a day, and no, don’t put glue on your pizza.)

Google initially downplayed the problem, saying the majority of its AI-generated summary searches were “high quality” and that some of the examples circulating on social media were probably fake. But the company also acknowledged that it was manually removing at least some of the problematic results, a daunting task for a site that processes billions of queries a day.

“AI Overview is designed to surface high-quality information, supported by results from across the web, with prominent links to learn more,” Google spokesperson Ned Adriance said on Tuesday. “As with other features we’ve released in Search, we’re using your feedback to drive broader improvements to the system, some of which have already started rolling out.”

But experts say the problems with AI-generated answers go beyond anything a simple software update can fix.

“All large language models, by their very structure, are inherently and hopelessly unreliable narrators,” said Grady Booch, a prominent computer scientist. At a fundamental level, they’re designed to generate answers that sound coherent, not answers that are true. “So they can’t simply be ‘fixed,’” he said, because fabricating facts is “an unavoidable property of their operation.”

Booch said that at best, companies that use large language models to answer questions can take steps to “prevent the madness,” or they can “deploy a ton of cheap human labor to cover up their worst lies.” But as long as Google and other tech companies use generative AI to answer search queries, he predicted, false answers will continue to happen.


Arvind Narayanan, a computer science professor at Princeton University, agreed that “the tendency of large language models to generate inaccurate information is unlikely to be corrected in the near future.” But he also said Google is making avoidable mistakes with its AI Overviews feature, such as retrieving and summarizing results from low-quality web pages and The Onion.

With AI Overviews, Google is trying to combat language models’ well-known tendency to make things up by having them cite and summarize specific sources.

But various things can still go wrong, said Melanie Mitchell, a professor at the Santa Fe Institute who studies complex systems. “One is that the system can’t necessarily tell whether a particular source is giving a reliable answer to a question, likely because it can’t understand the context,” she said. “The other is that even if it finds a good source, it can misinterpret what that source is saying.”

This isn’t just a problem for Google, she said. Other AI tools, like OpenAI’s ChatGPT or Perplexity, might not get the same answers wrong as Google, but they’ll get other answers wrong that Google gets right. “We don’t have an AI yet that can do this in a more reliable way,” Mitchell said.

Still, some parts of the problem may prove more manageable than others.

The problem of “hallucinations,” in which a language model makes up something that isn’t in its training data, remains “unsolved,” said Niloofar Mireshghallah, a postdoctoral researcher in machine learning at the University of Washington. But she added that making sure a system draws only on trustworthy sources is more a problem of traditional search than of generative AI, one that could be partially “fixed” by adding fact-checking mechanisms.

It might also help to make the AI summaries less prominent in search results, said Usama Fayyad, executive director of the Institute for Experiential AI at Northeastern University.

“I’m not sure the summaries are ready for prime time,” he said, “which is good news for web publishers, by the way,” because it means users still have a reason to visit trusted sites rather than relying on Google for everything.

Mitchell said she expects Google’s AI answers to improve, but not enough to be truly reliable.

“If they say the majority is right, I’d believe them,” Mitchell said, “but their system is used by millions of people every day, so there are going to be cases where they’re horribly wrong and where it’s causing some harm.”

Narayanan said the “easiest way out of this mess” for the company might be to pay human fact-checkers to vet millions of the most common search queries. “Essentially, Google would become a content farm disguised as a search engine, laundering low-paid human labor with AI-sanctioned content,” he said.

Google CEO Sundar Pichai has also acknowledged the problem.

In an interview with The Verge last week, Pichai said that the tendency of large language models to produce falsehoods is in some ways an “inherent feature,” and that they are therefore “not necessarily the best approach to consistently get the facts.”

But incorporating them into search engines can “ground” answers in reality while directing users to the original source, he said. “They’ll still give you the wrong answer sometimes, but I don’t think you can look at that and underestimate how useful it is.”

The Biden administration rejected a plan to make TikTok safer (Drew Harwell)

US court to hear challenge to possible TikTok ban in September (Reuters)

Too small to monitor, too big to ignore: Telegram is the app dividing Europe (Bloomberg News)

OpenAI forms safety committee as it starts training new AI model (Pranshu Verma)

Former OpenAI director says board learned about ChatGPT’s launch on Twitter (Bloomberg News)

Anthropic hires former OpenAI safety chief as new team head (TechCrunch)

Google won’t comment on potential massive leak of search algorithm documentation (The Verge)

Google researchers say AI is now a major vector for disinformation (404 Media)

Journalists fight back against AI, and media executives make deals (Laura Wagner, Gerrit De Vynck)

Microsoft releases Copilot for Telegram (The Verge)

Samsung union plans first strike as wage negotiations stall (Bloomberg)

AI career coaches are here. Can you trust them? (Danielle Abril)

Elon Musk is feuding with Meta’s chief AI scientist (Gizmodo)

That’s all for today. Thank you for joining us! Please tell others to subscribe to Tech Brief. Get in touch with Cristiano (via email or social media) and Will (via email or social media) with tips, feedback or greetings.