Mon. Dec 23rd, 2024
Google Cites 'Data Gaps' and 'Fake Screenshots' for AI Overview

A week after hundreds of users reported issues with Google’s new AI Overviews feature for Search, the tech giant has clarified what went wrong. Google cited “missing data” and edge cases for the poor quality of AI-generated search results. In the days after the feature rolled out last week, many users reported seeing inconsistent AI-generated summaries for their search queries.

The feature, currently accessible only to US users, has been making waves after it showed strange and irrelevant AI-generated summaries. Google says the main purpose of the feature is to provide a better search experience; however, the AI produced some strange results. Google quickly acknowledged the issue and removed the inaccurate AI-generated results.

Liz Reid, vice president and head of Google Search, argued in a blog post that the inaccurate results, along with the large number of fake screenshots that were widely shared, were due to missing data. Reid said that while AI Overviews don’t usually hallucinate, they can sometimes misinterpret what’s already on the web.

Reid said in a blog post that the tech giant tested the feature thoroughly before releasing it. “This included thorough red teaming efforts, evaluation with a sample of representative user queries, and testing performance on a portion of our search traffic. However, nothing beats millions of people using the feature with many new searches. We also saw nonsensical new searches that appeared to be designed to produce false results,” the post read.

What went wrong?

Reid cited a number of widely shared fake screenshots on topics such as leaving dogs in cars, smoking while pregnant and depression, and urged users who came across them to run the searches and check for themselves. But she acknowledged that some searches had turned up results that were strange, inaccurate or unhelpful. “These were generally searches that people wouldn’t normally make, but they highlighted certain areas where we needed to improve.”


The blog also notes where the feature falls short: AI Overviews can’t interpret gibberish queries or satirical content, such as “How many rocks should I eat?” – a question no one had asked before the screenshot went viral.

According to Reid, there isn’t much content on the web that seriously addresses that question. This is called a data void or information gap: a topic for which limited quality content exists. In this particular case, there was satirical content on the topic, and AI Overviews linked to it. In other instances, AI Overviews featured sarcastic or trollish content from discussion forums.

While Google considers forums a great source of reliable, first-hand information, they sometimes offer useless or bizarre advice, such as using glue to stick cheese to pizza. AI Overviews also showed examples of misinterpreting language on web pages.

AI Overview Improvements

Google said that based on examples from last week, it was able to identify patterns where AI Overviews did not work properly, and that it has made more than a dozen technical improvements to the system. These include better mechanisms for detecting nonsensical queries, updated systems to limit the use of user-generated content in responses, trigger restrictions for queries where AI Overviews would not be helpful, and stronger guardrails for topics like news and health.

Beyond these improvements, Reid said Google is closely monitoring feedback and external reports, and taking action against a small number of AI summaries that violate its content policies: “We’ll continue to improve when and how we show AI summaries to add extra protections, including for edge cases. We’re incredibly grateful for your ongoing feedback.”