Gemini's Data Analytics Capabilities Aren't As Good As Google Claims

One of the selling points of Google’s flagship generative AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they can allegedly process and analyze. During press conferences and demos, Google has repeatedly claimed that these models can achieve previously impossible tasks, like summarizing multiple documents spanning hundreds of pages or searching for scenes in movies, thanks to “long context.”

But new research suggests that the models aren't actually very good at those things.

Two separate studies investigated how well Google’s Gemini models and other models could interpret huge amounts of data (such as works the length of “War and Peace”) and found that both Gemini 1.5 Pro and 1.5 Flash struggled to correctly answer questions about large data sets: in a series of document-based tests, the models got the answers right only 40% to 50% of the time.

“While models like Gemini 1.5 Pro can technically handle long contexts, we have seen numerous examples that show the models don’t actually ‘understand’ the content,” Marzena Karpinska, a postdoctoral researcher at the University of Massachusetts Amherst and co-author on one of the studies, told TechCrunch.

Gemini’s context window falls short

A model’s context, or context window, refers to the input data (e.g., text) that the model considers before generating an output (e.g., additional text). A simple question like “Who won the 2020 US Presidential election?” can act as context, as can a movie script, show, or audio clip. The larger the context window, the larger the documents that can fit into it.

The latest version of Gemini can ingest more than 2 million tokens as context. (“Tokens” are bits of raw data, like the syllables “fan,” “tas,” and “tic” in the word “fantastic.”) That’s the equivalent of about 1.4 million words, 2 hours of video, or 22 hours of audio — the most context of any model on the market.
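To make that word-to-token conversion concrete, here is a minimal sketch using the google-generativeai Python SDK to compare a document’s word count with its token count before sending it as context. The model name, file name, and exact response fields are assumptions and may differ across SDK versions.

```python
# Minimal sketch: compare word count vs. token count for a long document
# before using it as context. Assumes the google-generativeai SDK; the
# model name, file name, and response fields may differ by SDK version.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

with open("war_and_peace.txt", encoding="utf-8") as f:  # hypothetical file
    book = f.read()

# Tokens are sub-word chunks, so the token count typically exceeds the
# word count (roughly 1.4 million words maps to about 2 million tokens).
print("words: ", len(book.split()))
print("tokens:", model.count_tokens(book).total_tokens)
```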

During a briefing earlier this year, Google showed off a few pre-recorded demos to showcase the potential of Gemini’s long-context features, including one in which Gemini 1.5 Pro searches the transcript of the Apollo 11 moon landing broadcast (about 402 pages long) for humorous quotations and then finds a scene from the broadcast that resembles a pencil sketch.

Oriol Vinyals, vice president of research at Google DeepMind, who led the briefing, described the model as “magical.”

“[1.5 Pro] does these kinds of inference tasks across every page, across every word,” he said.

That may have been an exaggeration.

In one of the studies that benchmarked these capabilities, Karpinska, along with researchers from the Allen Institute for AI and Princeton University, asked the model to evaluate true and false statements about fiction books written in English. The researchers chose recent works so that the model couldn’t “cheat” by relying on prior knowledge, and they peppered the statements with specific details and plot references that would be difficult to understand without reading the entire book.

Given a statement like “Using his skills as an Apos, Nusis is able to reverse engineer the type of portal that is opened by the reagent key found in Rona’s crate,” Gemini 1.5 Pro and 1.5 Flash, having ingested the relevant book, had to say whether the statement was true or false and explain their reasoning.
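In practice, that setup amounts to a simple evaluation loop: feed the model the full book plus one claim, parse its TRUE/FALSE verdict, and score it against the human label. The sketch below is illustrative only, not the researchers’ code; the prompt wording, file names, data format, and answer parsing are assumptions.

```python
# Illustrative sketch of the claim-verification task described above; NOT
# the researchers' code. Prompt wording, file names, data format, and
# answer parsing are all assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

with open("recent_novel.txt", encoding="utf-8") as f:  # hypothetical book file
    book = f.read()

claims = [
    {  # one human-written statement per entry; the label here is a hypothetical gold value
        "statement": "Using his skills as an Apos, Nusis is able to reverse "
                     "engineer the type of portal that is opened by the reagent "
                     "key found in Rona's crate.",
        "label": True,
    },
]

def verify_claim(book_text: str, statement: str) -> bool:
    """Ask the model to judge a claim using only the book as context."""
    prompt = (
        f"{book_text}\n\nClaim: {statement}\n"
        "Based only on the book above, is this claim TRUE or FALSE? "
        "Start your answer with TRUE or FALSE, then explain why."
    )
    reply = model.generate_content(prompt).text
    return reply.strip().upper().startswith("TRUE")

correct = sum(verify_claim(book, c["statement"]) == c["label"] for c in claims)
print(f"accuracy: {correct / len(claims):.1%}")
```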


Testing with a single book of about 260,000 words (about 520 pages), the researchers found that 1.5 Pro answered true/false questions correctly 46.7% of the time, while Flash answered correctly only 20% of the time. This means a coin flip would answer questions about the book more accurately than Google’s latest machine learning models. Averaging across all benchmark results, neither model managed to beat random chance in question-answering accuracy.

“We found that the model had a harder time verifying claims that required consideration of large parts of the book, or even the entire book, compared to claims that could be resolved by obtaining text-level evidence,” Karpinska said. “Qualitatively, we also observed that the model struggled to verify claims about implicit information that was obvious to a human reader but not explicitly stated in the text.”

The second of the two studies, co-authored by researchers at the University of California, Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to “reason” over videos — that is, to search through them and answer questions about their content.

The co-authors created a dataset that combined images (such as a photo of a birthday cake) with questions for the model to answer about objects depicted in the image (such as “What cartoon character is on this cake?”). To evaluate the model, they randomly chose one image and inserted “distracting” images before and after it to create a slideshow-like video.
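Roughly speaking, each example pairs one relevant frame with a question about it and hides that frame among distractors. The sketch below is a guess at how such an example could be assembled; the file names, frame count, and field names are invented for illustration.

```python
# Illustrative sketch of building one "hidden frame" example as described
# above; file names, frame count, and field names are assumptions.
import random

def build_example(target_image, distractor_images, question, answer, n_frames=25):
    """Hide one relevant frame among distractors to form a slideshow-like video."""
    frames = random.sample(distractor_images, n_frames - 1)
    insert_at = random.randrange(n_frames)
    frames.insert(insert_at, target_image)
    return {
        "frames": frames,           # ordered images forming the "video"
        "question": question,       # asks about the target frame only
        "answer": answer,           # gold answer (hypothetical here)
        "target_index": insert_at,  # where the relevant frame was hidden
    }

example = build_example(
    target_image="birthday_cake.jpg",
    distractor_images=[f"distractor_{i:03d}.jpg" for i in range(200)],
    question="What cartoon character is on this cake?",
    answer="(hypothetical gold label)",
)
```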

Flash’s performance on this task wasn’t much better: in tests where the model was asked to transcribe six handwritten digits from a 25-image “slideshow,” Flash got about 50 percent of the transcriptions right, dropping to about 30 percent accuracy with eight digits.

“The real-world question-answering task on images seems particularly difficult for all the models we tested,” Michael Saxon, a PhD student at the University of California, Santa Barbara, and one of the study’s co-authors, told TechCrunch. “It may be that the subtle inference required to recognize that there are numbers in the frame and read them is what breaks the models.”

Google is overpromising with Gemini

Neither study has been peer reviewed, neither examined the Gemini 1.5 Pro and 1.5 Flash releases with 2-million-token contexts (both tested the 1-million-token releases), and Flash isn’t meant to match Pro’s performance; Google promotes it as a lower-cost alternative.

Still, both studies add fuel to the fire: Google overpromised with Gemini early on and has since fallen short of expectations. None of the models the researchers tested, including OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, performed well. But Google is the only model provider that gives the context window top billing in its advertisements.

“There’s nothing wrong with a simple claim that, based on objective technical details, ‘our model can process X number of tokens,'” Saxon says, “but the question is, what useful thing can you do with it?”

Generative AI in general has come under increased scrutiny as companies (and investors) grow frustrated with the technology’s limitations.

In two recent surveys by the Boston Consulting Group, roughly half of respondents (all C-level executives) said they don’t expect generative AI to significantly improve productivity and are concerned that tools powered by generative AI could lead to mistakes and data leaks. And according to one report, early-stage generative AI dealmaking has declined for the second consecutive quarter, plummeting 76% from its Q3 2023 peak.

Faced with meeting-summary chatbots that conjure up fictitious details about people and AI search platforms that are essentially plagiarism generators, customers are searching for promising differentiators. Google, which has been in a sometimes clumsy race to catch up with generative AI rivals, has been desperate to make Gemini’s context one of those differentiators.

But it appears the gamble was premature.

“There’s still no set way to actually show that ‘reasoning’ or ‘understanding’ of long documents is happening, and essentially each group putting out these models is making these claims based on their own ad hoc assessments,” Karpinska said. “Since we don’t know how long-context processing is implemented, and the companies don’t share these details, it’s hard to judge how realistic these claims are.”

Google did not respond to a request for comment.

Both Saxon and Karpinska believe the antidote to the hype around generative AI is better benchmarking and, likewise, more emphasis on third-party critique. Saxon points out that one common test of long context — the “needle in a haystack” that Google cites frequently in its marketing materials — only measures a model’s ability to retrieve specific information, like a name or number, from a dataset, but not to answer complex questions about that information.
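To see why that test is narrow, consider a toy version: plant one “needle” sentence in a long stretch of filler and ask the model to fetch it back. Passing says nothing about aggregating or reasoning over the rest of the text. The sketch below is not Google’s benchmark; the filler, needle, prompt, and client setup are assumptions, mirroring the earlier token-counting sketch.

```python
# Toy "needle in a haystack" retrieval check, not Google's benchmark.
# It only verifies that one planted fact can be pulled back out of a long
# context; filler text, needle, and prompt are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

needle = "The magic number is 48613."
filler = "The sky was grey and nothing of note happened. " * 20000  # long haystack
depth = len(filler) // 2  # bury the needle halfway into the context
haystack = filler[:depth] + needle + filler[depth:]

reply = model.generate_content(
    haystack + "\n\nWhat is the magic number mentioned above?"
).text
print("retrieved" if "48613" in reply else "missed")
```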

“All scientists and most engineers who use these models fundamentally agree that the existing benchmarking culture is broken,” Saxon says, “so it’s important for the public to understand that they should take these huge reports, with numbers like ‘overall general intelligence across benchmarks,’ with a pinch of salt.”