Meta AI Chief Yann LeCun Is Skeptical About AGI and Quantum Computing
  • Facebook’s parent company Meta held a media event in San Francisco this week to highlight the 10th anniversary of its Fundamental AI Research (FAIR) team.
  • Yann LeCun, Meta’s chief scientist, said society is likely to adopt “cat-level” or “dog-level” AI years before human-level AI.
  • Unlike Google, Microsoft, and other tech giants, Meta isn’t making a big bet on quantum computing.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris on June 13, 2023.

Chesnot | Getty Images News | Getty Images

Yann LeCun, chief scientist at Meta and a deep learning pioneer, believes today’s AI systems lack the common sense that could push their abilities beyond merely summarizing piles of text in creative ways. He said he believes it will take decades before AI reaches anything like awareness.

His view contrasts with that of Nvidia CEO Jensen Huang, who recently said AI will be “fairly competitive” with humans within five years, beating them at many mentally intensive tasks.

“I know Jensen,” LeCun said at a recent event marking the 10th anniversary of the Fundamental AI Research team at Facebook’s parent company. LeCun said the Nvidia CEO has much to gain from the current AI craze. “There’s an AI war going on, and he’s supplying the weapons.”

“[The more] AGI you think you have, the more GPUs you need to buy,” LeCun said of engineers trying to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as that pursuit continues, it will require more of Nvidia’s computer chips.

LeCun said society is likely to adopt “cat-level” or “dog-level” AI years before human-level AI. And the technology industry’s current focus on language models and text data alone will not be enough to create the advanced human-like AI systems that researchers have been dreaming of for decades.

“Text is a very poor source of information,” LeCun said, explaining that it would probably take humans 20,000 years to read the amount of text used to train modern language models. “Even if you train a system with the equivalent of 20,000 years of reading material, it still won’t understand that if A is the same as B, then B is the same as A.”

“There are a lot of really basic things in the world that you can’t get with this kind of training,” LeCun said.

That’s why LeCun and other Meta AI executives have been hard at work exploring how the so-called transformer models used to create apps like ChatGPT can be tailored to handle a variety of data, including audio, image, and video information. The idea is that the more of these hidden correlations, perhaps billions of them, such AI systems can discover across different types of data, the more likely they are to be capable of more impressive feats.

Part of Meta’s research includes software that could help teach people to play tennis better while wearing the company’s Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed off a demo in which a tennis player wearing the AR glasses could see visual cues teaching them how to hold the racket correctly and swing their arm with perfect form. The AI models needed to power this kind of digital tennis assistant require three-dimensional visual data in addition to text and audio, in case the assistant needs to speak.

These so-called multimodal AI systems represent the next frontier, but their development won’t be cheap. And as more companies, including Meta and Google’s parent company Alphabet, research more advanced AI models, Nvidia could become even more dominant, especially if no other competitors emerge.

Nvidia has been the biggest beneficiary of generative AI, with its expensive graphics processing units becoming the standard tool used to train large language models. Meta relied on 16,000 Nvidia A100 GPUs to train its Llama AI software.

CNBC asked whether the tech industry will need more hardware providers as Meta and other researchers continue developing these kinds of sophisticated AI models.

“It’s not required, but it’s nice to have,” LeCun said, adding that GPU technology remains the gold standard when it comes to AI.

Still, he said, future computer chips may no longer be called GPUs.

“The new chips that we will hopefully see are simply neural deep learning accelerators rather than graphics processing units,” LeCun said.

LeCun is also somewhat skeptical about quantum computing, into which tech giants like Microsoft, IBM, and Google are pouring resources. Many researchers outside Meta believe quantum computing machines could significantly accelerate progress in data-intensive fields such as medicine, because they can perform multiple calculations using so-called qubits, as opposed to the traditional binary bits used in modern computing.

But LeCun has his doubts.

“A number of problems that can be solved with quantum computing can be solved much more efficiently using classical computers,” LeCun said.

“Quantum computing is an interesting scientific topic,” LeCun said, but the “practical relevance and the possibility of actually producing a useful quantum computer” is less clear.

Mike Schroepfer, a Meta senior fellow and the company’s former chief technology officer, agreed, saying he evaluates quantum technology every few years and believes useful quantum machines “may come out one day, but it’s such a long time period away that it has nothing to do with what we’re doing.”

“The reason we started the AI lab 10 years ago was that it was very clear this technology would be commercially viable within the next few years,” Schroepfer said.
