Nov. 7 (Reuters) – (This article has been re-edited to elaborate on AGI in the fifth paragraph)
Amazon (AMZN.O) is spending millions of dollars training an ambitious large language model (LLM) in hopes of rivaling top models from OpenAI and Alphabet (GOOGL.O), two people familiar with the matter told Reuters.
The model, codenamed “Olympus,” has 2 trillion parameters and could be one of the largest models ever trained, the people said. OpenAI’s GPT-4, one of the best models available, reportedly has 1 trillion parameters.
The people spoke on condition of anonymity because project details have not yet been made public.
Amazon declined to comment. The Information reported the project name on Tuesday.
The team is spearheaded by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy. As Amazon’s head scientist for artificial general intelligence (AGI), Prasad brought in researchers who had worked on Alexa AI and the Amazon science team to train the model, uniting the company’s AI efforts with dedicated resources.
Amazon has already trained smaller models such as Titan. It has also partnered with AI model startups such as Anthropic and AI21 Labs, offering their models to Amazon Web Services (AWS) users.
Amazon believes having its own models could make its offerings more attractive on AWS, where enterprise customers want access to the top-performing models, the people said, adding that there is no specific timeline for releasing the new model.
LLMs are the technology underlying AI tools that learn from huge datasets to generate human-like responses.
Training larger AI models is more expensive given the computing power required. On an April earnings call, Amazon executives said the company would increase investment in LLMs and generative AI while cutting fulfillment and transportation costs in its retail business.
Reporting by Krystal Hu in San Francisco; Editing by Gerry Doyle
Our Standards: The Thomson Reuters Trust Principles.