Meta has released five new artificial intelligence (AI) research models, including one that can generate both text and images, as well as one that can detect AI-generated speech within larger audio snippets.
The models were unveiled Tuesday (June 18) in a press release from Meta’s Fundamental AI Research (FAIR) team.
“By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way,” Meta said in the release.
One of the new models, Chameleon, is a family of mixed-modal models that can understand and generate both images and text, according to the release. These models can take inputs that contain both text and images and output a combination of text and images. Meta suggested in the release that this functionality could be used to generate captions for images or to create new scenes using both text prompts and images.
Meta also announced Tuesday a set of pretrained models for code completion. According to the release, these models were trained using Meta’s new multi-token prediction approach, in which a large language model (LLM) is trained to predict multiple future words at once rather than one word at a time, as in the traditional approach.
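The difference between the two training setups can be illustrated with a toy sketch. This is not Meta’s implementation; it only shows how the training targets differ: with standard next-token prediction each context is paired with one future token, while with multi-token prediction each context is paired with the next k tokens.

```python
# Toy illustration (not Meta's code): training targets for standard
# next-token prediction vs. multi-token prediction.

def next_token_pairs(tokens):
    """Standard LM training: each context predicts exactly one future token."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def multi_token_pairs(tokens, k):
    """Multi-token prediction: each context predicts the next k future
    tokens at once (conceptually, one target per prediction head)."""
    return [(tokens[:i], tokens[i:i + k])
            for i in range(1, len(tokens) - k + 1)]

sentence = ["the", "model", "writes", "code", "fast"]
print(next_token_pairs(sentence)[0])      # (['the'], 'model')
print(multi_token_pairs(sentence, 2)[0])  # (['the'], ['model', 'writes'])
```

In a real LLM, the multi-token targets are predicted by parallel output heads sharing one backbone, which is where the reported training-efficiency gains come from.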
The third new model, JASCO, offers more control over AI music generation. Rather than relying mainly on text inputs to generate music, it can also accept chords and beats as inputs. According to the release, this capability makes it possible to incorporate both symbols and audio into a single text-to-music generation model.
Another new model, AudioSeal, uses audio watermarking technology that enables localized detection of AI-generated audio, and can pinpoint AI-generated segments within larger audio snippets, according to the release. The model detects AI-generated audio up to 485 times faster than traditional methods.
The fifth new AI research model, launched by Meta’s FAIR team on Tuesday, is designed to increase geographic and cultural diversity in text-to-image generation systems, the release said. For this task, the company released geographic variance evaluation code and annotations to improve the evaluation of text-to-image models.
Meta said in its April earnings report that spending on artificial intelligence and its metaverse development division, Reality Labs, is expected to be in the range of $35 billion to $40 billion by the end of 2024, $5 billion higher than it earlier forecast.
“We’re building a number of different AI services, from AI assistants to augmented reality apps and glasses, to APIs [application programming interfaces] that allow creators to engage their communities and interact with their fans, to business AIs that we think will eventually be used by all the companies on our platform,” Meta CEO Mark Zuckerberg said during the company’s quarterly earnings call April 24.