Meta has announced Llama 4, the latest collection of its AI models, which now powers the Meta AI assistant on the web and in WhatsApp, Messenger, and Instagram. There are also two new models available for download from Meta or Hugging Face, including Llama 4 Scout, a small model capable of "fitting in a single Nvidia H100 GPU." Meta says it is still in the process of training Llama 4 Behemoth, which Meta CEO Mark Zuckerberg says "is the highest performing base model in the world."
According to Meta, Llama 4 Scout has a 10-million-token context window (an AI model's working memory) and beats Google's Gemma 3 and Gemini 2.0 Flash-Lite models, as well as the open-source Mistral 3.1, "across a broad range of widely reported benchmarks," all while fitting in a single Nvidia H100 GPU. Meta makes similar claims about the performance of its bigger model, Llama 4 Maverick, against GPT-4o and Google's Gemini 2.0 Flash, and says its results are comparable to DeepSeek V3 in coding and reasoning tasks while using "less than half the active parameters."
Meanwhile, Llama 4 Behemoth has 288 billion active parameters out of a total of 2 trillion parameters. Although it has not yet been released, Meta says Behemoth can outperform its rivals (in this case GPT-4.5 and Claude Sonnet 3.7) "on several STEM benchmarks."
For Llama 4, Meta says it switched to a "mixture of experts" (MoE) architecture, an approach that conserves resources by activating only the parts of the model needed for a given task. The company plans to discuss its future plans for AI models and products at its LlamaCon conference, which takes place on April 29.
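To make the mixture-of-experts idea concrete: the model's feed-forward layers are split into many "experts," and a small learned router sends each token to only a few of them, which is how a model can have far more total parameters than active ones (as with Behemoth's 288 billion active parameters out of roughly 2 trillion total). The sketch below is a minimal, hypothetical top-2 routing layer in PyTorch; the layer sizes, expert count, and routing scheme are illustrative assumptions, not Meta's actual Llama 4 implementation.

```python
# Illustrative sketch of a mixture-of-experts (MoE) layer with top-2 routing.
# Sizes, expert count, and routing are assumptions for illustration only;
# this is not Meta's Llama 4 implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # A learned router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        scores = F.softmax(self.router(x), dim=-1)           # (batch, seq, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        # Route each token through its chosen experts only; the remaining experts
        # (and their parameters) stay idle for that token.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e                # tokens assigned to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoELayer()
tokens = torch.randn(1, 4, 512)
print(layer(tokens).shape)  # torch.Size([1, 4, 512]); only 2 of 8 experts ran per token
```

In this toy setup only two of the eight experts run for any given token, so only a fraction of the layer's feed-forward parameters are "active" per token even though all of them exist in the model.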
As with its past models, Meta calls the Llama 4 collection "open source," though Llama has been criticized for its licensing restrictions. For example, the Llama 4 license requires commercial entities with more than 700 million monthly active users to request permission from Meta before using its models, a restriction that the Open Source Initiative wrote in 2023 takes it "out of the category of Open Source."
