Meta is scrambling to grab some of that ChatGPT and Grok buzz with the launch of its own standalone AI app. Built on its Llama 4 LLM, the assistant touts personalization and smoother voice chats, but the most visible feature is a Discover feed showing off how other users interact with it, and even that feels more like a gimmick than a game-changer.
Unveiled Tuesday at its LlamaCon AI event, Meta AI is now available on iOS and Android. It's a standalone version of the AI assistant already in Meta products, such as Instagram and Facebook, and it lands in a market already teeming with AI chatbots and answer machines - ChatGPT, Gemini, Apple Intelligence, Grok, Perplexity, Claude, and more.
You can type or talk to it, generate text or images, and continue conversations across phones, desktops, and even Ray-Ban Meta glasses. In short, it does most of the same tricks as every other AI assistant.
You'll need an account with the social media company formerly known as Facebook to use the app, naturally. Built on versions of Llama 4 that Meta declined to specify, the AI assistant draws on your connected Meta profiles to personalize responses, and users can also tell it to remember specific details to fine-tune the experience. Meta told us its goal was to develop an AI that knows about a user's world, not just the world at large.
A Meta AI-generated image of a bust of Mark Zuckerberg in a Hellenic style
Another differentiator is the Discover feed, where users can scroll through an endless list of publicly shared prompts and responses from Meta AI. Many are banal, twee, or absurd, and a few seem suspiciously on-brand for Mark Zuckerberg's well-known taste for masculine energy and Mediterranean aesthetics.
Prioritizing speed over quality
With most standalone AI apps offering a familiar mix of chatbot and image-generation features, the real way to judge them is by direct comparison. On that front, Meta AI struggles to impress.
When asked to explain the differences between Asian and African elephants - a prompt your humble vulture uses fairly often to test chatbots - OpenAI's ChatGPT was far more thorough. It covered more differences between the two species and broke them down more clearly. It also gave details that Meta AI didn't, such as why African elephants have larger ears (thermoregulation). The GPT-4o-powered ChatGPT listed weights for both species in pounds and kilos, and went into the species names and other specifics, too.
Also, when I fed a Pauly Shore quote into both - "If you're edged 'cause I'm weazin' all your grindage, just chill" - ChatGPT told me where it came from and that it was, in fact, a Pauly Shore quote, while Meta AI just explained the terms. It also got "grindage" wrong: in context it refers to food, not "an informal term for the effort or work someone puts in, often to achieve something," per Meta.
But it really struggled with images. Most leading image generators today have ironed out the worst of the uncanny valley oddities: deformed hands, warped faces, and flat textures. But Zuckercorp's app still stumbles over those basics.
A decidedly unrealistic picture of people barbecuing generated by Meta AI
Take the above picture pulled from the Discover feed, for example. It looks fine at first glance, until you notice the mangled hands, warped faces, and a general flatness that makes it feel like it was churned out by a model from several generations ago.
For a side-by-side comparison, I asked both ChatGPT (via the iOS app) and Meta AI to draw me a photorealistic image of a Buddhist monk, with the details (e.g., location, age of the monk, etc.) left up to the models. ChatGPT's output is on the left below, while Meta AI's is on the right. Again, the Meta image shows flat rendering: the monk's left hand appears garbled, and the statue of the Buddha in the background looks like a low-resolution video game model that failed to render properly.
AI-generated Buddhist monks created using the same prompt in ChatGPT (L) and Meta AI (R) mobile apps
The one thing that Meta AI has going for it is the speed. Whereas GPT-4o took a couple of minutes to spit out the above result, citing server load, Meta AI delivered its image nearly instantly.
When asked about the image quality issues, Meta told us that it's focused on balancing latency with image quality and would continue to tweak that balance in the future.
- Meta debuts its first 'mixture of experts' models from the Llama 4 herd
- Meta gives nod to weaponizing Llama – but only for the good guys
- Stargate, smargate. We're spending $60B+ on AI this year, Meta's Zuckerberg boasts
- Zuck dreams of personalized AI assistants for all – just like email
Overall, Meta has been catching quite a bit of flak over its latest family of LLMs.
Meta reportedly got caught submitting to AI comparison site LMArena an experimental Llama 4 variant specially crafted to give it an edge over the competition. As we reported, the variant wasn't intended for public release, and LMArena described it as "a customized model to optimize for human preference."
LMArena rankings are based on user scoring of various AI models.
After updating its evaluation policies, LMArena assessed the standard, publicly available version of Llama 4 Maverick, which ranked significantly lower. At the time of writing, no Meta model ranks higher on the leaderboard than 38th place (well, apart from the Nvidia-built Llama-3.3-Nemotron-Super-49B-v1 variant at number 22).
Take a look around the internet and you'll find no shortage of people criticizing the capabilities of the various Llama 4 models, suggesting this could be another case of Meta trying to ride the AI hype wave without actually being ready for the competition.
And speaking of complaints, how could we forget that OpenAI's latest o3 and o4-mini models hallucinate more than older versions, and that GPT-4o turned so sycophantic it forced a feature rollback. ®
PS: Though some of us have had an AI chat assistant in Meta's WhatsApp for over a year now, it appears to have just expanded to other places, such as Europe, Australia, and the UK, sparking upset across the web this month. It's not possible to turn it off entirely – just don't use it – though there are some mitigations to make it less of a pain.