BIP America Latest News


Threads is adding a Grok-like AI search feature

May 14, 2026  Twila Rosenbaum

Meta is bringing its AI chatbot to Threads in a way that should feel familiar to anyone who has spent time on X. The company is testing a new feature that gives Meta AI a dedicated Threads account — @meta.ai — that users can tag in posts and replies to add additional context to the discussion. The premise is essentially the same as Grok on X, where tagging the bot to fact-check or contextualize a viral post has become its own genre of reply-guy behavior.

This move marks Meta's latest effort to integrate artificial intelligence into its social media ecosystem. The feature is currently in early beta and rolling out first to users in Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore, according to reports. It is expected to expand to more regions as Meta fine-tunes the functionality and gathers user feedback.

The @meta.ai account is designed to be summoned by users who want additional information, clarification, or even counterpoints to a post or comment. When tagged, the AI will generate a reply that appears publicly under the thread, much like how Grok operates on Elon Musk's X platform. This public visibility is a key differentiator from some other AI tools, such as ChatGPT or Google Gemini, which typically operate in private chat interfaces.

Meta's broader strategy involves embedding Muse Spark—a new AI model announced alongside this feature—across its entire family of apps: WhatsApp, Instagram, Facebook, Messenger, and Threads. The model will appear in search bars, group chats, and posts, offering contextual assistance. On WhatsApp, for example, Meta is testing "side chats" that let users privately query Meta AI about a group conversation without the response being visible to others—a meaningful privacy distinction from the Threads version.

However, the public nature of the Threads feature raises potential moderation concerns. Grok has drawn criticism on X for generating offensive or inappropriate content, including pro-Nazi imagery, sycophantic praise of Elon Musk, and child sexual abuse material. Meta has historically maintained tighter guardrails on its AI products, but giving any chatbot public-facing visibility on a social platform invites similar risks. The company has said that users can mute the @meta.ai account and hide its replies, providing a layer of control.

Artificial intelligence in social media is not new, but the trend of "answer bots" that users can summon with an @ mention has grown rapidly. Platforms like Reddit and Twitter (now X) have experimented with such features, often leading to mixed results. The primary appeal is instant access to information without leaving the platform, but the downside is the potential for misinformation, bias, and spam. Meta's implementation attempts to mitigate these risks by using its own AI model and moderation systems, though experts warn that no AI is infallible.

The beta launch in select countries—Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore—suggests Meta is carefully testing cultural and linguistic nuances. Each market has different social media norms and language challenges. For instance, Malay, Arabic, Spanish, and English are all used in these countries, and the AI must handle multilingual contexts effectively. Meta has not yet announced a timeline for global rollout, but the inclusion of diverse regions indicates a phased approach.

Meta's AI ambitions extend beyond Threads. The company is also rolling out Meta AI across its core apps, including Facebook, Instagram, and Messenger, where users can ask questions in comments, group chats, or directly in search bars. The goal is to make AI an integral part of the user experience, much as Google Assistant and Siri are on mobile devices. Unlike those assistants, however, Meta's AI is woven into social interactions themselves, which adds complexity.

Privacy advocates have raised concerns about the data collection involved in such AI features. When a user tags @meta.ai, the AI may process the entire post or thread to generate a response, potentially exposing personal information. Meta has emphasized that users can mute the bot, but critics argue that the default public visibility could lead to unintended data exposure. The company has not detailed how long AI-generated replies are stored or how they are used for training.

Another angle is the competitive landscape. X, with its Grok feature, has positioned itself as a platform for real-time AI interaction. Threads, owned by Meta, is now following suit. Both platforms are vying for users who want conversational AI within social media. However, Meta's deeper pockets and existing user base across multiple apps give it an advantage. The integration of AI across Meta's entire ecosystem could create a seamless experience that X cannot match.

The feature also ties into Meta's larger investment in generative AI. In 2023, Meta released Llama 2, an open-source large language model, and has since continued to develop its AI capabilities. The Muse Spark model is likely a derivative of Llama, optimized for real-time social interactions. Meta's AI research division, FAIR, has been at the forefront of AI development, and this product is a direct application of that research.

For content creators and brands, the @meta.ai feature could be a double-edged sword. On one hand, it provides an easy way to add context to posts, potentially increasing engagement. On the other hand, AI-generated replies could undermine original content or introduce factual errors. Meta has not disclosed how the AI's responses are moderated, but it is expected to implement filters for hate speech, explicit content, and misinformation.

The rollout in the current beta markets will likely reveal how users react to AI being woven into public conversations. Early feedback from Malaysia and Saudi Arabia may shape global deployment. If successful, the feature could become a standard part of Threads, further blurring the line between human and machine-generated content. If not, Meta may pull back or redesign it.

In the broader context of social media evolution, this move by Meta signals that AI is no longer a novelty but a core feature. Platforms that fail to integrate AI risk falling behind as users expect instant answers and contextual assistance. However, the risks of AI misuse are significant, and Meta's track record with content moderation will be tested. The coming months will determine whether @meta.ai becomes a useful tool or a source of controversy.


Source: Mashable News
