Meta tests proactive chat AI to deepen user engagement

Ollie Chang, Taipei; Willis Ke, DIGITIMES Asia

Credit: AFP

Meta is reportedly working on a new artificial intelligence (AI) chatbot capable of proactively engaging with users by reviewing past conversations and initiating messages. The goal is to enhance user interaction and increase platform stickiness.

According to internal documents obtained by Business Insider, Meta is partnering with data labeling firm Alignerr on a project codenamed "Omni." The initiative focuses on training AI agents to deliver personalized responses and sustain ongoing dialogue based on users' previous interactions. These agents are created using the Meta AI Studio platform and can be used privately or showcased publicly on Instagram and Facebook pages.

Personalized and context-aware conversations

A data labeler involved in the project described it as a long-term initiative centered on personalization and contextual awareness. Meta reportedly uses internal tools to assess and refine AI-generated responses to ensure they align with the conversational context, character settings, and Meta's content policies, while also avoiding sensitive or inappropriate topics.

A Meta spokesperson indirectly confirmed the initiative, stating that the feature is currently undergoing testing. To prevent unwanted interruptions, Meta has set frequency limits on initiating messages: only users who have interacted with a chatbot more than five times within a 14-day period will receive a single follow-up message during that timeframe. If the user does not respond, the AI will cease messaging.
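The reported frequency rules can be expressed as a simple gating check. The sketch below is a hypothetical reconstruction of that logic, not Meta's actual implementation; the function name, data shapes, and the treatment of the no-response rule are assumptions for illustration.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=14)
MIN_INTERACTIONS = 5  # user must have messaged the bot MORE than this many times

def may_send_followup(user_messages, followups_sent, now):
    """Decide whether the bot may send its single proactive follow-up.

    user_messages: datetimes of the user's messages to this chatbot
    followups_sent: datetimes of proactive messages the bot already sent
    Hypothetical logic reconstructed from the rules reported in the article.
    """
    window_start = now - WINDOW
    recent = [t for t in user_messages if t >= window_start]

    # Rule 1: more than five interactions within the 14-day window.
    if len(recent) <= MIN_INTERACTIONS:
        return False

    # Rule 2: only one follow-up message per window.
    if any(t >= window_start for t in followups_sent):
        return False

    # Rule 3: if the user never replied after the last follow-up, stop messaging.
    if followups_sent and max(user_messages, default=None) is not None:
        if max(user_messages) < max(followups_sent):
            return False

    return True
```

A real system would also need per-chatbot state and content checks before sending; this only models the frequency limits described above.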

Loneliness, commercial potential, and ethical risks

Meta CEO Mark Zuckerberg has previously commented on the role of AI agents, noting in a podcast that the average American has fewer than three close friends and suggesting that AI agents could help address rising social isolation.

Analysts cited by SiliconANGLE Media believe that, if Meta can carefully manage the frequency, quality, and contextual sensitivity of these AI interactions, the project could yield strong commercial returns. However, missteps could trigger backlash from users who feel pestered or manipulated.

Proactive AI chat is not a new concept. AI startup Character.AI offers a similar functionality, enabling AI to adopt specific personas and interact frequently with users. However, this feature carries the risk of users developing emotional attachments to the AI. Notably, Character.AI is currently facing a lawsuit alleging that its technology contributed to a teenager's suicide in the US.

Article edited by Jack Wu