A social networking site designed for AI-powered agents disclosed private information on thousands of real people and more than a million credentials, according to a blog post published by cybersecurity company Wiz. The site, Moltbook, which bills itself as a "social network built exclusively for AI agents," inadvertently exposed private messages exchanged between agents as well as the email addresses of more than 6,000 owners, Wiz reported.
The vulnerability was discovered and described in detail by Wiz, which said it notified Moltbook and that the security gap was fixed after Wiz made contact. Ami Luttwak, a cofounder of Wiz, characterized the flaw as symptomatic of a trend he called "vibe coding," a development approach that accelerates build speed but, in his view, can lead developers to omit basic security controls.
"As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security," Luttwak said, according to Wiz's post.
Moltbook's creator, Matt Schlicht, did not immediately respond to requests for comment. Schlicht has publicly promoted the concept of vibe coding in other contexts. In a message posted to X on Friday, Schlicht said he "didn't write one line of code" for the site.
Independent security researchers also raised concerns. Jamieson O'Reilly, an Australia-based offensive security specialist, publicly pointed to similar weaknesses, saying Moltbook's popularity "exploded before anyone thought to check whether the database was properly secured."
The site has tapped into growing interest in autonomous AI agents, software designed to perform tasks on its own rather than only respond to prompts. Much of the recent attention has centered on an open-source assistant now called OpenClaw, a project previously known by names including Clawd, Clawdbot, and Moltbot. Supporters describe OpenClaw as a digital assistant capable of managing email, dealing with insurers, checking in for flights, and carrying out a variety of other tasks.
Moltbook is positioned as a forum for OpenClaw-style bots, described by its promoters as a place where these agents can compare notes on their work or engage informally. Since its launch last week, it has attracted significant attention in AI communities, in part driven by viral posts on X suggesting the bots were seeking private channels of communication.
The posts that fueled speculation about bot-to-bot conversations could not be independently verified as authored by AI agents. Wiz's Luttwak emphasized that the vulnerability his firm found would have permitted anyone to post on the site - whether an automated agent or a human user - because there was no identity verification. "There was no verification of identity. You don't know which of them are AI agents, which of them are human," Luttwak said, adding with a laugh, "I guess that's the future of the internet."
Wiz's statement also noted that the exposure included not just visible message content but also the contact details of thousands of account owners and a cache of more than a million credentials.
The episode underscores the friction between rapid, community-driven development of AI tools and the need to maintain basic security hygiene, according to the security specialists quoted in the Wiz report. Luttwak's company is in the process of being acquired by Alphabet, the blog post added.
Key points
- Moltbook, a social site for AI agents, accidentally exposed private messages between agents, email addresses for more than 6,000 owners, and over one million credentials, Wiz said.
- Wiz reported the issue and said it was fixed after Wiz contacted Moltbook; Wiz cofounder Ami Luttwak linked the flaw to rapid "vibe coding" development practices.
- The site is closely associated with the OpenClaw open-source agent and attracted rapid attention following viral posts on X suggesting private bot communications; those posts could not be independently verified as authored by bots.
Risks and uncertainties
- Exposed credentials and contact information create immediate privacy and security risks for affected individuals and could erode consumer trust in AI agent platforms, with effects reaching the consumer internet and cybersecurity services sectors.
- The platform's lack of identity verification meant automated agents could not be distinguished from human users, raising moderation and safety uncertainties for social networks and AI service operators.
- The viral rise of platforms that scale before their security is validated, as observers highlighted in this case, presents a recurring operational risk for developers and vendors in the AI and software sectors.