What began as a quiet side project has quickly turned into one of the most talked-about experiments in artificial intelligence. Built by Peter Steinberger, the open-source AI agent formerly known as Clawdbot has gone through two rapid rebrands—first to Moltbot and now to OpenClaw—while attracting more than 100,000 GitHub stars and millions of curious users. With that explosive growth has come rising concern from security researchers, enterprises and regulators.
The project initially gained attention for offering something most AI tools still struggle with: the ability to take real actions instead of just responding with text. OpenClaw operates as an “agent,” meaning users can message it through platforms like WhatsApp, Telegram, Slack or Discord and instruct it to perform tasks directly on their computer—opening browsers, clicking buttons, accessing files or running commands. This action-oriented design pushed the tool from novelty to phenomenon almost overnight.
However, the project’s naming journey has been anything but smooth. The original name, Clawdbot, was dropped after a trademark dispute with Anthropic, prompting a brief rebrand to Moltbot. Just days later, Steinberger announced the final name: OpenClaw. Each rename expanded the project’s visibility—but also introduced confusion that scammers and malicious actors were quick to exploit.
Security researchers soon began reporting exposed OpenClaw dashboards on the public internet, some leaking chat logs, API keys and even remote command execution capabilities. In many cases, these systems were not hacked; they were simply misconfigured. Because OpenClaw often requires deep system permissions—sometimes equivalent to administrator or “sudo” access—mistakes can have serious consequences.
The rapid rebrands also created fertile ground for scams. Typosquat domains, cloned GitHub repositories and fake crypto tokens appeared almost immediately, capitalizing on user confusion around the changing names. These attacks relied less on software vulnerabilities and more on speed, hype and trust—classic ingredients for supply-chain style exploits.
Experts warn that agentic AI tools magnify common security mistakes. A misconfigured web app might leak data, but a misconfigured AI agent can leak data and act on it. With access to emails, calendars, browsers and system commands, an agent can become dangerous if manipulated through prompt injection or poisoned inputs—risks already highlighted by OWASP and other security bodies.
Inside enterprises, the situation is even more concerning. Security firms report widespread “shadow IT,” with employees installing OpenClaw variants and granting privileged access without formal approval. In some organizations, more than half of users tested had already connected the agent to sensitive systems, leaving security teams scrambling to catch up.
To his credit, Steinberger has responded by improving documentation, adding automated security checks and committing dozens of security-focused updates. Still, the complexity of installation—despite being marketed as a one-line command—means many users take shortcuts, increasing the likelihood of insecure setups.
Ultimately, OpenClaw is less a finished product than a glimpse into the future of AI. It shows how messaging interfaces may become the universal control layer for digital work—and how security challenges will shift from traditional malware to permissions, identity and trust. For developers and security professionals, it can be a powerful experiment when isolated and handled carefully. For everyday users, it remains an early, risky preview of what’s coming next.
As Steinberger put it, “the lobster has molted into its final form.” But in the fast-moving world of AI, even final forms are rarely final for long.
Continue Reading
This is a summary. Read the full story on the original publication.
Read Full Article