Moltbot rises from Clawdbot's ashes
A rebrand hijacking, 900+ exposed gateways, and the real cost of agentic convenience
Check out a recent episode of The AI First Show where we discuss Clawdbot, recorded before today’s rebranding drama.
Crypto scammers hijacked Clawdbot’s GitHub and X accounts within 10 seconds during a forced rebrand to Moltbot, triggered by Anthropic’s trademark demand over the “Clawd” name’s similarity to “Claude.” The fake $CLAWD token that followed reached a $16 million market cap before crashing 90% when creator Peter Steinberger publicly denied any involvement.
This spectacular failure illuminated deeper problems with the viral AI assistant orchestrator. Security researchers had already discovered over 900 exposed Clawdbot gateways on the public internet, many completely unauthenticated, some running in containers as root and allowing arbitrary command execution. The project’s rapid rise to 60,000+ GitHub stars masked fundamental security tradeoffs that every user should understand before deployment.
The viral appeal that drove adoption
Clawdbot solved a real problem: existing AI assistants forget everything between sessions and wait passively for prompts. Steinberger’s creation could message users first with morning briefings, reminders, and alerts. It maintained persistent memory across conversations, and it worked across WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Microsoft Teams, and a dozen other platforms simultaneously.
The assistant could execute terminal commands, write scripts, browse the web, control smart home devices, and access the filesystem. This power, combined with self-hosted privacy and an MIT license, explained the explosive growth. Users reported spending $15 to $30 monthly on model API calls while gaining what felt like a genuine personal AI assistant.
The skills system added extensibility through ClawdHub, a registry where the agent could search for and install capabilities automatically. Over 100 community skills emerged covering cryptocurrency tracking, trading integrations, conversation companions, and project management tools.
Security researchers found 900+ exposed gateways
The convenience that made Clawdbot compelling created a massive attack surface. Security researcher Jamieson O’Reilly demonstrated the scope: “Searching for ‘Clawdbot Control’ took seconds. I got back hundreds of hits.”
The authentication bypass arises when users place Clawdbot’s gateway behind a misconfigured reverse proxy. The system auto-approves localhost connections, a convenience designed for local development. Because the gateway.trustedProxies setting defaults to empty, the gateway ignores X-Forwarded-For headers; when a proxy forwards traffic from 127.0.0.1, the socket address appears local and the connection bypasses authentication entirely, gaining automatic access to WebSockets and the admin UI.
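To make the failure mode concrete, here is a minimal sketch of how this class of proxy-trust bug works. It illustrates the pattern only; the function names and shapes are assumptions, not Clawdbot’s actual source.

```ts
import type { IncomingMessage } from "http";

// Illustrative reconstruction of the bug class (not Clawdbot's real code).
// With trustedProxies empty, the forwarded client IP is never honored and
// the socket address (the proxy's own 127.0.0.1) decides everything.
function effectiveClientIp(req: IncomingMessage, trustedProxies: string[]): string {
  const socketIp = req.socket.remoteAddress ?? "";
  const header = req.headers["x-forwarded-for"];
  const forwarded = typeof header === "string" ? header.split(",")[0].trim() : undefined;
  // X-Forwarded-For counts only when the socket peer is a trusted proxy,
  // and trustedProxies defaults to [], so the header is always ignored...
  return forwarded && trustedProxies.includes(socketIp) ? forwarded : socketIp;
}

function requiresAuth(req: IncomingMessage, trustedProxies: string[]): boolean {
  const ip = effectiveClientIp(req, trustedProxies);
  // ...which makes every visitor arriving through a same-host reverse proxy
  // look like localhost and trip the auto-approval path.
  const isLoopback = ip === "127.0.0.1" || ip === "::1" || ip === "::ffff:127.0.0.1";
  return !isLoopback;
}
```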
When attackers reach exposed Clawdbot Control interfaces, they can access API keys for Anthropic and OpenAI, bot tokens for Telegram and Discord, OAuth secrets for Slack and Google, full conversation histories spanning months, and command execution capabilities. In documented cases, the WebSocket handshake granted immediate access to configuration data containing credentials.
Prompt injection attacks proved equally devastating. Archestra AI CEO Matvey Kukuy demonstrated private key extraction in five minutes: he sent Clawdbot an email containing a prompt injection, asked the assistant to check the email, and received a private key from the compromised machine. A GitHub issue documented a real incident where hidden instructions in an email caused Clawdbot to delete all of a user’s emails, including the trash folder.
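There is no reliable model-layer fix for indirect injection, but untrusted content can at least be screened before a shell-capable agent sees it. The sketch below is a naive, illustrative heuristic; the patterns and quarantine behavior are assumptions, and filters like this are easy to evade, so they complement rather than replace sandboxing.

```ts
// Naive screening of untrusted text (email bodies, web pages) before it
// reaches the agent. Purely illustrative: determined attackers can phrase
// around any fixed pattern list.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now [^.]{0,60}/i,
  /(reveal|exfiltrate|forward|send) [^.]{0,80}(key|token|password|secret)/i,
];

function screenUntrustedText(text: string): { text: string; flagged: boolean } {
  const flagged = INJECTION_PATTERNS.some((p) => p.test(text));
  // Quarantine suspicious content for human review instead of passing it
  // silently to an agent that can run shell commands.
  return {
    text: flagged ? "[content withheld: possible prompt injection]" : text,
    flagged,
  };
}
```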
The official documentation acknowledges the fundamental tension: “Running an AI agent with shell access on your machine is... spicy. There is no ‘perfectly secure’ setup.”
Agents need their own identities, not piggybacked credentials
The core architectural problem is that useful agents must break every classic security rule. To take on delegated responsibilities, an agent needs continuous access to private messages, the ability to store and use high-value credentials, and the power to execute arbitrary scripts on your machine. Each requirement undermines assumptions that traditional security models rely on.
The recommended approach treats agents as separate entities requiring isolation:
Separate phone numbers: Run AI on a dedicated number rather than your personal one
Isolated browser profiles: Use a dedicated browser profile rather than your personal browsing session
Per-agent access profiles: Different agents for personal, family, and public contexts with different trust levels
Dedicated OS user accounts: Run the Gateway under a separate system user if the host is shared
For configuration hardening:
Run clawdbot security audit --deep regularly (the CLI command will likely change given the rebrand).
Set gateway.bind: "loopback" to avoid network exposure.
Enable gateway.auth.mode: "password" or "token" with strong values.
Configure gateway.trustedProxies: ["127.0.0.1"] when using reverse proxies.
Use dmPolicy: "pairing" to require approval for unknown senders.
Enable tool sandboxing with allow and deny lists.
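Taken together, a hardened setup might look like the sketch below, written here as a TypeScript object for readability. The real schema, the skill names, and the config file location are assumptions that may differ, especially after the rebrand.

```ts
// Hypothetical consolidated config mirroring the hardening steps above.
// Key names beyond those documented are illustrative assumptions.
const hardenedConfig = {
  gateway: {
    bind: "loopback",                    // never listen on 0.0.0.0
    auth: {
      mode: "token",                     // or "password"
      token: process.env.GATEWAY_TOKEN,  // strong secret kept out of the file
    },
    trustedProxies: ["127.0.0.1"],       // required before X-Forwarded-For is honored
  },
  dmPolicy: "pairing",                   // unknown senders need explicit approval
  tools: {
    allow: ["calendar", "web-search"],   // hypothetical skill names
    deny: ["shell", "filesystem-write"], // deny dangerous tools by default
  },
};
```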
Some people recommend Claude Opus 4.5 for tool-enabled agents because it achieves roughly 99% prompt injection resistance, better than alternative models. However, given how quickly such agents burn tokens (and Anthropic blocking the use of Claude subscriptions for third-party tools), running the world’s most expensive model for this is rarely economically viable.
What is open source versus proprietary
Clawdbot’s Gateway orchestration, CLI tools, and WebSocket control plane are all MIT licensed. Memory storage uses plain Markdown files with a SQLite database for indexing, optionally accelerated by the sqlite-vec extension for vector search. The hybrid search system combines BM25 keyword search with vector similarity, using embeddings from OpenAI, Gemini, or local models.
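In that spirit, a hybrid ranker can be as simple as blending a normalized BM25 score with vector cosine similarity. The weighting, normalization, and data shapes below are illustrative assumptions, not Clawdbot’s actual implementation.

```ts
// Blend keyword relevance (BM25) with semantic similarity (cosine).
interface Doc {
  id: string;
  bm25: number;        // precomputed keyword score for the query
  embedding: number[]; // stored vector for the document
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function hybridRank(docs: Doc[], queryEmbedding: number[], alpha = 0.5): Doc[] {
  const maxBm25 = Math.max(...docs.map((d) => d.bm25), 1e-9); // normalize to [0, 1]
  const score = (d: Doc) =>
    alpha * (d.bm25 / maxBm25) + (1 - alpha) * cosine(d.embedding, queryEmbedding);
  return [...docs].sort((a, b) => score(b) - score(a));
}
```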
The Pi agent runtime remains less transparent. The documentation states that if you do nothing, Clawdbot uses the bundled Pi binary in RPC mode with per-sender sessions. The repository includes special acknowledgement to Mario Zechner “for his support and for pi-mono.” While the runtime integrates via RPC with tool streaming and block streaming, the Pi binary itself represents a dependency outside the fully open source stack.
The architecture follows a clean pattern: messaging platforms connect to the Gateway control plane, which communicates via RPC with the Pi agent, CLI, WebChat UI, and mobile companion apps. The Gateway runs as a single long-running process on port 18789 by default.
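As a rough illustration of what talking to that control plane could look like from a client, here is a hypothetical WebSocket connection to the default port. The endpoint, auth header, and message framing are assumptions, not the documented protocol.

```ts
import WebSocket from "ws";

// Hypothetical control-plane client; real method names and framing differ.
const ws = new WebSocket("ws://127.0.0.1:18789", {
  headers: { authorization: `Bearer ${process.env.GATEWAY_TOKEN ?? ""}` },
});

ws.on("open", () => {
  // An RPC-style request (illustrative; not the actual wire format).
  ws.send(JSON.stringify({ id: 1, method: "status" }));
});
ws.on("message", (data) => console.log("gateway:", data.toString()));
```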
Alternatives and the build versus buy tradeoff
In contrast to Pi, the OpenHands SDK offers the strongest open source alternative for teams wanting full control. The MIT-licensed framework provides an event-sourced state model for deterministic replay, typed tool systems, a workspace abstraction that runs locally or remotely, and first-class Docker sandboxing that proprietary SDKs lack. It supports a wide variety of proprietary and open models interchangeably through the LiteLLM AI Gateway.
The main OpenHands repository has over 65,000 GitHub stars and has become the preferred evaluation harness for academic LLM research. It tops SWE-bench and related benchmarks.
For memory indexing, Qdrant outperforms Moltbot’s SQLite-based approach at scale. A case study showed that switching to Qdrant delivered over 90% latency improvement, dropping query times from 300-500 milliseconds to 20-50 milliseconds, with 2x faster indexing and roughly 30% lower infrastructure costs. sqlite-vec works well for edge deployments and datasets under 100,000 vectors, but production workloads with many concurrent agents benefit from dedicated vector databases.
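For a sense of the integration cost, querying a dedicated vector store takes only a few lines. The sketch below assumes a locally running Qdrant instance and an existing collection named “memories”; both names are illustrative.

```ts
import { QdrantClient } from "@qdrant/js-client-rest";

// Nearest-neighbor recall from a Qdrant collection (names are assumptions).
const client = new QdrantClient({ url: "http://localhost:6333" });

async function recall(queryEmbedding: number[], limit = 5) {
  // Returns the closest stored memories by vector similarity.
  return client.search("memories", { vector: queryEmbedding, limit });
}
```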
Building custom orchestration provides security control, customization freedom, and vendor lock-in avoidance. The drawbacks include potentially extensive development effort, missing viral community momentum, ecosystem fragmentation, and ongoing maintenance burden.
Using viral products like Clawdbot/Moltbot offers rapid deployment, pre-built integrations, active communities, and momentum benefits. The drawbacks include security risks from rapid iteration, proprietary lock-in through the Pi runtime and skills hub, compliance gaps, and the kind of rebrand chaos demonstrated today, which leaves users confused and opens the door to deception and scams.
A hybrid approach combines open source orchestration foundations with enterprise security tooling, avoiding both full custom builds and complete vendor lock-in. I will share potential developments here if I get traction in the next couple of weeks.
The rebrand disaster and what it revealed
When Anthropic demanded the name change, Steinberger attempted to rename both GitHub and X accounts simultaneously. In the gap between releasing the old names and claiming new ones, scammers snatched both handles in approximately 10 seconds. They had been monitoring for exactly this opportunity.
Steinberger explained: “It wasn’t hacked, I messed up the rename and my old name was snatched in 10 seconds. Because it’s only that community that harasses me on all channels and they were already waiting.”
The hijacked accounts began pushing crypto scams to tens of thousands of followers unaware of the rebrand. Fake $CLAWD tokens appeared on Solana within hours, peaking at a $16 million market cap before Steinberger’s public denial triggered a 90% crash that rugged late buyers.
The GitHub organization account has been recovered. The X/Twitter account restoration remained in progress as of January 27, 2026, with approximately 20 impersonating scam accounts still active.
Conclusion
The Moltbot saga captures the current tension between AI assistant capability and security. The features that make agentic AI useful demand permissions that traditional security models prohibit. Giving an agent access to your messages, credentials, and command execution creates attack surfaces that security researchers are actively exploiting.
Users face a genuine tradeoff: wait for the security practices and architectural patterns to mature, or accept meaningful risk in exchange for genuinely useful automation today. For those proceeding, the recommendations are clear: treat agents as separate identities with isolated credentials, run security audits continuously, use the strongest available models for prompt injection resistance, and maintain realistic expectations about what “secure” means when you grant shell access to software interpreting natural language.
The rebrand chaos added a final lesson: viral open source projects carry their own risks beyond technical security. When your workflow depends on a project that might face trademark pressure, leadership changes, or acquisition, the blast radius extends to GitHub handles and beyond.

Excellent analysis as always, Leonardo...
as someone who uses AI to automate programming tasks, the temptation to grant full terminal access is huge because it saves so much time. The fact that 900 gateways were running as root demonstrates that the rush for viral adoption is outpacing good DevSecOps practices. The distinction you make between the open architecture of the orchestrator and the opacity of the 'Pi' binary is a critical detail that most overlook... Without a doubt, identity isolation (browser profiles and dedicated OS users) should be mandatory, not suggested. You mention a hybrid approach (Open Source + enterprise security tools)...
Really, sometimes as users, we think that if we have the firewall enabled we are safe, and we forget that natural language itself is the attack vector (Adversarial Prompting or Indirect Prompt Injection?)...
Do you have any specific stack or auditing tools in mind that you would recommend for an intermediate user who wants to set up their own agent without being a cybersecurity expert?
You mention Claude Opus 4.5 as a security benchmark, but it is unaffordable for many... are there 'prompt sanitization' techniques that an average user can implement in the configuration, or are we at the mercy of the model we choose?
Exceptional coverage of the architectural tension between agent utility and security. The 10-second hijack during the rebrand really captures how quickly adversaries can exploit operational windows. What resonated most is treating agents as separate entities rather than extensions of your own credentials, because that mental model shift is critical but rarely articulated this clearly. I've worked with teams trying to bolt authentication onto agentic systems after the fact, and it's always messier than building isolation from day one.