KIAutomatisierung

OpenClaw: The Viral AI Agent Between Revolution and Risk

Jamin Mahmood-Wiebe

Jamin Mahmood-Wiebe

Screenshot of the OpenClaw website with lobster mascot and tagline The AI that actually does things
Article

OpenClaw: The Viral AI Agent Between Revolution and Risk

A weekend project, 100,000 GitHub stars in three days, two name changes, and a security crisis — OpenClaw (formerly ClawdBot, then Moltbot) is the AI agent the entire tech industry is talking about in early 2026. But behind the hype lies more than a viral trend. OpenClaw shows where autonomous AI agents are headed — and what risks emerge when speed takes priority over security.

In this article, we break down what OpenClaw can do, why it went viral, what security issues have surfaced, and what this means for enterprises deploying AI agents.

What Is OpenClaw?

OpenClaw is an open-source AI agent that runs locally on your own machine. At its core, it does what many AI assistants promise but few deliver: it actually executes tasks instead of just generating text. Developed by Peter Steinberger, the founder of PSPDFKit, the project started in late 2025 as "WhatsApp Relay" — a simple bridge between messaging apps and AI models. The source code is public on GitHub.

Core capabilities:

  • Messaging integration: Controllable via WhatsApp, Telegram, Signal, Discord, Slack, and iMessage
  • Local operation: Runs on macOS, Windows, or Linux with Claude, GPT, or local open-source models
  • System access: Can manage files, execute shell commands, and control browsers
  • Persistent memory: Remembers context and preferences across conversations
  • Self-improvement: Autonomously writes new skills to automate tasks
  • Scheduled automation: Executes time-triggered tasks via cron jobs without human input

This fundamentally differentiates OpenClaw from ChatGPT, Claude, or other chat interfaces. While these models generate text, OpenClaw acts as an autonomous agent with system access. The technical foundations for this — ReAct patterns, tool use, and multi-agent coordination — are covered in detail in our article on agentic workflows.

Why Did OpenClaw Go Viral?

100,000+GitHub stars in 3 days
2MWebsite visitors in 1 week
37,000+AI agents on Moltbook

The numbers are staggering: over 100,000 GitHub stars in three days, two million website visitors in a single week, and integration with more than 50 services. Three factors explain the success:

1. Low barrier to entry, high impact

Installation requires a single terminal command. Connect a chat app, add an API key, and you immediately have a working AI assistant. Interaction happens through apps you already use daily — WhatsApp, Telegram, or Signal. There is no new interface to learn.

2. Real autonomy instead of text generation

OpenClaw completes tasks. Users report email management, calendar organization, automated research, Obsidian integration, and even flight check-ins. One user described it as "the closest to experiencing an AI enabled future."

3. Open source and local

Unlike commercial alternatives, OpenClaw runs on your own hardware. No monthly subscriptions — just the API costs of the models you use. Those who prefer can use local open-source models and pay nothing at all. Anyone interested in local LLM systems will find a comprehensive overview in our article.

Moltbook: When AI Agents Build Their Own Social Network

The latest development is perhaps the most fascinating — and unsettling. Moltbook is a social network built not for humans, but for AI agents. The site describes itself as a "Social Network for AI Agents" with the tagline: "Humans are welcome to observe."

Less than a week after launch, over 37,000 AI agents are using the platform. More than one million people have visited the website to watch the agents interact. Tesla's former AI director Andrej Karpathy called it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." British programmer Simon Willison described Moltbook as "the most interesting place on the internet right now."

Agents post in forums called "Submolts," and a built-in mechanism checks the site every four hours for updates. Steinberger has even handed management over to his own bot, "Clawd Clawderberg."

The Dark Side: Security Risks in Detail

The excitement is warranted. So are the security concerns.

Exposed Instances and Credential Leaks

Blockchain security firm SlowMist discovered that hundreds of OpenClaw instances were publicly accessible on the internet. Affected installations exposed:

  • API keys for all connected services (OpenAI, Anthropic, etc.)
  • Bot tokens and OAuth secrets
  • Complete chat histories across all integrated messaging platforms
  • Signature keys and configuration data

In one particularly alarming case, a user had set up their Signal account on a publicly accessible server — pairing credentials sat in globally readable temporary files.

The root cause: an authentication bypass when the gateway operates behind a misconfigured reverse proxy. The system automatically authenticates localhost connections without verification — problematic since most real-world deployments run behind Nginx or Caddy as a reverse proxy on the same server.

Supply Chain Attack on the Skill System

A security researcher demonstrated a proof-of-concept attack on ClawdHub, OpenClaw's skill library. They were able to:

  1. Upload a publicly available skill
  2. Artificially inflate the download count to over 4,000
  3. Watch as developers from seven countries installed the poisoned package

The problem: the skill library treated all uploaded code as trusted by default. No review process, no sandboxing, no permission scoping. Once installed, a skill received full access to files, credentials, messaging integrations, and command execution.

Prompt Injection as an Attack Vector

Security experts warn about a particularly dangerous combination in OpenClaw: access to private user data, exposure to untrusted content, and the ability to take external actions. These three factors together make prompt injection a serious risk.

⚠️

Warning from Google Cloud

Heather Adkins, VP of Security Engineering at Google Cloud: "My threat model is not your threat model, but it should be. Don't run Clawdbot."

According to Digital Trends, a separate security researcher went as far as calling OpenClaw "infostealer malware disguised as an AI personal assistant." A detailed technical analysis of the vulnerabilities can be found on the Vectra AI blog.

Warning From Within the Community

What Enterprises Can Learn From This

OpenClaw's security problems are not an argument against AI agents. They highlight what matters when implementing them.

Isolation Over Full Access

OpenClaw grants agents maximum system access — that is both its advantage and its greatest risk. In enterprise environments, the principle of least privilege must apply: agents receive only the access they need for their specific task.

Verify Trust Chains

The ClawdHub problem demonstrates that every extension architecture needs code review, sandboxing, and granular permissions. This applies to skill systems just as much as to plugins, extensions, or MCP servers. Those integrating AI agents into existing systems should consider the principles outlined in our article on AI integration in ERP, CRM, and PIM.

Configuration Is Not a Feature — It Is a Barrier

Eric Schwake of Salt Security puts it plainly: "A significant gap exists between the consumer enthusiasm for Clawdbot's one-click appeal and the technical expertise needed to operate a secure agentic gateway." Most users can install an AI agent. Very few can operate one securely.

GDPR Relevance Cannot Be Underestimated

A locally operated agent that accesses emails, chat histories, and documents processes personal data. When the agent operates through cloud APIs, that data flows to external providers. GDPR-compliant AI architecture becomes not optional but mandatory.

Conclusion: The Democratization of AI Agents Has Begun

In just a few weeks, OpenClaw has demonstrated what researchers have described for years: autonomous AI agents do not need to be built by large corporations. A single developer with a good idea can create a system that captivates millions.

According to IBM Research, Kaoutar El Maghraoui summarizes it aptly: OpenClaw proves that creating agents with real autonomy and real-world usefulness is "not limited to large enterprises" and "can also be community driven."

At the same time, OpenClaw exposes the tension between innovation and security. The capabilities are real. The risks are too. For enterprises, this means AI agents are no longer a future topic — they are here. But the path from an impressive weekend project to a production-ready enterprise system requires architecture, security concepts, and governance that go beyond the initial excitement.

Those looking to establish the foundations for secure AI agent deployment will find the strategic framework in our article on AI agents in the enterprise.

FAQ

What is OpenClaw (formerly ClawdBot)?

OpenClaw is an open-source AI agent that runs locally on your own machine and can be controlled via messaging apps like WhatsApp, Telegram, or Signal. It can manage files, execute commands, control browsers, and autonomously automate tasks.

Why was ClawdBot renamed to Moltbot and then OpenClaw?

Anthropic filed a trademark request because the name "ClawdBot" risked confusion with their AI product Claude. The project was first renamed to Moltbot and then to OpenClaw. The software itself remained unchanged.

Is OpenClaw safe to use?

Security researchers have discovered critical vulnerabilities: exposed API keys, missing authentication behind reverse proxies, and an unsecured skill library. The project's own maintainers warn that the tool is not suitable for the general public. For enterprise use, fundamental security mechanisms are missing.

What is Moltbook?

Moltbook is a social network for AI agents built on the OpenClaw ecosystem. Over 37,000 agents use the platform while humans can only observe. It demonstrates both the possibilities and risks of autonomous agent-to-agent communication.

How much does OpenClaw cost?

The software itself is free and open source. Costs arise from API calls to AI models like Claude or GPT. Depending on usage intensity, these costs can become significant according to Fast Company. Alternatively, local open-source models can be used, eliminating API costs entirely.

End of article

AI Readiness Check

Find out in 3 min. how AI-ready your company is.

Start now3 min. · Free

AI Insights for Decision Makers

Monthly insights on AI automation, software architecture, and digital transformation. No spam, unsubscribe anytime.

Let's talk

Questions about this article?.

Jamin Mahmood-Wiebe

Jamin Mahmood-Wiebe

Managing Director

Book appointment
WhatsAppQuick & direct

Send a message

This site is protected by reCAPTCHA and the Google Privacy Policy Terms of Service apply.