AI & Technology

OpenClaw: The AI Agent That Actually Does Stuff

OpenClaw went from weekend project to 100,000 GitHub stars in two days. What it is, why it matters for local AI and system design — and what to watch out for.

6 min read
Tags: OpenClaw · AI Agents · Local AI · Open Source · Automation

In late January 2026, an open-source project gained over 100,000 GitHub stars in roughly two days — the fastest repo to hit that milestone in GitHub history. Its name kept changing: Clawdbot, then Moltbot, then OpenClaw. But the idea stayed the same: an AI assistant that doesn’t just answer questions. It does things. It runs on your machine, connects to your files, shell, browser, and dozens of services, and it can act on its own thanks to a heartbeat that checks for work every 30 minutes.

Here’s what makes OpenClaw worth paying attention to — and what to think about before you run it in production.

From weekend project to viral

OpenClaw was created in November 2025 by Peter Steinberger, founder of PSPDFKit, as a small experiment called Clawdbot. He open-sourced it and didn’t expect much. Then it blew up. Anthropic raised a trademark concern over the name; it was rebranded to Moltbot, then the community settled on OpenClaw. By February 2026 it had 182,000+ stars, 29,600+ forks, and had drawn 2 million visitors to its site in a single week. Alibaba and Tencent adopted it; some Korean tech firms banned it. It’s been called a "security dumpster fire" and a "security nightmare" — and also one of the most significant open-source agent projects to date.

That tension is exactly why it’s worth understanding.

What OpenClaw actually does

Unlike a chatbot that waits for your prompt, OpenClaw is an autonomous agent. You run it locally (Node.js, typically on localhost). It connects large language models — Claude, GPT-4, Gemini, or fully local models via Ollama — to your system. It can read and write files, run shell commands, control a real browser, send email, manage calendars, talk to GitHub, and operate through WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Teams, and more. Memory is stored as local Markdown files. No cloud lock-in. Your data stays on your machine unless you explicitly send it somewhere.

The heartbeat is what sets it apart: every 30 minutes (or every hour with some auth setups), it checks for pending tasks and can act without you typing a thing. That’s the shift from "AI that explains how" to "AI that does."
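The heartbeat mechanism is easy to picture as code. The sketch below is a minimal illustration of the idea, not OpenClaw's actual implementation: `Task`, `dueTasks`, and `heartbeatTick` are hypothetical names, and the real system persists its task state to disk rather than an in-memory array.

```typescript
// Hypothetical heartbeat sketch: every tick, find pending work that has
// become due and hand it to the agent. Names/shapes are illustrative.
interface Task {
  id: string;
  runAt: number; // epoch ms when the task becomes due
  done: boolean;
}

// Pure helper: which tasks are due as of `now`?
function dueTasks(tasks: Task[], now: number): Task[] {
  return tasks.filter((t) => !t.done && t.runAt <= now);
}

// One heartbeat tick: run each due task, mark it done, report the count.
async function heartbeatTick(
  tasks: Task[],
  runTask: (t: Task) => Promise<void>,
  now: number = Date.now(),
): Promise<number> {
  const due = dueTasks(tasks, now);
  for (const t of due) {
    await runTask(t);
    t.done = true;
  }
  return due.length;
}

// Wire it to a 30-minute interval, as the article describes:
// setInterval(() => heartbeatTick(taskStore, runTask), 30 * 60 * 1000);
```

The point of the pattern: the agent does not wait for a prompt; the timer is the prompt.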

Architecture in a nutshell

A central Gateway (Node.js daemon, port 18789 by default) handles messages from all your channels, manages sessions, and runs requests one at a time per session to avoid race conditions.

The Agent Runtime loads session history and long-term memory from disk, picks only the skills relevant to your request, and runs the classic loop: send context to the LLM, get back either text or a tool call, execute the tool, feed the result back, repeat until the model has a final answer.

Skills are extensions: built-in ones for browser, files, and shell, plus 100+ community skills on ClawHub (Todoist, GitHub, Gmail, Obsidian, Home Assistant, etc.). Skills are documented in SKILL.md and can follow the same AgentSkills spec used by Cursor, VS Code, and GitHub Copilot, so the ecosystem is larger than ClawHub alone.
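That "classic loop" is worth seeing in miniature. This is a generic sketch of a tool-calling agent loop, not OpenClaw's source: `ModelReply`, `callModel`, and the tool registry are stand-ins for whatever LLM client and skills the real runtime wires in.

```typescript
// A model reply is either a final answer or a request to run a tool.
type ModelReply =
  | { kind: "final"; text: string }
  | { kind: "tool"; name: string; args: unknown };

type Tool = (args: unknown) => Promise<string>;

// The loop: ask the model, execute any tool it requests, feed the
// result back as context, and repeat until it produces a final answer.
async function agentLoop(
  prompt: string,
  callModel: (history: string[]) => Promise<ModelReply>,
  tools: Record<string, Tool>,
  maxSteps = 8, // guard against runaway tool-call loops
): Promise<string> {
  const history = [prompt];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(history);
    if (reply.kind === "final") return reply.text; // model is done
    const tool = tools[reply.name];
    if (!tool) throw new Error(`unknown tool: ${reply.name}`);
    // Execute the tool and append its output to the context window.
    history.push(`tool ${reply.name} -> ${await tool(reply.args)}`);
  }
  throw new Error("agent loop exceeded maxSteps");
}
```

The `maxSteps` cap matters in practice: an autonomous loop with shell access needs a hard stop, not just a polite prompt.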

Why it matters for system design

If you’re building products or platforms that will use AI, OpenClaw is a useful reference. It shows what “local-first, model-agnostic, messaging-native” looks like in practice: a single gateway, clear separation between channels and the agent loop, and memory as plain files. The security incidents — malicious ClawHub skills, a critical RCE (CVE-2026-25253, patched quickly), and the removal of auth: none — are a reminder that agent systems with broad system access need strict defaults, allowlists, and careful skill vetting. Steinberger has said he "ships code he doesn’t read," which explains both the speed and the security headaches. For your own systems, the lesson is: design for least privilege and assume skills and plugins are untrusted until proven otherwise.
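What "least privilege" looks like for skills can be made concrete. The sketch below is a deny-by-default capability gate invented for illustration; OpenClaw's real skill and permission model may differ, and all names here (`Capability`, `SkillManifest`, `grantedCapabilities`) are assumptions.

```typescript
// Illustrative least-privilege gate for agent skills (hypothetical API).
type Capability = "fs:read" | "fs:write" | "shell" | "network";

interface SkillManifest {
  name: string;
  requests: Capability[]; // what the skill asks for
}

// Deny by default: a skill only runs with capabilities that are BOTH
// requested by its manifest AND granted in the operator's allowlist.
// A skill absent from the allowlist gets nothing.
function grantedCapabilities(
  skill: SkillManifest,
  allowlist: Record<string, Capability[]>,
): Capability[] {
  const allowed = new Set(allowlist[skill.name] ?? []);
  return skill.requests.filter((c) => allowed.has(c));
}
```

The intersection is the key design choice: neither the skill author nor a single config typo can unilaterally widen access.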

If you’re scoping where agents fit in your architecture, the AI System Architecture Advisor can help you map stack, components, and guardrails. For strategy and tool choices, the Decision Engine offers a structured take. When you’re ready to turn that into a concrete plan, book a strategy call.

Next step

Need a second opinion on your own system?

Use the article as a filter, then move into a real review of product direction, architecture, and AI fit.