Architecture Overview

OpenClaw uses a Gateway-centric architecture where a single long-running process orchestrates communication between an LLM (the "Brain"), local execution capabilities (the "Hands"), persistent memory, and external messaging channels.

High-Level Architecture

Component Summary

| Component | Role | Details |
| --- | --- | --- |
| Gateway | Central process | WebSocket server, daemon, process manager |
| Brain | Reasoning | LLM API calls (Anthropic, OpenAI, xAI, local) |
| Hands | Execution | Shell, filesystem, browser automation |
| Memory | Persistence | Local Markdown files in `~/.openclaw/memory/` |
| Heartbeat | Autonomy | Periodic task checker (default: every 30 min) |
| Channels | I/O | Messaging platform bridges |
| Skills | Extensions | YAML+Markdown skill definitions |
| Router | Dispatch | Routes messages to appropriate handlers |
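The Heartbeat component above can be sketched as a simple due-check that a gateway loop polls on each tick. This is a minimal illustration under assumed names (`Heartbeat`, `is_due`, `mark_run`), not OpenClaw's actual API:

```python
import time


class Heartbeat:
    """Minimal periodic task checker: due when `interval` seconds
    have elapsed since the last run. Hypothetical sketch only."""

    def __init__(self, interval: float = 30 * 60):  # default: every 30 min
        self.interval = interval
        self.last_run = None  # None means "never run", so the first check is due

    def is_due(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        return self.last_run is None or now - self.last_run >= self.interval

    def mark_run(self, now: float = None) -> None:
        self.last_run = time.time() if now is None else now


hb = Heartbeat(interval=1800)
hb.mark_run(now=0)
print(hb.is_due(now=100))   # → False (only 100 s elapsed)
print(hb.is_due(now=1800))  # → True (full interval elapsed)
```

A real gateway would run its scheduled tasks whenever `is_due()` returns true, then call `mark_run()` to reset the timer.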

Data Flow

  1. Input arrives — via a channel (e.g., a WhatsApp message), a heartbeat trigger, or a CLI command
  2. Router dispatches — determines if this needs the Brain, a skill, or a direct response
  3. Brain reasons — the LLM analyzes the request and decides on a plan
  4. Hands execute — shell commands, file operations, browser actions
  5. Memory updates — relevant context saved for future conversations
  6. Response sent — back through the originating channel
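The dispatch decision in step 2 can be sketched as follows. All names here (`Message`, `route`, the `/skill` prefix convention) are illustrative assumptions, not OpenClaw's real identifiers:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Message:
    source: str  # e.g. "whatsapp", "heartbeat", "cli"
    text: str


def route(msg: Message, skills: dict[str, Callable[[str], str]]) -> str:
    """Step 2: decide whether a direct reply, a skill, or the Brain handles it."""
    if msg.text.strip().lower() == "ping":
        return "direct:pong"                     # direct response, no LLM call
    for name, handler in skills.items():
        if msg.text.startswith(f"/{name}"):
            return f"skill:{handler(msg.text)}"  # a matching skill handles it
    return "brain:plan"                          # everything else goes to the LLM


skills = {"backup": lambda text: "backup started"}
print(route(Message("cli", "ping"), skills))                   # → direct:pong
print(route(Message("whatsapp", "/backup now"), skills))       # → skill:backup started
print(route(Message("whatsapp", "summarize my day"), skills))  # → brain:plan
```

The point of the sketch is the ordering: cheap direct responses are checked first, skills next, and the Brain is the fallback, so an LLM call is only made when nothing simpler applies.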

Design Principles

  • Single process — One gateway manages everything. No microservices, no containers required.
  • Local-first — All data stays on disk. No cloud dependencies beyond the LLM API.
  • Model-agnostic — Swap LLM providers by changing a config line.
  • Extensible — Skills are just Markdown files. Anyone can write them.
  • Transparent — Memory is human-readable Markdown. Config is YAML. No black boxes.
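A hedged sketch of what a model-agnostic YAML config could look like. The key names below are assumptions for illustration, not OpenClaw's real schema; only the memory path and the 30-minute heartbeat default come from this page:

```yaml
# Illustrative only — key names are assumptions, not the real schema.
brain:
  provider: anthropic        # swap to another provider by changing this line
memory:
  path: ~/.openclaw/memory/  # human-readable Markdown, as described above
heartbeat:
  interval_minutes: 30       # default heartbeat cadence
```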

Deep Dives

  • Gateway — The central process and WebSocket control plane
  • Brain & Hands — LLM integration and execution environment
  • Memory System — How persistent memory works
  • Heartbeat — The autonomous task loop