Compaction
Context Window & Compaction
Every model has a context window (max tokens it can see). Long-running chats accumulate messages and tool results; once the window is tight, OpenClaw compacts older history to stay within limits.
What compaction is
Compaction summarizes older conversation into a compact summary entry and keeps recent messages intact. The summary is stored in the session history, so future requests use:
- The compaction summary
- Recent messages after the compaction point
Compaction persists in the session’s JSONL history.
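Conceptually, post-compaction history is one summary entry followed by the recent messages. A minimal Python sketch of that transform (the entry shapes and the `summarize` helper are illustrative, not OpenClaw's actual internals):

```python
def compact(history, keep_recent=5, summarize=lambda msgs: "summary"):
    """Replace all but the most recent messages with one summary entry."""
    if len(history) <= keep_recent:
        return history  # nothing old enough to compact
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = {"role": "summary", "content": summarize(old)}
    # The summary entry persists in session history; future requests
    # see [summary] + the recent messages only.
    return [summary] + recent

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
compacted = compact(
    history,
    keep_recent=3,
    summarize=lambda msgs: f"{len(msgs)} earlier messages summarized",
)
print(len(compacted))           # 4: one summary entry + 3 recent messages
print(compacted[0]["content"])  # 7 earlier messages summarized
```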
Configuration
Use the `agents.defaults.compaction` setting in your `openclaw.json` to configure compaction behavior (mode, target tokens, etc.).
Compaction summarization preserves opaque identifiers by default (`identifierPolicy: "strict"`). You can override this with `identifierPolicy: "off"`, or provide custom text with `identifierPolicy: "custom"` and `identifierInstructions`.
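A hypothetical `openclaw.json` fragment combining the settings named above (the exact schema and key names such as `mode` and `targetTokens` are assumptions; check your version's reference):

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "mode": "auto",
        "targetTokens": 8000,
        "identifierPolicy": "custom",
        "identifierInstructions": "Preserve ticket IDs and file paths verbatim."
      }
    }
  }
}
```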
Auto-compaction (default on)
When a session nears or exceeds the model’s context window, OpenClaw triggers auto-compaction and may retry the original request using the compacted context.
You’ll see:
- `🧹 Auto-compaction complete` in verbose mode
- `/status` showing `🧹 Compactions: <count>`
Before compaction, OpenClaw can run a silent memory flush turn to store durable notes to disk. See Memory for details and config.
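The trigger can be pictured as a threshold check against the model's context window. A sketch in Python (the 0.9 threshold and the characters-per-token estimate are illustrative, not OpenClaw's real values):

```python
def estimate_tokens(history):
    # Crude estimate: ~4 characters per token (illustrative only).
    return sum(len(m["content"]) for m in history) // 4

def needs_compaction(history, context_window, threshold=0.9):
    """True when the session nears the model's context window."""
    return estimate_tokens(history) > threshold * context_window

history = [{"role": "user", "content": "x" * 4000}]  # ~1000 estimated tokens
print(needs_compaction(history, context_window=1000))  # True: 1000 > 900
print(needs_compaction(history, context_window=2000))  # False: 1000 <= 1800
```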
Manual compaction
Use `/compact` (optionally with instructions) to force a compaction pass:
/compact Focus on decisions and open questions
Context window source
Context window is model-specific. OpenClaw uses the model definition from the configured provider catalog to determine limits.
Compaction vs pruning
- Compaction: summarizes and persists in JSONL.
- Session pruning: trims old tool results only, in-memory, per request.
See /concepts/session-pruning for pruning details.
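The distinction above can be sketched as two different transforms over the same history (shapes illustrative): pruning produces a trimmed per-request copy, while compaction rewrites what is persisted.

```python
def prune_for_request(history, max_tool_chars=200):
    """In-memory, per-request: trim large tool results in a copy.

    The persisted history is left untouched, unlike compaction,
    which rewrites the session's stored JSONL."""
    pruned = []
    for m in history:
        if m["role"] == "tool" and len(m["content"]) > max_tool_chars:
            m = {**m, "content": m["content"][:max_tool_chars] + " [pruned]"}
        pruned.append(m)
    return pruned

history = [
    {"role": "tool", "content": "log line\n" * 100},   # 900 chars of tool output
    {"role": "user", "content": "what failed?"},
]
request_view = prune_for_request(history)
print(len(request_view[0]["content"]))  # 209: 200 chars + " [pruned]" marker
print(len(history[0]["content"]))       # 900: persisted history unchanged
```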
OpenAI server-side compaction
OpenClaw also supports OpenAI Responses server-side compaction hints for compatible direct OpenAI models. This is separate from local OpenClaw compaction and can run alongside it.
- Local compaction: OpenClaw summarizes and persists into session JSONL.
- Server-side compaction: OpenAI compacts context on the provider side when `store` + `context_management` are enabled.
See OpenAI provider for model params and overrides.
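For illustration, a request enabling those provider-side options might carry a payload fragment like the following. The field names follow the `store` and `context_management` settings named above, but the nested shape and values are assumptions; consult the OpenAI provider page for the real parameters:

```json
{
  "model": "<model-id>",
  "store": true,
  "context_management": { "enabled": true }
}
```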
Tips
- Use `/compact` when sessions feel stale or context is bloated.
- Large tool outputs are already truncated; pruning can further reduce tool-result buildup.
- If you need a fresh slate, `/new` or `/reset` starts a new session id.