What the timeline shows
The session walks through a realistic flow with representative token counts:
- Before you type anything: CLAUDE.md, auto memory, MCP tool names, and skill descriptions all load into context. Your own setup may add more here, such as an output style or text from --append-system-prompt, both of which go into the system prompt the same way.
- As Claude works: each file read adds to context, path-scoped rules load automatically alongside matching files, and a PostToolUse hook fires after each edit.
- The follow-up prompt: a subagent handles the research in its own separate context window, so the large file reads stay out of yours. Only the summary and a small metadata trailer come back.
- At the end: /compact replaces the conversation with a structured summary. Most startup content reloads automatically; the skill listing is the one exception.
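As a concrete illustration of the PostToolUse step above, hooks are configured in a settings.json file. This is a minimal sketch, assuming a hypothetical lint script at ./scripts/lint-changed.sh and a matcher limited to edit tools; check the hooks reference for the exact schema your version supports:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/lint-changed.sh"
          }
        ]
      }
    ]
  }
}
```

Because the hook fires after every matching edit, whatever it emits back into the conversation is a recurring per-edit cost, so keeping its output terse keeps that slice of the timeline small.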
Check your own session
The visualization uses representative numbers. To see your actual context usage at any point, run /context for a live breakdown by category with optimization suggestions. Run /memory to check which CLAUDE.md and auto memory files loaded at startup.
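For example, inside an interactive session (the prompt marker and the exact report layout are illustrative and vary by version):

```
> /context
  # breakdown by category, e.g. system prompt, tools,
  # memory files, messages, and remaining free space
> /memory
  # lists the CLAUDE.md and auto memory files loaded at startup
```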
Related resources
For deeper coverage of the features shown in the timeline, see these pages:
- Extend Claude Code: when to use CLAUDE.md vs skills vs rules vs hooks vs MCP
- Store instructions and memories: CLAUDE.md hierarchy and auto memory
- Subagents: delegate research to a separate context window
- Best practices: managing context as your primary constraint
- Reduce token usage: strategies for keeping context usage low