# Agent Process Model

Crate: `crates/infrastructure/agent-core/src/process.rs`
## Overview

The agent process model defines four agent types organized around a team metaphor. Each type has a distinct role, model class, tool filter, and spawn permission. The Orchestrator is the single coordination point; all other types communicate exclusively through it (hub-and-spoke).
Context assembly is handled by the Context Pipeline, a deterministic infrastructure component (not an agent type).
The pipeline runs before the Orchestrator loop begins, assembling skill recommendations, curated memories, and workspace
metadata into a ContextPackage. A single-call Refinement Gate evaluates sufficiency. See
Agent Core System for the full pipeline specification.
The model is governed by one core principle: context compression. No agent type holds raw workspace content in its conversation history. Raw data flows through task agents (Researcher, Worker) and returns to the Orchestrator as structured summaries. This keeps the Orchestrator’s context window focused on conversation management and decision-making.
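The compression boundary can be pictured as a small result type. The sketch below is illustrative: the struct name and fields are assumptions, not the crate's actual API.

```rust
/// Hypothetical shape of a task-agent result. Raw workspace content stays
/// inside the task agent's own context window; only this compressed summary
/// crosses back to the Orchestrator.
#[derive(Debug, Clone)]
pub struct StructuredResult {
    /// One-paragraph summary of what was found or done.
    pub summary: String,
    /// Stable references (e.g. page or block IDs) the Orchestrator can cite
    /// or hand to a later Worker without holding the content itself.
    pub references: Vec<String>,
}
```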
## Agent Types at a Glance

| Type | Role | Spawned By | Context Contains |
|---|---|---|---|
| Orchestrator | User-facing coordinator | System | Conversation + structured results |
| Researcher | Read-only investigation | Orchestrator | Workspace data (read-only tools) |
| Worker | Task execution with scoped writes | Orchestrator | Task instructions + scoped tools |
| Skill Composer | Skill authoring and refinement | Orchestrator | Skill definitions + user feedback |
## What Is NOT an Agent Type

The Context Pipeline is deterministic infrastructure, not an agent. It runs skill search, memory retrieval, and
workspace metadata queries in parallel (~100ms, 0 tokens), then passes the assembled ContextPackage through a
Refinement Gate (single Cheap LLM call, ~500ms). The pipeline replaces the earlier Librarian and Archivist specialist
agent types.
Three verbs, three owners:
- select (Context Pipeline) — chooses relevant skills, memories, and workspace metadata
- execute (Worker) — runs skill artifacts and performs workspace mutations
- investigate (Researcher) — reads workspace content, traverses references, searches history
## Agent Type Registry

The agent type registry is the single source of truth for process type configuration. Each entry defines the model class, tool access, system prompt template, and spawn permissions for one agent type.
### ProcessType Enum

```rust
/// The kind of process, which determines its default capabilities.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub enum ProcessType {
    /// User-facing coordinator. Delegates to all other types.
    Orchestrator,
    /// Read-only investigation. Reports structured findings.
    Researcher,
    /// Task execution with scoped write access.
    Worker,
    /// Skill authoring and iterative refinement.
    SkillComposer,
}
```

### Per-Type Configuration
| Field | Type | Description |
|---|---|---|
| `process_type` | `ProcessType` | Enum variant |
| `model_class` | `ModelClass` | Routing hint for provider selection |
| `tool_filter` | `ToolFilter` | Which tools this type can access |
| `system_prompt` | `String` | Base system prompt (template, rendered per invocation) |
| `max_turns` | `u32` | Maximum LLM call turns before forced termination |
| `can_be_interrupted` | `bool` | Whether the Orchestrator can cancel mid-execution |
| `can_spawn_processes` | `bool` | Whether this type can spawn sub-processes |
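Put together, the table maps onto a plain record. A compilable sketch with minimal stand-in enums (the real definitions live in `process.rs` and carry more derives; `ToolFilter` is simplified here):

```rust
// Stand-in enums so the record compiles on its own; illustrative only.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ProcessType { Orchestrator, Researcher, Worker, SkillComposer }

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ModelClass { Frontier, Fast, Cheap }

#[derive(Debug, Clone)]
pub enum ToolFilter { All, ReadOnly, Named(Vec<String>) }

/// Per-type configuration record, mirroring the field table above.
#[derive(Debug, Clone)]
pub struct ProcessConfig {
    pub process_type: ProcessType,
    pub model_class: ModelClass,
    pub tool_filter: ToolFilter,
    pub system_prompt: String,
    pub max_turns: u32,
    pub can_be_interrupted: bool,
    pub can_spawn_processes: bool,
}
```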
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum ModelClass {
    /// Best available model. User-facing quality.
    Frontier,
    /// Fast, capable model. Throughput over polish.
    Fast,
    /// Cheapest available model. Background/bulk work.
    Cheap,
}
```

### Default Configurations
```rust
impl ProcessConfig {
    pub fn orchestrator() -> Self {
        Self {
            process_type: ProcessType::Orchestrator,
            model_class: ModelClass::Frontier,
            tool_filter: ToolFilter::All,
            system_prompt: ORCHESTRATOR_SYSTEM_PROMPT.into(),
            max_turns: 50,
            can_be_interrupted: true,
            can_spawn_processes: true,
        }
    }

    pub fn researcher() -> Self {
        Self {
            process_type: ProcessType::Researcher,
            model_class: ModelClass::Frontier,
            tool_filter: ToolFilter::ReadOnly,
            system_prompt: RESEARCHER_SYSTEM_PROMPT.into(),
            max_turns: 20,
            can_be_interrupted: true,
            can_spawn_processes: false,
        }
    }

    pub fn worker() -> Self {
        Self {
            process_type: ProcessType::Worker,
            model_class: ModelClass::Fast,
            tool_filter: ToolFilter::All,
            system_prompt: WORKER_SYSTEM_PROMPT.into(),
            max_turns: 10,
            can_be_interrupted: true,
            can_spawn_processes: false,
        }
    }

    pub fn skill_composer() -> Self {
        Self {
            process_type: ProcessType::SkillComposer,
            model_class: ModelClass::Frontier,
            tool_filter: ToolFilter::Named(hashset![
                "create_skill",
                "update_skill",
                "add_artifact",
                "update_artifact",
                "validate_skill",
                "test_skill",
                "list_skills",
                "get_skill"
            ]),
            system_prompt: SKILL_COMPOSER_SYSTEM_PROMPT.into(),
            max_turns: 30,
            can_be_interrupted: true,
            can_spawn_processes: false,
        }
    }
}
```

## Orchestrator
The Orchestrator is the user’s primary interface. It receives messages, delegates work to task agents, and synthesizes
results. Renamed from TeamLead in the original 3-type model.
Key constraints:
- Never holds raw workspace content. The Orchestrator’s context window contains conversation history, the `ContextPackage` from the pipeline, and structured results from task agents. It never receives full page content, block lists, or search result bodies directly.
- Never blocks on execution. Heavy work is delegated to Researcher or Worker processes immediately. Results are reported asynchronously.
- Receives ContextPackage from pipeline. Every user message triggers the Context Pipeline before the Orchestrator decides on an action. The pipeline assembles skill recommendations, curated memories, and workspace metadata.
- Has retrieval tools for mid-conversation refinement. Skill search and memory search are registered as Orchestrator tools for cases where the pipeline’s baseline context is insufficient.
Spawn permissions: Only the Orchestrator can spawn sub-processes. This is enforced at the ProcessManager level:
```rust
pub fn can_spawn(process_type: ProcessType) -> bool {
    matches!(process_type, ProcessType::Orchestrator)
}
```

### Orchestrator Decision Flow
After receiving the ContextPackage from the pipeline, the Orchestrator decides one of:
- Direct response — Answer from conversation context alone (no delegation needed).
- Skill activation — Load skill artifacts recommended by the pipeline, dispatch Worker.
- Research dispatch — Spawn Researcher for investigation.
- Skill authoring — Spawn Skill Composer for skill creation/modification (user-initiated).
- Clarification — Ask user for more information.
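These five outcomes can be modeled as an enum the Orchestrator loop matches on. A sketch: the variant names and payloads are assumptions, not the crate's actual API.

```rust
/// Illustrative decision type for the Orchestrator loop.
pub enum OrchestratorAction {
    /// Answer directly from conversation context.
    DirectResponse(String),
    /// Load recommended skill artifacts and dispatch a Worker.
    SkillActivation { skill_id: String },
    /// Spawn a Researcher with an investigation question.
    ResearchDispatch { question: String },
    /// Spawn the Skill Composer (user-initiated only).
    SkillAuthoring { request: String },
    /// Ask the user for more information.
    Clarification(String),
}

/// Example of matching on a decision.
pub fn describe(action: &OrchestratorAction) -> &'static str {
    match action {
        OrchestratorAction::DirectResponse(_) => "respond",
        OrchestratorAction::SkillActivation { .. } => "dispatch worker",
        OrchestratorAction::ResearchDispatch { .. } => "spawn researcher",
        OrchestratorAction::SkillAuthoring { .. } => "spawn skill composer",
        OrchestratorAction::Clarification(_) => "ask user",
    }
}
```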
## Researcher

Read-only investigation process. Unchanged from the original model.
- Gathers information, analyzes structure, searches history.
- Reports structured findings back to the Orchestrator.
- Stores intermediate results in session scratchpad for the Orchestrator to query, rather than dumping full results into conversation context.
- Frontier model class — analysis quality drives research value.
- Cannot mutate workspace state (enforced by `ToolFilter::ReadOnly`).
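One plausible way to enforce this is a predicate on the filter. The sketch below is an assumption, not the crate's `tool_allowed_for()` implementation; in particular, the `is_read_only` flag is assumed to come from the tool's registration metadata.

```rust
/// Simplified tool filter; the real enum lives in process.rs.
pub enum ToolFilter {
    All,
    ReadOnly,
    Named(Vec<&'static str>),
}

impl ToolFilter {
    /// Whether the named tool passes this filter. `is_read_only` is
    /// assumed to be declared by the tool itself at registration time.
    pub fn allows(&self, tool: &str, is_read_only: bool) -> bool {
        match self {
            ToolFilter::All => true,
            ToolFilter::ReadOnly => is_read_only,
            ToolFilter::Named(names) => names.iter().any(|n| *n == tool),
        }
    }
}
```

A Researcher configured with `ToolFilter::ReadOnly` therefore rejects any mutating tool before the call ever reaches the workspace.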
## Worker

Focused task execution with scoped write access.
- Creates pages, reorganizes subtrees, applies edits, runs skills.
- Can be fire-and-forget or interactive (Orchestrator awaits result).
- Fast model class — throughput over polish.
- RLM integration: For `code_template` skill artifacts, the Worker invokes the RLM executor (CPython-in-Wasmtime) to run LLM-generated analysis scripts. The Worker manages the RLM lifecycle: pre-warm during LLM inference, execute, collect results.
- DSPy integration: For `dspy_module` skill artifacts, the Worker invokes the `run_skill` host function to proxy execution to the Python sidecar.
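The two integrations amount to a dispatch on artifact kind. A minimal sketch, where the kind names follow the artifacts above but the returned executor identifiers are placeholders for the real entry points:

```rust
/// Skill artifact kinds the Worker knows how to execute.
enum ArtifactKind {
    CodeTemplate, // run via the RLM executor (CPython-in-Wasmtime)
    DspyModule,   // proxied to the Python sidecar via run_skill
}

/// Pick the execution path for an artifact. The identifiers returned
/// here are placeholders, not the crate's actual entry points.
fn execution_path(kind: &ArtifactKind) -> &'static str {
    match kind {
        ArtifactKind::CodeTemplate => "rlm_executor",
        ArtifactKind::DspyModule => "run_skill_sidecar",
    }
}
```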
## Skill Composer

The Skill Composer handles skill authoring and iterative refinement. It is invoked only when the user explicitly requests skill creation or modification, and it operates in a dedicated channel rather than in the core conversation loop. It replaces the earlier “Trainer” concept.
Capabilities:
- Create new skills with multiple artifacts.
- Add, modify, or remove artifacts from existing skills.
- Validate skill structure (schema compliance, artifact coherence).
- Test skills against sample inputs.
- Iterative refinement loop with user feedback.
- DSPy optimization of prompt artifacts (when Python sidecar is available).
Characteristics:
- Frontier model class — creative, high-quality output for prompt engineering.
- Maximum 30 turns — skill authoring is conversational and iterative.
- Scoped tool access: skill CRUD tools only (no workspace write access).
- Cannot spawn sub-processes.
### Skill Authoring Flow

## Key Flows

### Per-Message Flow (Pipeline-Decide-Act)

Every user message follows this three-phase flow:
### Skill Execution Flow

## Multi-Provider Routing Table
Each agent type maps to a model class, which the ProviderRegistry resolves to a concrete model based on the user’s
configured providers and API keys.
| Agent Type | Model Class | Default Max Turns | Rationale |
|---|---|---|---|
| Orchestrator | Frontier | 50 | User-facing quality; conversation coherence |
| Researcher | Frontier | 20 | Analysis quality drives research value |
| Worker | Fast | 10 | Throughput over polish; bounded task scope |
| Skill Composer | Frontier | 30 | Creative skill authoring requires top-tier reasoning |
| Refinement Gate | Cheap | 1 | Single accept/refine decision; cost-sensitive |
| Consolidation (bg) | Cheap | N/A | Memory maintenance at scale; cost-sensitive |
Fallback chains: On 429/502/503/504 from the primary provider, the harness retries with the next provider in the configured fallback chain. Rate-limited models enter a cooldown period. Context overflow detection triggers session continuity mechanisms before the next LLM call.
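The fallback behavior can be sketched as a walk over the configured chain, advancing only on retryable statuses. Function names and the closure-based provider call are illustrative, not the harness's real API:

```rust
/// Statuses treated as retryable per the fallback description above.
fn is_retryable(status: u16) -> bool {
    matches!(status, 429 | 502 | 503 | 504)
}

/// Try each provider in the configured chain: advance on retryable
/// errors, stop on success or a non-retryable error. `call` stands in
/// for the real provider invocation.
fn call_with_fallback<F>(chain: &[&str], mut call: F) -> Option<String>
where
    F: FnMut(&str) -> Result<String, u16>,
{
    for &provider in chain {
        match call(provider) {
            Ok(body) => return Some(body),
            Err(status) if is_retryable(status) => continue, // next in chain
            Err(_) => return None, // non-retryable: surface immediately
        }
    }
    None // chain exhausted
}
```

Cooldown tracking and context-overflow handling (mentioned above) would sit outside this loop, before the call is attempted at all.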
Supported providers: Anthropic (Claude), OpenAI (GPT), xAI (Grok), OpenRouter (100+ models via single API key or OAuth PKCE), Ollama (local, keyless).
## Migration from TeamLead

The TeamLead process type is renamed to Orchestrator. This is a breaking change to the ProcessType enum and all
code that references it.
### Rename Checklist

| Item | Location | Change |
|---|---|---|
| `ProcessType::TeamLead` | crates/infrastructure/agent-core/src/process.rs | Rename to `ProcessType::Orchestrator` |
| `ProcessConfig::team_lead()` | Same file | Rename to `ProcessConfig::orchestrator()` |
| `can_spawn()` | Same file | Update match arm |
| `tool_allowed_for()` | Same file | Update match arm |
| `ListProcessTypesTool` | Same file | Update returned JSON |
| System prompt constant | Process config | Update “TeamLead” references to “Orchestrator” |
| Test assertions | process.rs tests | Update all `ProcessType::TeamLead` references |
| Agent Core System doc | apps/codex/src/content/docs/systems/agent/agent-core-system.mdx | Cross-reference this doc |
| Agent Core System doc | apps/codex/src/content/docs/systems/agent/agent-core-system.mdx | Cross-reference this doc |
### New Types to Add

In addition to the rename, one new ProcessType variant must be added:

- `SkillComposer` — with a `ProcessConfig::skill_composer()` constructor
The ProcessManager::spawn() method requires no changes — it already accepts arbitrary ProcessConfig values. The new
type is purely additive to the enum and config constructors.
### Types Removed

The following types from the previous 6-type model are not implemented as agent types:
- Librarian — replaced by the Context Pipeline’s deterministic skill search
- Archivist — replaced by the Context Pipeline’s deterministic memory retrieval + Refinement Gate
- Trainer — renamed to Skill Composer with functional naming convention
## Related Documents

- Agent Core System — Parent system document
- Skill System — Skill package schema and execution model
- Agent Memory System — Memory matrix, decay model, consolidation
- LLM System — Multi-provider abstraction and routing