
Agent Process Model

Crate: crates/infrastructure/agent-core/src/process.rs


The agent process model defines four agent types organized around a team metaphor. Each type has a distinct role, model class, tool filter, and spawn permission. The Orchestrator is the single coordination point; all other types communicate exclusively through it (hub-and-spoke).

Context assembly is handled by the Context Pipeline, a deterministic infrastructure component (not an agent type). The pipeline runs before the Orchestrator loop begins, assembling skill recommendations, curated memories, and workspace metadata into a ContextPackage. A single-call Refinement Gate evaluates sufficiency. See Agent Core System for the full pipeline specification.

The model is governed by one core principle: context compression. No agent type holds raw workspace content in its conversation history. Raw data flows through task agents (Researcher, Worker) and returns to the Orchestrator as structured summaries. This keeps the Orchestrator’s context window focused on conversation management and decision-making.

| Type | Role | Spawned By | Context Contains |
|---|---|---|---|
| Orchestrator | User-facing coordinator | System | Conversation + structured results |
| Researcher | Read-only investigation | Orchestrator | Workspace data (read-only tools) |
| Worker | Task execution with scoped writes | Orchestrator | Task instructions + scoped tools |
| Skill Composer | Skill authoring and refinement | Orchestrator | Skill definitions + user feedback |

The Context Pipeline is deterministic infrastructure, not an agent. It runs skill search, memory retrieval, and workspace metadata queries in parallel (~100ms, 0 tokens), then passes the assembled ContextPackage through a Refinement Gate (single Cheap LLM call, ~500ms). The pipeline replaces the earlier Librarian and Archivist specialist agent types.

Three verbs, three owners:

  • select (Context Pipeline) — chooses relevant skills, memories, and workspace metadata
  • execute (Worker) — runs skill artifacts and performs workspace mutations
  • investigate (Researcher) — reads workspace content, traverses references, searches history

The agent type registry is the single source of truth for process type configuration. Each entry defines the model class, tool access, system prompt template, and spawn permissions for one agent type.

```rust
/// The kind of process, which determines its default capabilities.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub enum ProcessType {
    /// User-facing coordinator. Delegates to all other types.
    Orchestrator,
    /// Read-only investigation. Reports structured findings.
    Researcher,
    /// Task execution with scoped write access.
    Worker,
    /// Skill authoring and iterative refinement.
    SkillComposer,
}
```
| Field | Type | Description |
|---|---|---|
| `process_type` | `ProcessType` | Enum variant |
| `model_class` | `ModelClass` | Routing hint for provider selection |
| `tool_filter` | `ToolFilter` | Which tools this type can access |
| `system_prompt` | `String` | Base system prompt (template, rendered per invocation) |
| `max_turns` | `u32` | Maximum LLM call turns before forced termination |
| `can_be_interrupted` | `bool` | Whether the Orchestrator can cancel mid-execution |
| `can_spawn_processes` | `bool` | Whether this type can spawn sub-processes |
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum ModelClass {
    /// Best available model. User-facing quality.
    Frontier,
    /// Fast, capable model. Throughput over polish.
    Fast,
    /// Cheapest available model. Background/bulk work.
    Cheap,
}
```
```rust
impl ProcessConfig {
    pub fn orchestrator() -> Self {
        Self {
            process_type: ProcessType::Orchestrator,
            model_class: ModelClass::Frontier,
            tool_filter: ToolFilter::All,
            system_prompt: ORCHESTRATOR_SYSTEM_PROMPT.into(),
            max_turns: 50,
            can_be_interrupted: true,
            can_spawn_processes: true,
        }
    }

    pub fn researcher() -> Self {
        Self {
            process_type: ProcessType::Researcher,
            model_class: ModelClass::Frontier,
            tool_filter: ToolFilter::ReadOnly,
            system_prompt: RESEARCHER_SYSTEM_PROMPT.into(),
            max_turns: 20,
            can_be_interrupted: true,
            can_spawn_processes: false,
        }
    }

    pub fn worker() -> Self {
        Self {
            process_type: ProcessType::Worker,
            model_class: ModelClass::Fast,
            tool_filter: ToolFilter::All,
            system_prompt: WORKER_SYSTEM_PROMPT.into(),
            max_turns: 10,
            can_be_interrupted: true,
            can_spawn_processes: false,
        }
    }

    pub fn skill_composer() -> Self {
        Self {
            process_type: ProcessType::SkillComposer,
            model_class: ModelClass::Frontier,
            tool_filter: ToolFilter::Named(hashset![
                "create_skill", "update_skill", "add_artifact",
                "update_artifact", "validate_skill", "test_skill",
                "list_skills", "get_skill"
            ]),
            system_prompt: SKILL_COMPOSER_SYSTEM_PROMPT.into(),
            max_turns: 30,
            can_be_interrupted: true,
            can_spawn_processes: false,
        }
    }
}
```

The Orchestrator is the user’s primary interface. It receives messages, delegates work to task agents, and synthesizes results. It was renamed from TeamLead, its name in the original 3-type model.

Key constraints:

  • Never holds raw workspace content. The Orchestrator’s context window contains conversation history, the ContextPackage from the pipeline, and structured results from task agents. It never receives full page content, block lists, or search result bodies directly.
  • Never blocks on execution. Heavy work is delegated to Researcher or Worker processes immediately. Results are reported asynchronously.
  • Receives ContextPackage from pipeline. Every user message triggers the Context Pipeline before the Orchestrator decides on an action. The pipeline assembles skill recommendations, curated memories, and workspace metadata.
  • Has retrieval tools for mid-conversation refinement. Skill search and memory search are registered as Orchestrator tools for cases where the pipeline’s baseline context is insufficient.

Spawn permissions: Only the Orchestrator can spawn sub-processes. This is enforced at the ProcessManager level:

```rust
pub fn can_spawn(process_type: ProcessType) -> bool {
    matches!(process_type, ProcessType::Orchestrator)
}
```
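At spawn time, the ProcessManager presumably turns a failed check into an error rather than panicking. The sketch below shows one plausible enforcement shape; the `SpawnError` type and `check_spawn` helper are illustrative assumptions, not the actual API:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ProcessType { Orchestrator, Researcher, Worker, SkillComposer }

// Hypothetical error type for a rejected spawn request.
#[derive(Debug, PartialEq)]
pub enum SpawnError { NotPermitted(ProcessType) }

pub fn can_spawn(process_type: ProcessType) -> bool {
    matches!(process_type, ProcessType::Orchestrator)
}

/// Sketch of the check a ProcessManager might run before spawning a child.
pub fn check_spawn(parent: ProcessType) -> Result<(), SpawnError> {
    if can_spawn(parent) { Ok(()) } else { Err(SpawnError::NotPermitted(parent)) }
}

fn main() {
    // Only the Orchestrator passes; every other type is rejected.
    assert!(check_spawn(ProcessType::Orchestrator).is_ok());
    assert_eq!(
        check_spawn(ProcessType::Worker),
        Err(SpawnError::NotPermitted(ProcessType::Worker))
    );
}
```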

After receiving the ContextPackage from the pipeline, the Orchestrator decides one of:

  1. Direct response — Answer from conversation context alone (no delegation needed).
  2. Skill activation — Load skill artifacts recommended by the pipeline, dispatch Worker.
  3. Research dispatch — Spawn Researcher for investigation.
  4. Skill authoring — Spawn Skill Composer for skill creation/modification (user-initiated).
  5. Clarification — Ask user for more information.

The Researcher is a read-only investigation process, unchanged from the original model.

  • Gathers information, analyzes structure, searches history.
  • Reports structured findings back to the Orchestrator.
  • Stores intermediate results in session scratchpad for the Orchestrator to query, rather than dumping full results into conversation context.
  • Frontier model class — analysis quality drives research value.
  • Cannot mutate workspace state (enforced by ToolFilter::ReadOnly).
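How `ToolFilter::ReadOnly` might enforce that last constraint is sketched below. The read-only tool names and the `tool_allowed` helper are assumptions for illustration; the real check lives in `tool_allowed_for()` in process.rs:

```rust
use std::collections::HashSet;

// Assumed shape of ToolFilter for this sketch.
pub enum ToolFilter {
    All,
    ReadOnly,
    Named(HashSet<&'static str>),
}

// Hypothetical read-only tool list; the actual set is defined elsewhere.
const READ_ONLY_TOOLS: &[&str] = &["get_page", "search", "list_blocks"];

/// Sketch of a filter check: mutating tools never pass ReadOnly.
pub fn tool_allowed(filter: &ToolFilter, tool: &str) -> bool {
    match filter {
        ToolFilter::All => true,
        ToolFilter::ReadOnly => READ_ONLY_TOOLS.contains(&tool),
        ToolFilter::Named(set) => set.contains(tool),
    }
}

fn main() {
    let researcher = ToolFilter::ReadOnly;
    assert!(tool_allowed(&researcher, "search"));
    // A workspace mutation is rejected before it ever reaches execution.
    assert!(!tool_allowed(&researcher, "create_page"));
}
```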

The Worker performs focused task execution with scoped write access.

  • Creates pages, reorganizes subtrees, applies edits, runs skills.
  • Can be fire-and-forget or interactive (Orchestrator awaits result).
  • Fast model class — throughput over polish.
  • RLM integration: For code_template skill artifacts, the Worker invokes the RLM executor (CPython-in-Wasmtime) to run LLM-generated analysis scripts. The Worker manages the RLM lifecycle: pre-warm during LLM inference, execute, collect results.
  • DSPy integration: For dspy_module skill artifacts, the Worker invokes the run_skill host function to proxy execution to the Python sidecar.

The Skill Composer handles skill authoring and iterative refinement. It is invoked only when the user explicitly requests skill creation or modification, and it operates in a dedicated channel outside the core conversation loop. It replaces the earlier “Trainer” concept.

Capabilities:

  • Create new skills with multiple artifacts.
  • Add, modify, or remove artifacts from existing skills.
  • Validate skill structure (schema compliance, artifact coherence).
  • Test skills against sample inputs.
  • Iterative refinement loop with user feedback.
  • DSPy optimization of prompt artifacts (when Python sidecar is available).

Characteristics:

  • Frontier model class — creative, high-quality output for prompt engineering.
  • Maximum 30 turns — skill authoring is conversational and iterative.
  • Scoped tool access: skill CRUD tools only (no workspace write access).
  • Cannot spawn sub-processes.

Every user message follows a three-phase flow: context assembly by the Context Pipeline, a decision by the Orchestrator, and execution by task agents.

Each agent type maps to a model class, which the ProviderRegistry resolves to a concrete model based on the user’s configured providers and API keys.

| Agent Type | Model Class | Default Max Turns | Rationale |
|---|---|---|---|
| Orchestrator | Frontier | 50 | User-facing quality; conversation coherence |
| Researcher | Frontier | 20 | Analysis quality drives research value |
| Worker | Fast | 10 | Throughput over polish; bounded task scope |
| Skill Composer | Frontier | 30 | Creative skill authoring requires top-tier reasoning |
| Refinement Gate | Cheap | 1 | Single accept/refine decision; cost-sensitive |
| Consolidation (bg) | Cheap | N/A | Memory maintenance at scale; cost-sensitive |

Fallback chains: On 429/502/503/504 from the primary provider, the harness retries with the next provider in the configured fallback chain. Rate-limited models enter a cooldown period. Context overflow detection triggers session continuity mechanisms before the next LLM call.
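The retry-with-fallback behavior can be sketched as follows. The provider names, the `call` closure, and the exact cooldown handling are illustrative assumptions; only the retryable status codes come from the text above:

```rust
/// Only these provider errors trigger a fallback to the next provider.
fn is_retryable(status: u16) -> bool {
    matches!(status, 429 | 502 | 503 | 504)
}

/// Try each provider in the configured chain; fall through on retryable
/// errors, surface anything else immediately.
fn call_with_fallback<F>(chain: &[&str], mut call: F) -> Result<String, u16>
where
    F: FnMut(&str) -> Result<String, u16>,
{
    let mut last_err = 0;
    for provider in chain {
        match call(provider) {
            Ok(resp) => return Ok(resp),
            Err(status) if is_retryable(status) => last_err = status,
            Err(status) => return Err(status), // non-retryable: stop here
        }
    }
    Err(last_err)
}

fn main() {
    let chain = ["anthropic", "openrouter", "ollama"];
    // Simulate the primary being rate-limited and the fallback succeeding.
    let result = call_with_fallback(&chain, |p| {
        if p == "anthropic" { Err(429) } else { Ok(format!("ok from {p}")) }
    });
    assert_eq!(result, Ok("ok from openrouter".to_string()));
}
```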

Supported providers: Anthropic (Claude), OpenAI (GPT), xAI (Grok), OpenRouter (100+ models via single API key or OAuth PKCE), Ollama (local, keyless).

The TeamLead process type is renamed to Orchestrator. This is a breaking change to the ProcessType enum and all code that references it.

| Item | Location | Change |
|---|---|---|
| `ProcessType::TeamLead` | crates/infrastructure/agent-core/src/process.rs | Rename to `ProcessType::Orchestrator` |
| `ProcessConfig::team_lead()` | Same file | Rename to `ProcessConfig::orchestrator()` |
| `can_spawn()` | Same file | Update match arm |
| `tool_allowed_for()` | Same file | Update match arm |
| `ListProcessTypesTool` | Same file | Update returned JSON |
| System prompt constant | Process config | Update “TeamLead” references to “Orchestrator” |
| Test assertions | process.rs tests | Update all `ProcessType::TeamLead` references |
| Agent Core System doc | apps/codex/src/content/docs/systems/agent/agent-core-system.mdx | Cross-reference this doc |

In addition to the rename, one new ProcessType variant must be added:

  • SkillComposer — with ProcessConfig::skill_composer() constructor

The ProcessManager::spawn() method requires no changes — it already accepts arbitrary ProcessConfig values. The new type is purely additive to the enum and config constructors.

The following types from the previous 6-type model are not implemented as agent types:

  • Librarian — replaced by the Context Pipeline’s deterministic skill search
  • Archivist — replaced by the Context Pipeline’s deterministic memory retrieval + Refinement Gate
  • Trainer — renamed to Skill Composer with functional naming convention
