Agent Settings

Covers the full agent configuration surface: LLM provider selection (Anthropic, OpenAI, xAI, Ollama, OpenRouter), model switching, API key lifecycle (validate, store, hide, remove), proactive suggestion interval, capability defaults, and the guided first-time setup flow. Ollama-specific behaviors are given dedicated coverage: status indicator, hardware-aware recommendations, model picker with “Installed” badges, streaming download with progress bar, download error and retry, custom endpoint URL, and the Test Connection button. This spec is P1 because a misconfigured provider silently falls back to StubLlmProvider, producing unhelpful “not configured” errors that the user cannot diagnose from the UI.

Settings are persisted as JSON via update_agent_settings. API keys are stored exclusively in the OS keychain — the settings file only records a boolean api_key_configured flag. The bridge exposes get_settings (used for post-save assertions), the settings-mutation commands, and agent-specific commands including the Ollama and OpenRouter routes. OS keychain operations (set_api_key, remove_api_key, validate_api_key) are exercised through the Tauri frontend UI path.
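The key-handling contract above can be modeled in-memory for test doubles. This is a sketch only — FakeSettingsStore and its method names are assumptions, not the app's Rust commands; the one thing taken from the spec is the invariant that raw keys live in the keychain and the settings JSON carries only a boolean flag.

```typescript
// Hypothetical in-memory model of the settings/keychain split described above.
type AgentSettings = {
  provider: string;
  model: string;
  api_key_configured: boolean;
};

class FakeSettingsStore {
  private settings: AgentSettings | null = null;
  private keychain = new Map<string, string>(); // stands in for the OS keychain

  updateAgentSettings(patch: Partial<AgentSettings>): void {
    this.settings = {
      provider: "anthropic",
      model: "",
      api_key_configured: false,
      ...(this.settings ?? {}),
      ...patch,
    };
  }

  // set_api_key analogue: the key goes to the keychain; the settings JSON
  // only ever records the boolean flag.
  setApiKey(service: string, key: string): void {
    this.keychain.set(service, key);
    this.updateAgentSettings({ api_key_configured: true });
  }

  removeApiKey(service: string): void {
    this.keychain.delete(service);
    this.updateAgentSettings({ api_key_configured: false });
  }

  // get_settings analogue: the returned object never contains the raw key.
  getSettings(): AgentSettings | null {
    return this.settings ? { ...this.settings } : null;
  }
}
```

A seed fixture built on this shape lets bridge tests assert api_key_configured transitions without touching a real keychain.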

Preconditions

  • HTTP bridge running on port 9990
  • A workspace initialized via initialize_workspace before each scenario (agent settings are global, not workspace-scoped, but a workspace must be open for agent commands to fire)
  • Bridge shim injected via playwright.config.ts

Scenarios

Seed: seed.spec.ts

1. First-time setup — guided flow presented when agent is unconfigured

On first open, when get_agent_settings returns null, the settings panel shows the guided setup flow instead of the configuration form.

Steps:

  1. Open the app with a fresh settings file (no prior agent configuration).
  2. Navigate to Settings → Agent.
  3. Observe the panel.

Expected: The [data-testid="agent-settings-setup"] element is visible. A “Set up AI agent” heading is shown. Two setup-path cards are visible: “Local Model (Free)” ([data-testid="setup-path-local"]) and at least one cloud provider option. The configuration form (provider selector, API key row, model selector) is not shown at this stage.

2. First-time setup — choosing Local Model initializes Ollama provider

Clicking the “Local Model” card transitions to step 1 of the guided flow and initializes agent settings with provider: "ollama".

Steps:

  1. Start from the unconfigured state (guided flow visible).
  2. Click the “Local Model (Free)” card ([data-testid="setup-path-local"]).
  3. Observe the panel and call get_settings via the bridge after the click.

Expected: update_agent_settings is called with a settings object where provider = "ollama". The bridge’s get_settings response shows agent.provider = "ollama". The panel transitions away from the path-choice step. The Ollama status section becomes visible.

3. Switch LLM provider from Anthropic to OpenAI

Changing the provider dropdown updates the persisted agent.provider field.

Steps:

  1. Ensure agent settings are configured with provider: "anthropic" (via update_agent_settings or prior test).
  2. Navigate to Settings → Agent.
  3. Change the provider selector to “OpenAI”.
  4. Observe the auto-save or save action.
  5. Call get_settings via the bridge to confirm persistence.

Expected: The bridge’s get_settings response shows agent.provider = "open_ai" (snake_case as produced by Specta from AgentProvider::OpenAi). The model selector updates to show OpenAI models. The API key row prompts for an OpenAI key (showing a “Configure” button if no key is set, or “Configured” status if one exists).
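The display-label-to-serialized-value mapping this scenario relies on can be captured in a small helper. The helper is a hypothetical test utility; the string values themselves are the ones the spec lists (snake_case from the Specta-generated bindings).

```typescript
// Assumed test helper: maps UI display labels to the serialized provider
// values that appear in the get_settings JSON.
const PROVIDER_SERIALIZED: Record<string, string> = {
  Anthropic: "anthropic",
  OpenAI: "open_ai",
  xAI: "xai",
  Ollama: "ollama",
  OpenRouter: "open_router",
};

function serializedProvider(label: string): string {
  const value = PROVIDER_SERIALIZED[label];
  if (value === undefined) throw new Error(`unknown provider label: ${label}`);
  return value;
}
```

Asserting through a helper like this keeps tests from hard-coding "openai" or "openAI" by accident.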

4. Switch model within a provider

Changing the model selection within the same provider updates only agent.model.

Steps:

  1. Ensure agent settings are configured with provider: "anthropic" and model: "claude-sonnet-4-6".
  2. Navigate to Settings → Agent.
  3. Change the model selector to a different Anthropic model (e.g., claude-opus-4-6).
  4. Call get_settings via the bridge.

Expected: The bridge response shows agent.model = "claude-opus-4-6". The provider remains "anthropic". The API key status is unchanged.

5. API key validation — invalid key is rejected with clear error

Submitting an invalid API key triggers validation before storage and displays a clear error message.

Steps:

  1. Navigate to Settings → Agent with an Anthropic provider selected and no key configured.
  2. Click “Configure” to show the key input field.
  3. Type a clearly invalid key (e.g., bad-key-123).
  4. Click “Save” (or the confirm button).
  5. Observe the error display.

Expected: The key is NOT stored. api_key_configured remains false in settings. The UI shows an error message: “Invalid API key. Please check and try again.” (or equivalent). The key input field remains visible so the user can correct the value.

6. API key validation — rate-limited response is surfaced to user

When the provider returns a rate-limit response during validation, a specific message is shown.

Steps:

  1. Configure a provider and enter a key that triggers a rate-limit response (simulate via test fixture or mock).
  2. Click “Save” on the key input.

Expected: The error message reads “Rate limited by provider. Please wait and try again.” api_key_configured remains false. The key is not stored in the keychain.

7. API key validation — network error is surfaced to user

When validate_api_key returns NetworkError(reason), the reason is shown in the UI.

Steps:

  1. Enter a key while the provider endpoint is unreachable (e.g., device offline or test fixture).
  2. Click “Save”.

Expected: The UI shows a message like “Network error: connection refused” (or the specific reason from the NetworkError variant). The key is not stored.
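Scenarios 5–7 all exercise one mapping: a validation outcome to a user-facing message. A sketch of that mapping, assuming the status-code conventions from the Notes section (401 → Invalid, 429 → RateLimited) — the type and function names here are illustrative, not the app's API:

```typescript
// Hypothetical classification of a validate_api_key outcome.
type ValidationResult =
  | { kind: "ok" }
  | { kind: "invalid" }
  | { kind: "rate_limited" }
  | { kind: "network_error"; reason: string };

function classifyResponse(status: number): ValidationResult {
  if (status === 401) return { kind: "invalid" };
  if (status === 429) return { kind: "rate_limited" };
  if (status >= 200 && status < 300) return { kind: "ok" };
  return { kind: "network_error", reason: `unexpected status ${status}` };
}

// Messages match the strings the scenarios above assert on.
function validationMessage(result: ValidationResult): string | null {
  switch (result.kind) {
    case "ok":
      return null;
    case "invalid":
      return "Invalid API key. Please check and try again.";
    case "rate_limited":
      return "Rate limited by provider. Please wait and try again.";
    case "network_error":
      return `Network error: ${result.reason}`;
  }
}
```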

8. API key storage — key is not exposed in UI after save

After a successful key save, the key value is not visible in any UI element.

Steps:

  1. Save a valid API key for Anthropic.
  2. Navigate away from Settings and return to Settings → Agent.
  3. Observe the API key row.

Expected: The key input field is hidden (the row shows “Configured” status, not the raw key). No element on the page contains the actual key value. get_settings via the bridge returns api_key_configured: true but no api_key field — the key lives only in the OS keychain.

9. API key removal clears the configured flag

Removing a stored key sets api_key_configured = false and prompts for a new key.

Steps:

  1. Ensure a valid API key is configured for Anthropic.
  2. Navigate to Settings → Agent.
  3. Click “Remove” (or the trash icon) on the API key row.
  4. Confirm if prompted.
  5. Call get_settings via the bridge.

Expected: The bridge response shows agent.api_key_configured = false. The UI transitions to a “not configured” state for the key row, showing a “Configure” button. No key remains in the OS keychain for the Anthropic service.

10. Proactive suggestion interval — valid value is accepted and persisted

The proactive interval slider or input accepts values between 5 and 120 minutes.

Steps:

  1. Navigate to Settings → Agent.
  2. Set the proactive suggestion interval to 15 minutes.
  3. Call get_settings via the bridge.

Expected: The bridge response shows agent.proactive_interval_minutes = 15. The UI reflects the saved value.

11. Proactive suggestion interval — out-of-range values are clamped

Values outside the 5–120 minute range are silently clamped to the nearest boundary.

Steps:

  1. Call update_agent_settings via the bridge (or Tauri) with proactive_interval_minutes: 2 (below minimum).
  2. Call get_settings to read back the stored value.
  3. Repeat with proactive_interval_minutes: 200 (above maximum).

Expected: Reading back after step 2 shows proactive_interval_minutes: 5 (clamped to minimum). Reading back after step 3 shows proactive_interval_minutes: 120 (clamped to maximum). The AgentSettings::set_proactive_interval method applies clamping at the domain layer before persistence.
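The clamping rule is small enough to state as code. This mirrors what the spec attributes to AgentSettings::set_proactive_interval on the Rust side, using the constants from the Test Data table; the TypeScript function itself is only a sketch for test assertions.

```typescript
// Bounds from the Test Data table (min 5, max 120 minutes).
const MIN_PROACTIVE_INTERVAL = 5;
const MAX_PROACTIVE_INTERVAL = 120;

// Clamp an out-of-range interval to the nearest boundary.
function clampProactiveInterval(minutes: number): number {
  return Math.min(MAX_PROACTIVE_INTERVAL, Math.max(MIN_PROACTIVE_INTERVAL, minutes));
}
```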

12. Settings persist across simulated app restart

Agent settings written in one session are available in a subsequent session (tested by reloading settings from storage).

Steps:

  1. Configure agent settings: provider = "open_ai", model = "gpt-4o", proactive_interval_minutes = 45.
  2. Reload the settings by navigating away and back to Settings → Agent (or by calling get_settings via the bridge after the save has committed to disk).
  3. Observe the loaded values.

Expected: The provider selector shows “OpenAI”. The model shows “gpt-4o”. The proactive interval shows 45 minutes. No fields have reverted to defaults. Settings are persisted as JSON atomically (temp file + rename), so they survive crashes between the write and the next read.
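The temp-file + rename strategy mentioned in the expectation can be sketched in a few lines. This is an illustration of the pattern, not the app's SettingsRepository.save() implementation; the function name and paths are assumptions.

```typescript
import { writeFileSync, renameSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Atomic-save sketch: write the full JSON to a sibling temp file first,
// then rename over the target. rename is atomic on POSIX filesystems, so
// a crash between the two steps never leaves a half-written settings file.
function saveSettingsAtomically(path: string, settings: unknown): void {
  const tmpPath = `${path}.tmp`;
  writeFileSync(tmpPath, JSON.stringify(settings, null, 2));
  renameSync(tmpPath, path);
}
```

Readers either see the old complete file or the new complete file, which is why the bridge test can trust get_settings immediately after update_agent_settings returns Ok.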

13. Ollama — status indicator shows running state with version

When Ollama is running on the default endpoint, the status section shows a green indicator and the version string.

Steps:

  1. Ensure Ollama is running on http://localhost:11434.
  2. Navigate to Settings → Agent and select “Ollama (Local)” as the provider.
  3. Observe the [data-testid="ollama-status-section"].

Expected: A green dot and the text “Running (vX.Y.Z)” are visible in [data-testid="ollama-status-text"]. The version string comes from OllamaStatus.version. The “Retry” button remains available even though the status check succeeded.

14. Ollama — status indicator shows not-running state with install prompt

When Ollama is not running, the status shows a red indicator and an install prompt.

Steps:

  1. Ensure Ollama is NOT running (stopped or not installed).
  2. Navigate to Settings → Agent and select “Ollama (Local)”.
  3. Observe the status section.

Expected: A red dot and the text “Not Running” are shown in [data-testid="ollama-status-text"]. The text “— Install Ollama from ollama.com” appears alongside the indicator. The “Retry” button is visible. No model picker is shown (because Ollama is not running).

15. Ollama — model picker shows hardware-aware recommendations

The model picker displays recommended models filtered by the detected hardware tier.

Steps:

  1. Navigate to Settings → Agent with Ollama provider selected and Ollama running.
  2. Observe the [data-testid="ollama-model-picker"].

Expected: The “Recommended for your system” section lists models whose min_tier is at or below the detected hardware tier. Models exceeding the hardware tier are not shown in the recommended section. Each listed model shows its display name, description, and size in GB. If a model is installed, it shows an “Installed” badge ([data-testid="ollama-installed-badge-{name}"]). If it is not installed, a “Download” button ([data-testid="ollama-download-{name}"]) is shown.
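The tier filter asserted above can be sketched as a pure function. The tier names, model list shape, and model names below are assumptions for illustration — the spec only fixes the rule that a model is recommended when its min_tier is at or below the detected hardware tier.

```typescript
// Assumed tier ordering, lowest capability first.
const TIER_ORDER = ["low", "medium", "high"] as const;
type Tier = (typeof TIER_ORDER)[number];

interface RecommendedModel {
  name: string;
  size_gb: number;
  min_tier: Tier;
}

// Keep only models whose minimum tier the detected hardware meets.
function recommendedForTier(models: RecommendedModel[], detected: Tier): RecommendedModel[] {
  const detectedRank = TIER_ORDER.indexOf(detected);
  return models.filter((m) => TIER_ORDER.indexOf(m.min_tier) <= detectedRank);
}
```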

16. Ollama — model download with streaming progress bar

Clicking “Download” on an uninstalled model starts a pull_ollama_model request and shows a streaming progress bar via ollama:pull-progress events.

Steps:

  1. Navigate to Settings → Agent with Ollama provider selected and Ollama running.
  2. Find an uninstalled recommended model in the picker.
  3. Click its “Download” button ([data-testid="ollama-download-{name}"]).
  4. Observe the progress display in [data-testid="ollama-pull-progress-{name}"].

Expected: The “Download” button disappears. A progress section appears showing: (a) the current status text (e.g., “downloading”), (b) a percentage label (e.g., “42%”), and (c) a filled progress bar whose width reflects the percentage. When the download completes (status: "complete"), the progress section disappears and the model shows an “Installed” badge. The model is automatically selected as the active model.
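The event payload shape { model, status, percent, completed, total } from the Notes section suggests a small reducer for the per-model UI state. The state shape and function below are assumptions about how the frontend might fold ollama:pull-progress events, not the actual component code.

```typescript
// Payload shape per the Notes section.
interface PullProgressEvent {
  model: string;
  status: string; // e.g. "downloading", "complete", "error"
  percent: number;
  completed: number;
  total: number;
}

// Hypothetical per-model display state.
type PullState =
  | { kind: "downloading"; status: string; percent: number }
  | { kind: "installed" }
  | { kind: "error"; message: string };

function reducePullEvent(event: PullProgressEvent): PullState {
  if (event.status === "complete") return { kind: "installed" };
  if (event.status === "error") return { kind: "error", message: "Download failed" };
  return { kind: "downloading", status: event.status, percent: event.percent };
}
```

The "downloading" branch is what drives the percentage label and bar width; "complete" swaps the progress section for the “Installed” badge.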

17. Ollama — download error shows retry button

If ollama:pull-progress emits status: "error", an error message and a retry button appear.

Steps:

  1. Initiate a model download that fails (simulate by providing an invalid model name or disconnecting from Ollama mid-pull).
  2. Observe the progress section for the model.

Expected: The progress bar is replaced by a red error message: “Error: Download failed” (or the specific error string from the event). A “Retry” button appears, allowing the user to restart the pull. Clicking Retry clears the error state and initiates a new pull_ollama_model request.

18. Ollama — custom endpoint URL configuration

The “Advanced” section allows changing the Ollama endpoint URL, which is persisted in agent.ollama_url.

Steps:

  1. Navigate to Settings → Agent with Ollama provider selected.
  2. Click “Advanced” ([data-testid="ollama-advanced-toggle"]) to expand the section.
  3. Clear the “Endpoint” input ([data-testid="ollama-url-input"]) and type http://192.168.1.100:11434.
  4. Tab out of the field (triggering the onBlur save).
  5. Call get_settings via the bridge.

Expected: The bridge response shows agent.ollama_url = "http://192.168.1.100:11434". The field is saved on blur, not requiring an explicit save button. The status check uses the updated URL on the next Retry press.

19. Ollama — Test Connection button validates the custom endpoint

The “Test Connection” button in the Advanced section sends check_ollama_status to the custom URL and shows the result inline.

Steps:

  1. Open the Advanced section for Ollama settings.
  2. Change the endpoint URL to a reachable custom address.
  3. Click “Test Connection” ([data-testid="ollama-test-connection"]).
  4. Wait for the result.

Expected: While testing, the button label changes to “Testing…” and is disabled. After the check completes, [data-testid="ollama-test-result"] appears with either “Connected (vX.Y.Z)” in green (if reachable) or “Connection failed” in red (if not). The main status section is not affected by this test.

20. Ollama — API key row is hidden when Ollama is selected

Ollama is a keyless provider. The API key configuration row must not appear when Ollama is the active provider.

Steps:

  1. Select “Ollama (Local)” as the provider.
  2. Observe the settings form.

Expected: No “API Key” row, “Configure” button, or key-related UI element is visible. The Ollama status section and model picker take the place of the key configuration row. api_key_configured in settings remains false for Ollama because no key is needed.

21. OpenRouter — OAuth PKCE flow initiation

Selecting OpenRouter as provider and clicking “Connect via OpenRouter” starts the start_openrouter_auth command and opens the authorization URL in the system browser.

Steps:

  1. Select “OpenRouter” as the provider.
  2. Observe the authentication UI (OAuth button vs. manual key input).
  3. Click “Connect via OpenRouter” (the primary auth path).

Expected: start_openrouter_auth is called on the backend. The system browser opens to an OpenRouter authorization URL (containing a PKCE challenge). The UI shows an “Authorization in progress…” state. A “Cancel” button is available. The app listens for the inklings://auth/openrouter deep link (per the Notes section) to complete the flow.
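For reviewers unfamiliar with PKCE, the challenge generation can be sketched in a few lines of Node. This is a generic S256 sketch of what start_openrouter_auth presumably does on the Rust side; the function names are illustrative, not the app's API.

```typescript
import { createHash, randomBytes } from "node:crypto";

// base64url encoding: standard base64 with URL-safe characters, no padding.
function base64Url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// PKCE pair: a random code_verifier and its S256 code_challenge. The
// challenge goes in the authorization URL; the verifier is sent later
// when exchanging the callback code for a key.
function createPkcePair(): { verifier: string; challenge: string } {
  const verifier = base64Url(randomBytes(32)); // 43-char verifier
  const challenge = base64Url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```

Because only the hash travels in the browser URL, an attacker who observes the redirect cannot complete the exchange without the verifier held in the app.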

22. OpenRouter — OAuth cancellation clears pending state

Clicking “Cancel” during the OpenRouter OAuth flow resets the UI without storing a key.

Steps:

  1. Start the OpenRouter OAuth flow (as in scenario 21).
  2. Click “Cancel” before completing authorization in the browser.

Expected: The UI returns to the OpenRouter configuration state without marking a key as configured; agent.api_key_configured remains false. No listener for the inklings://auth/openrouter deep-link callback remains active.

23. Default provider when no keys are configured

When agent settings exist but no API key is configured and the provider is not Ollama, the agent starts with StubLlmProvider and returns a “not configured” error to the user.

Steps:

  1. Configure agent settings with provider: "anthropic", api_key_configured: false.
  2. Start the agent harness via start_agent.
  3. Send a message to the agent.

Expected: The agent responds with a “not configured” error message in the conversation thread (text from StubLlmProvider). No LlmError::AuthFailure is swallowed silently. The agent status does not enter an Error state — the stub returns a Provider error that the harness handles gracefully.

24. Capability defaults — pre-configured capabilities applied on first setup

When the guided setup flow initializes agent settings, default capability grants are applied from DEFAULT_CAPABILITIES.

Steps:

  1. Start from unconfigured state and complete the guided setup flow for any provider.
  2. Call get_settings via the bridge.
  3. Inspect agent.capabilities.

Expected: The capabilities map contains at minimum: PagesRead: true, SearchUse: true, TagsRead: true. Write capabilities (PagesWrite, PagesDelete, PagesOrganize, AttachmentsWrite, TagsWrite) default to false, requiring explicit user grants. The defaults match the DEFAULT_CAPABILITIES constant in AgentSettings.tsx.
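The defaults asserted above can be mirrored as a constant for test use. The real DEFAULT_CAPABILITIES lives in AgentSettings.tsx per the spec; this copy is an assumption-laden sketch for assertions only, and the helper function is hypothetical.

```typescript
// Mirror of the capability defaults described in the scenario: read-type
// capabilities on, write-type capabilities off until explicitly granted.
const DEFAULT_CAPABILITIES: Record<string, boolean> = {
  PagesRead: true,
  SearchUse: true,
  TagsRead: true,
  PagesWrite: false,
  PagesDelete: false,
  PagesOrganize: false,
  AttachmentsWrite: false,
  TagsWrite: false,
};

// Predicate used by tests: every write capability must start disabled.
function writeCapabilitiesDisabled(caps: Record<string, boolean>): boolean {
  return ["PagesWrite", "PagesDelete", "PagesOrganize", "AttachmentsWrite", "TagsWrite"]
    .every((c) => caps[c] === false);
}
```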

Test Data

| Key | Value | Notes |
| --- | --- | --- |
| anthropic_provider | anthropic | Serialized value in settings JSON (snake_case from Specta) |
| openai_provider | open_ai | Serialized value — note underscore, not camelCase |
| xai_provider | xai | Serialized value |
| ollama_provider | ollama | Keyless local provider |
| openrouter_provider | open_router | Serialized value |
| default_ollama_url | http://localhost:11434 | Default Ollama endpoint, set by default_ollama_url() in domain |
| default_proactive_interval | 30 | Minutes; constant DEFAULT_PROACTIVE_INTERVAL in domain |
| min_proactive_interval | 5 | Minimum; values below are clamped to 5 |
| max_proactive_interval | 120 | Maximum; values above are clamped to 120 |
| default_capabilities_on | PagesRead, SearchUse, TagsRead | Capabilities that default to true in DEFAULT_CAPABILITIES |
| default_capabilities_off | PagesWrite, PagesDelete, etc. | Capabilities that default to false |
| ollama_status_testid | ollama-status-section | Container for the running/not-running indicator |
| ollama_status_text_testid | ollama-status-text | Span showing “Running (vX.Y.Z)” or “Not Running” |
| ollama_model_picker_testid | ollama-model-picker | Model list container |
| ollama_advanced_toggle_testid | ollama-advanced-toggle | Button that expands the custom endpoint section |
| ollama_url_input_testid | ollama-url-input | Text input for the custom Ollama endpoint |
| ollama_test_conn_testid | ollama-test-connection | Button that calls check_ollama_status against the custom URL |
| ollama_test_result_testid | ollama-test-result | Inline result span after Test Connection completes |
| keychain_service_anthropic | inklings-agent-anthropic | OS keychain service name for Anthropic keys |
| keychain_service_openai | inklings-agent-openai | OS keychain service name for OpenAI keys |
| keychain_service_openrouter | inklings-agent-openrouter | OS keychain service name for OpenRouter keys |
| settings_schema_version | 9 | Current schema version; v9 added OpenRouter |

Notes

  • The HTTP bridge exposes agent settings commands including get_settings, update_agent_settings, and the full suite of agent configuration handlers. Ollama-specific routes (check_ollama_status, get_ollama_models, pull_ollama_model, get_recommended_models, get_system_capabilities) and OpenRouter OAuth routes (start_openrouter_auth, complete_openrouter_auth) are also wired into the bridge. OS keychain operations (set_api_key, remove_api_key, validate_api_key) are exercised via the frontend UI path.
  • The API key is NEVER written to the settings JSON file. After set_api_key succeeds, get_settings returns agent.api_key_configured = true but contains no api_key field. Tests must not assert the presence of the raw key in any JSON or DOM element.
  • AgentProvider serializes in snake_case (Specta convention from Rust). The values are "anthropic", "open_ai", "xai", "ollama", "open_router". Tests that inspect get_settings JSON must use these exact string values, not the display labels (“OpenAI”, “xAI”).
  • The Ollama provider does not use the api_key_configured path. build_llm_provider in agent.rs checks provider == Ollama before the keychain path and constructs an Ollama provider directly using ollama_url and model.
  • validate_api_key makes a real network call to the provider’s endpoint. E2E tests relying on this command need either a live key or a mock server that returns the expected HTTP response codes (401 for Invalid, 429 for RateLimited).
  • The pull_ollama_model command is fire-and-forget: it returns immediately and emits ollama:pull-progress Tauri events asynchronously. Progress event payloads contain { model, status, percent, completed, total }. The final event has status: "complete" or status: "error".
  • Settings are persisted atomically via SettingsRepository.save() using a temp file + rename strategy. The bridge test should call get_settings only after the update_agent_settings command has returned Ok to guarantee the file has been written.
  • OpenRouter OAuth uses a PKCE flow (start_openrouter_auth → browser redirect → inklings://auth/openrouter?code=<code> deep link → complete_openrouter_auth). End-to-end testing of this flow requires a real OpenRouter account and system browser interaction. Bridge-only tests should verify the command contract for complete_openrouter_auth (rejects calls without a prior start_openrouter_auth).
