Agent Settings
Covers the full agent configuration surface: LLM provider selection (Anthropic, OpenAI, xAI, Ollama, OpenRouter), model
switching, API key lifecycle (validate, store, hide, remove), proactive suggestion interval, capability defaults, and
the guided first-time setup flow. Ollama-specific behaviors are given dedicated coverage: status indicator,
hardware-aware recommendations, model picker with “Installed” badges, streaming download with progress bar, download
error and retry, custom endpoint URL, and the Test Connection button. This spec is P1 because a misconfigured provider
silently falls back to StubLlmProvider, producing unhelpful “not configured” errors that the user cannot diagnose from
the UI.
Settings are persisted as JSON via update_agent_settings. API keys are stored exclusively in the OS keychain — the
settings file only records a boolean api_key_configured flag. The bridge exposes get_settings (used for post-save
assertions), settings-mutation commands, and agent-specific commands including the Ollama and OpenRouter routes. OS
keychain operations (set_api_key, remove_api_key, validate_api_key) are exercised through the Tauri frontend UI path.
Preconditions
- HTTP bridge running on port 9990
- A workspace initialized via initialize_workspace before each scenario (agent settings are global, not workspace-scoped, but a workspace must be open for agent commands to fire)
- Bridge shim injected via playwright.config.ts
Scenarios
Seed: seed.spec.ts
1. First-time setup — guided flow presented when agent is unconfigured
On first open, when get_agent_settings returns null, the settings panel shows the guided setup flow instead of the
configuration form.
Steps:
- Open the app with a fresh settings file (no prior agent configuration).
- Navigate to Settings → Agent.
- Observe the panel.
Expected: The [data-testid="agent-settings-setup"] element is visible. A “Set up AI agent” heading is shown. Two
setup-path cards are visible: “Local Model (Free)” ([data-testid="setup-path-local"]) and at least one cloud provider
option. The configuration form (provider selector, API key row, model selector) is not shown at this stage.
2. First-time setup — choosing Local Model initializes Ollama provider
Clicking the “Local Model” card transitions to step 1 of the guided flow and initializes agent settings with
provider: "ollama".
Steps:
- Start from the unconfigured state (guided flow visible).
- Click the “Local Model (Free)” card ([data-testid="setup-path-local"]).
- Observe the panel and call get_settings via the bridge after the click.
Expected: update_agent_settings is called with a settings object where provider = "ollama". The bridge’s
get_settings response shows agent.provider = "ollama". The panel transitions away from the path-choice step. The
Ollama status section becomes visible.
3. Switch LLM provider from Anthropic to OpenAI
Changing the provider dropdown updates the persisted agent.provider field.
Steps:
- Ensure agent settings are configured with provider: "anthropic" (via update_agent_settings or a prior test).
- Navigate to Settings → Agent.
- Change the provider selector to “OpenAI”.
- Observe the auto-save or save action.
- Call get_settings via the bridge to confirm persistence.
Expected: The bridge’s get_settings response shows agent.provider = "open_ai" (snake_case as produced by Specta
from AgentProvider::OpenAi). The model selector updates to show OpenAI models. The API key row prompts for an OpenAI
key (showing a “Configure” button if no key is set, or “Configured” status if one exists).
4. Switch model within a provider
Changing the model selection within the same provider updates only agent.model.
Steps:
- Ensure agent settings are configured with provider: "anthropic" and model: "claude-sonnet-4-6".
- Navigate to Settings → Agent.
- Change the model selector to a different Anthropic model (e.g., claude-opus-4-6).
- Call get_settings via the bridge.
Expected: The bridge response shows agent.model = "claude-opus-4-6". The provider remains "anthropic". The API
key status is unchanged.
5. API key validation — invalid key is rejected with clear error
Submitting an invalid API key triggers validation before storage and displays a clear error message.
Steps:
- Navigate to Settings → Agent with an Anthropic provider selected and no key configured.
- Click “Configure” to show the key input field.
- Type a clearly invalid key (e.g., bad-key-123).
- Click “Save” (or the confirm button).
- Observe the error display.
Expected: The key is NOT stored. api_key_configured remains false in settings. The UI shows an error message:
“Invalid API key. Please check and try again.” (or equivalent). The key input field remains visible so the user can
correct the value.
6. API key validation — rate-limited response is surfaced to user
When the provider returns a rate-limit response during validation, a specific message is shown.
Steps:
- Configure a provider and enter a key that triggers a rate-limit response (simulate via test fixture or mock).
- Click “Save” on the key input.
Expected: The error message reads “Rate limited by provider. Please wait and try again.” api_key_configured
remains false. The key is not stored in the keychain.
7. API key validation — network error is surfaced to user
When validate_api_key returns NetworkError(reason), the reason is shown in the UI.
Steps:
- Enter a key while the provider endpoint is unreachable (e.g., device offline or test fixture).
- Click “Save”.
Expected: The UI shows a message like “Network error: connection refused” (or the specific reason from the
NetworkError variant). The key is not stored.
8. API key storage — key is not exposed in UI after save
After a successful key save, the key value is not visible in any UI element.
Steps:
- Save a valid API key for Anthropic.
- Navigate away from Settings and return to Settings → Agent.
- Observe the API key row.
Expected: The key input field is hidden (the row shows “Configured” status, not the raw key). No element on the page
contains the actual key value. get_settings via the bridge returns api_key_configured: true but no api_key field —
the key lives only in the OS keychain.
9. API key removal clears the configured flag
Removing a stored key sets api_key_configured = false and prompts for a new key.
Steps:
- Ensure a valid API key is configured for Anthropic.
- Navigate to Settings → Agent.
- Click “Remove” (or the trash icon) on the API key row.
- Confirm if prompted.
- Call get_settings via the bridge.
Expected: The bridge response shows agent.api_key_configured = false. The UI transitions to a “not configured”
state for the key row, showing a “Configure” button. No key remains in the OS keychain for the Anthropic service.
10. Proactive suggestion interval — valid value is accepted and persisted
The proactive interval slider or input accepts values between 5 and 120 minutes.
Steps:
- Navigate to Settings → Agent.
- Set the proactive suggestion interval to 15 minutes.
- Call get_settings via the bridge.
Expected: The bridge response shows agent.proactive_interval_minutes = 15. The UI reflects the saved value.
11. Proactive suggestion interval — out-of-range values are clamped
Values outside the 5–120 minute range are silently clamped to the nearest boundary.
Steps:
- Call update_agent_settings via the bridge (or Tauri) with proactive_interval_minutes: 2 (below minimum).
- Call get_settings to read back the stored value.
- Repeat with proactive_interval_minutes: 200 (above maximum).
Expected: Reading back after step 2 shows proactive_interval_minutes: 5 (clamped to minimum). Reading back after
step 3 shows proactive_interval_minutes: 120 (clamped to maximum). The AgentSettings::set_proactive_interval method
applies clamping at the domain layer before persistence.
12. Settings persist across simulated app restart
Agent settings written in one session are available in a subsequent session (tested by reloading settings from storage).
Steps:
- Configure agent settings: provider = "open_ai", model = "gpt-4o", proactive_interval_minutes = 45.
- Reload the settings by navigating away and back to Settings → Agent (or by calling get_settings via the bridge after the save has committed to disk).
- Observe the loaded values.
Expected: The provider selector shows “OpenAI”. The model shows “gpt-4o”. The proactive interval shows 45 minutes. No fields have reverted to defaults. Settings are persisted as JSON atomically (temp file + rename), so they survive crashes between the write and the next read.
13. Ollama — status indicator shows running state with version
When Ollama is running on the default endpoint, the status section shows a green indicator and the version string.
Steps:
- Ensure Ollama is running on http://localhost:11434.
- Navigate to Settings → Agent and select “Ollama (Local)” as the provider.
- Observe the [data-testid="ollama-status-section"] element.
Expected: A green dot and the text “Running (vX.Y.Z)” are visible in [data-testid="ollama-status-text"]. The
version string comes from OllamaStatus.version. The “Retry” button is available but the status shows success.
14. Ollama — status indicator shows not-found state with install link
When Ollama is not running, the status shows a red indicator and an install prompt.
Steps:
- Ensure Ollama is NOT running (stopped or not installed).
- Navigate to Settings → Agent and select “Ollama (Local)”.
- Observe the status section.
Expected: A red dot and the text “Not Running” are shown in [data-testid="ollama-status-text"]. The text “—
Install Ollama from ollama.com” appears alongside the indicator. The “Retry” button is visible. No model picker is shown
(because Ollama is not running).
15. Ollama — hardware tier detection and recommended models
The model picker displays recommended models filtered by the detected hardware tier.
Steps:
- Navigate to Settings → Agent with Ollama provider selected and Ollama running.
- Observe the [data-testid="ollama-model-picker"] element.
Expected: The “Recommended for your system” section lists models whose min_tier is at or below the detected
hardware tier. Models exceeding the hardware tier are not shown in the recommended section. Each listed model shows its
display name, description, and size in GB. If a model is installed, it shows an “Installed” badge
([data-testid="ollama-installed-badge-{name}"]). If it is not installed, a “Download” button
([data-testid="ollama-download-{name}"]) is shown.
16. Ollama — model download with streaming progress bar
Clicking “Download” on an uninstalled model starts a pull_ollama_model request and shows a streaming progress bar via
ollama:pull-progress events.
Steps:
- Navigate to Settings → Agent with Ollama provider selected and Ollama running.
- Find an uninstalled recommended model in the picker.
- Click its “Download” button ([data-testid="ollama-download-{name}"]).
- Observe the progress display in [data-testid="ollama-pull-progress-{name}"].
Expected: The “Download” button disappears. A progress section appears showing: (a) the current status text (e.g.,
“downloading”), (b) a percentage label (e.g., “42%”), and (c) a filled progress bar whose width reflects the percentage.
When the download completes (status: "complete"), the progress section disappears and the model shows an “Installed”
badge. The model is automatically selected as the active model.
17. Ollama — download error shows retry button
If ollama:pull-progress emits status: "error", an error message and a retry button appear.
Steps:
- Initiate a model download that fails (simulate by providing an invalid model name or disconnecting from Ollama mid-pull).
- Observe the progress section for the model.
Expected: The progress bar is replaced by a red error message: “Error: Download failed” (or the specific error
string from the event). A “Retry” button appears, allowing the user to restart the pull. Clicking Retry clears the error
state and initiates a new pull_ollama_model request.
18. Ollama — custom endpoint URL configuration
The “Advanced” section allows changing the Ollama endpoint URL, which is persisted in agent.ollama_url.
Steps:
- Navigate to Settings → Agent with Ollama provider selected.
- Click “Advanced” ([data-testid="ollama-advanced-toggle"]) to expand the section.
- Clear the “Endpoint” input ([data-testid="ollama-url-input"]) and type http://192.168.1.100:11434.
- Tab out of the field (triggering the onBlur save).
- Call get_settings via the bridge.
Expected: The bridge response shows agent.ollama_url = "http://192.168.1.100:11434". The field is saved on blur,
not requiring an explicit save button. The status check uses the updated URL on the next Retry press.
19. Ollama — Test Connection button validates the custom endpoint
The “Test Connection” button in the Advanced section sends check_ollama_status to the custom URL and shows the result
inline.
Steps:
- Open the Advanced section for Ollama settings.
- Change the endpoint URL to a reachable custom address.
- Click “Test Connection” ([data-testid="ollama-test-connection"]).
- Wait for the result.
Expected: While testing, the button label changes to “Testing…” and is disabled. After the check completes,
[data-testid="ollama-test-result"] appears with either “Connected (vX.Y.Z)” in green (if reachable) or “Connection
failed” in red (if not). The main status section is not affected by this test.
20. Ollama — API key row is hidden when Ollama is selected
Ollama is a keyless provider. The API key configuration row must not appear when Ollama is the active provider.
Steps:
- Select “Ollama (Local)” as the provider.
- Observe the settings form.
Expected: No “API Key” row, “Configure” button, or key-related UI element is visible. The Ollama status section and
model picker take the place of the key configuration row. api_key_configured in settings remains false for Ollama
because no key is needed.
21. OpenRouter — OAuth PKCE flow initiation
Selecting OpenRouter as provider and clicking “Connect via OpenRouter” starts the start_openrouter_auth command and
opens the authorization URL in the system browser.
Steps:
- Select “OpenRouter” as the provider.
- Observe the authentication UI (OAuth button vs. manual key input).
- Click “Connect via OpenRouter” (the primary auth path).
Expected: start_openrouter_auth is called on the backend. The system browser opens to an OpenRouter authorization
URL (containing a PKCE challenge). The UI shows an “Authorization in progress…” state. A “Cancel” button is available.
The app listens for an openrouter://auth-callback deep link to complete the flow.
22. OpenRouter — OAuth cancellation clears pending state
Clicking “Cancel” during the OpenRouter OAuth flow resets the UI without storing a key.
Steps:
- Start the OpenRouter OAuth flow (as in scenario 21).
- Click “Cancel” before completing authorization in the browser.
Expected: The UI returns to the OpenRouter configuration state without a stored key.
agent.api_key_configured remains false. No listener for the openrouter://auth-callback event remains active.
23. Default provider when no keys are configured
When agent settings exist but no API key is configured and the provider is not Ollama, the agent starts with
StubLlmProvider and returns a “not configured” error to the user.
Steps:
- Configure agent settings with provider: "anthropic", api_key_configured: false.
- Start the agent harness via start_agent.
- Send a message to the agent.
Expected: The agent responds with a “not configured” error message in the conversation thread (text from
StubLlmProvider). No LlmError::AuthFailure is swallowed silently. The agent status does not enter an Error state —
the stub returns a Provider error that the harness handles gracefully.
24. Capability defaults — pre-configured capabilities applied on first setup
When the guided setup flow initializes agent settings, default capability grants are applied from
DEFAULT_CAPABILITIES.
Steps:
- Start from unconfigured state and complete the guided setup flow for any provider.
- Call get_settings via the bridge.
- Inspect agent.capabilities.
Expected: The capabilities map contains at minimum: PagesRead: true, SearchUse: true, TagsRead: true. Write
capabilities (PagesWrite, PagesDelete, PagesOrganize, AttachmentsWrite, TagsWrite) default to false,
requiring explicit user grants. The defaults match the DEFAULT_CAPABILITIES constant in AgentSettings.tsx.
Test Data
| Key | Value | Notes |
|---|---|---|
| anthropic_provider | anthropic | Serialized value in settings JSON (snake_case from Specta) |
| openai_provider | open_ai | Serialized value — note underscore, not camelCase |
| xai_provider | xai | Serialized value |
| ollama_provider | ollama | Keyless local provider |
| openrouter_provider | open_router | Serialized value |
| default_ollama_url | http://localhost:11434 | Default Ollama endpoint, set by default_ollama_url() in domain |
| default_proactive_interval | 30 | Minutes; constant DEFAULT_PROACTIVE_INTERVAL in domain |
| min_proactive_interval | 5 | Minimum; values below are clamped to 5 |
| max_proactive_interval | 120 | Maximum; values above are clamped to 120 |
| default_capabilities_on | PagesRead, SearchUse, TagsRead | Capabilities that default to true in DEFAULT_CAPABILITIES |
| default_capabilities_off | PagesWrite, PagesDelete, etc. | Capabilities that default to false |
| ollama_status_testid | ollama-status-section | Container for the running/not-running indicator |
| ollama_status_text_testid | ollama-status-text | Span showing “Running (vX.Y.Z)” or “Not Running” |
| ollama_model_picker_testid | ollama-model-picker | Model list container |
| ollama_advanced_toggle_testid | ollama-advanced-toggle | Button that expands the custom endpoint section |
| ollama_url_input_testid | ollama-url-input | Text input for the custom Ollama endpoint |
| ollama_test_conn_testid | ollama-test-connection | Button that calls check_ollama_status against the custom URL |
| ollama_test_result_testid | ollama-test-result | Inline result span after Test Connection completes |
| keychain_service_anthropic | inklings-agent-anthropic | OS keychain service name for Anthropic keys |
| keychain_service_openai | inklings-agent-openai | OS keychain service name for OpenAI keys |
| keychain_service_openrouter | inklings-agent-openrouter | OS keychain service name for OpenRouter keys |
| settings_schema_version | 9 | Current schema version; v9 added OpenRouter |
Notes
- The HTTP bridge exposes agent settings commands including get_settings, update_agent_settings, and the full suite of agent configuration handlers. Ollama-specific routes (check_ollama_status, get_ollama_models, pull_ollama_model, get_recommended_models, get_system_capabilities) and OpenRouter OAuth routes (start_openrouter_auth, complete_openrouter_auth) are also wired into the bridge. OS keychain operations (set_api_key, remove_api_key, validate_api_key) are exercised via the frontend UI path.
- The API key is NEVER written to the settings JSON file. After set_api_key succeeds, get_settings returns agent.api_key_configured = true but contains no api_key field. Tests must not assert the presence of the raw key in any JSON or DOM element.
- AgentProvider serializes in snake_case (Specta convention from Rust). The values are "anthropic", "open_ai", "xai", "ollama", "open_router". Tests that inspect get_settings JSON must use these exact string values, not the display labels (“OpenAI”, “xAI”).
- The Ollama provider does not use the api_key_configured path. build_llm_provider in agent.rs checks provider == Ollama before the keychain path and constructs an Ollama provider directly using ollama_url and model.
- validate_api_key makes a real network call to the provider’s endpoint. E2E tests relying on this command need either a live key or a mock server that returns the expected HTTP response codes (401 for Invalid, 429 for RateLimited).
- The pull_ollama_model command is fire-and-forget: it returns immediately and emits ollama:pull-progress Tauri events asynchronously. Progress event payloads contain { model, status, percent, completed, total }. The final event has status: "complete" or status: "error".
- Settings are persisted atomically via SettingsRepository.save() using a temp file + rename strategy. The bridge test should call get_settings only after the update_agent_settings command has returned Ok to guarantee the file has been written.
- OpenRouter OAuth uses a PKCE flow (start_openrouter_auth → browser redirect → inklings://auth/openrouter?code=<code> deep link → complete_openrouter_auth). End-to-end testing of this flow requires a real OpenRouter account and system browser interaction. Bridge-only tests should verify the command contract for complete_openrouter_auth (rejects calls without a prior start_openrouter_auth).