Development Guide
Last Updated: March 2026
Context: Rust/Tauri desktop application with React frontend
Development Philosophy
Vertical Slicing
All feature development follows vertical slicing methodology: implement complete user stories end-to-end through all architectural layers before moving to the next feature.
Principle: Build by story, not by layer.
Why Vertical Slicing?
Vertical (Story-First) vs Horizontal (Layer-First)
Vertical Slicing:
```
Story 1: "User can view a page"
✅ Domain → Application → Tauri Commands → Frontend
✅ Deploy Story 1 → Get feedback → Story 2
Result: Working feature every iteration
```

Horizontal Slicing (Avoid):

```
Sprint 1: Build all entities
Sprint 2: Build all repositories   ❌ Can't test yet
Sprint 3: Build all use cases      ❌ Still can't test
Sprint 4: Build all commands       ❌ Integration issues discovered late
Sprint 5: Fix integration          ❌ Rework
```

Benefits
- Early Integration Detection: Discover interface mismatches immediately
- Incremental User Value: Each slice is demo-able and potentially shippable
- Reduced Work-in-Progress: Finish one thing before starting another
- Faster Feedback Cycles: Validate early
- Risk Reduction: Integration isn’t a “big bang” at the end
For a worked example of vertical slicing, see Your First Feature.
Systems-First Principle
When identifying a need or use case, translate it into the simplest system that meets the need and aligns with long-term vision.
| Approach | Example | Result |
|---|---|---|
| Feature-first (avoid) | “We need Character Management” | Character-specific code |
| Systems-first (preferred) | “We need a Template System” | Reusable infrastructure |
The specific use case (Character Template) becomes how we verify the system works, not the system itself.
Why This Matters
- Reusability: Template System serves characters, locations, factions, events—not just characters
- Coherent Architecture: Systems compose cleanly; features accumulate complexity
- Long-term Vision: Systems align with product direction; features solve immediate problems
Application
When scoping work:
- Identify the need: “Users need to create characters with rich layouts”
- Find the system: “This requires a Template System with layout support”
- Define verification: “Character Template proves the system works”
- Scope MVP system: Simplest Template System that enables the verification use case
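To make the distinction concrete, here is a minimal Rust sketch of the systems-first shape: a generic template abstraction that the character use case merely verifies. All names here are illustrative, not the real Inklings types.

```rust
// Illustrative only: a Template *system* that any entity kind can plug
// into; the character use case is the first implementation that
// verifies the system works.
pub trait Template {
    /// Entity kind this template lays out ("character", "location", ...).
    fn kind(&self) -> &'static str;
    /// Render the template into a displayable layout.
    fn render(&self) -> String;
}

pub struct CharacterTemplate {
    pub name: String,
}

impl Template for CharacterTemplate {
    fn kind(&self) -> &'static str {
        "character"
    }

    fn render(&self) -> String {
        format!("Character sheet for {}", self.name)
    }
}
```

A feature-first version would hard-code character fields throughout the stack; the trait keeps locations, factions, and events on the same infrastructure.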
Implementation Patterns
Pattern 1: Entity + Read Infrastructure
Use When: Implementing the first instance of a new aggregate root (an aggregate root is the top-level entity that owns and coordinates access to a cluster of related domain objects — a Domain-Driven Design concept).
Layers:
- Domain (`crates/domain/src/`): Entity struct with validation (sketched after this list)
- Application (`crates/application/src/`): Use case + service abstractions
- Infrastructure (`crates/infrastructure/sqlite/src/`): Storage implementation
- Framework (`apps/desktop/src-tauri/`): Tauri command
- Frontend (`apps/desktop/src-react/`): React component
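As a rough illustration of the domain slice of this pattern, here is a minimal sketch of an entity that validates in its constructor. The `Location` entity and its error type are hypothetical, not real Inklings types.

```rust
// Hypothetical entity: validation lives on the struct itself, so every
// layer above the domain inherits the same rules.
#[derive(Debug)]
pub struct Location {
    pub name: String,
}

#[derive(Debug, PartialEq)]
pub enum LocationError {
    EmptyName,
}

impl Location {
    pub fn new(name: &str) -> Result<Self, LocationError> {
        let name = name.trim();
        if name.is_empty() {
            return Err(LocationError::EmptyName);
        }
        Ok(Self { name: name.to_string() })
    }
}
```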
Acceptance Criteria:
- Can create entity via test data
- Can fetch entity via Tauri command
- Frontend displays entity
- All tests passing
Pattern 2: Dependent Entity System
Use When: Adding content or details to an existing aggregate root.
Example: Adding Blocks to Pages
Before:
{ "id": "123", "title": "Chapter 1" }After:
{ "id": "123", "title": "Chapter 1", "blocks": [ { "id": "b1", "content": "It was a dark and stormy night..." } ]}Pattern 3: Full CRUD Operation
Use When: Adding Create, Update, or Delete to existing entities.
Layers:
- Domain: Entity methods (e.g., `update_title()`)
- Application: New use case with request/response types (sketched after this list)
- Infrastructure: Storage method
- Framework: Tauri command with proper error handling
- Frontend: UI for the operation
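To show the request/response convention, here is a hedged sketch of one update use case. The type names follow examples used elsewhere in this guide, but the fields and error handling are simplified assumptions, not the real signatures.

```rust
// Sketch of the application-layer shape for one CRUD operation.
pub struct UpdatePageTitleRequest {
    pub page_id: u64,
    pub new_title: String,
}

pub struct UpdatePageTitleResponse {
    pub page_id: u64,
    pub title: String,
}

pub struct UpdatePageTitleUseCase;

impl UpdatePageTitleUseCase {
    pub fn execute(
        &self,
        req: UpdatePageTitleRequest,
    ) -> Result<UpdatePageTitleResponse, String> {
        if req.new_title.trim().is_empty() {
            return Err("title must not be empty".into());
        }
        // Real code loads the entity via the repository trait, calls
        // page.update_title(), and persists through the storage layer.
        Ok(UpdatePageTitleResponse {
            page_id: req.page_id,
            title: req.new_title,
        })
    }
}
```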
File-Per-Use-Case Pattern
Each bounded context (a bounded context is a logical boundary within which a particular domain model applies consistently — a Domain-Driven Design concept) gets its own module with use cases and service abstractions:
```
crates/application/src/
├── workspace/
│   ├── mod.rs          # Re-exports
│   ├── services.rs     # WorkspaceRepository trait + errors
│   └── initialize.rs   # InitializeWorkspaceUseCase
├── page/
│   ├── mod.rs          # Re-exports
│   ├── services.rs     # PageRepository trait + errors
│   ├── get.rs          # GetPageUseCase
│   ├── create.rs       # CreatePageUseCase
│   └── update.rs       # UpdatePageUseCase
```

Why?
- Cohesion: Use cases and their service abstractions colocated
- Low Coupling: Changes to one context don’t affect others
- Clear Boundaries: Each bounded context is independent
- Easy Testing: Mock only what this use case needs
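To make the layout concrete, here is a hedged sketch of what `page/services.rs` and `page/get.rs` might contain, with deliberately simplified signatures (the real traits work with richer types):

```rust
// page/services.rs (sketch): the repository trait and its errors are
// colocated with the use cases that depend on them.
#[derive(Debug)]
pub enum PageRepoError {
    Storage(String),
}

pub trait PageRepository {
    fn get_title(&self, id: u64) -> Result<Option<String>, PageRepoError>;
}

// page/get.rs (sketch): one file, one use case.
pub struct GetPageUseCase<R: PageRepository> {
    repo: R,
}

impl<R: PageRepository> GetPageUseCase<R> {
    pub fn new(repo: R) -> Self {
        Self { repo }
    }

    pub fn execute(&self, id: u64) -> Result<Option<String>, PageRepoError> {
        self.repo.get_title(id)
    }
}
```

Testing a use case then only requires a fake `PageRepository`, which is the "mock only what this use case needs" benefit above.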
Quality Checkpoints
Before marking a vertical slice as complete:
- Functionality: Works end-to-end (can demo)
- All Layers: Domain, Application, Infrastructure, Framework, Frontend
- Tests: Unit tests at each layer, integration test for full flow
- Dependencies: Flow inward only (Framework → Application → Domain)
- Code Quality: `cargo clippy`, `cargo fmt`, TypeScript checks pass
- Committed: Changes committed to git with descriptive message
- Linear Updated: Associated issue marked Done
Definition of Done: User can perform the story’s action in the app, code is committed, and Linear is updated.
Additional Checklist for Data-Mutating Commands
For any command that modifies workspace state, verify the following before marking done:
- Permission Guard: `guard.require(Capability::Xxx)` is called in every use case that creates, updates, or deletes data. Use `state.resolve_owner_guard()?` in the Tauri command to produce the guard. (A stubbed sketch follows this checklist.)
- Event Log: `WriteEffectCoordinator` records events for all mutations to pages, blocks, tags, and attachments. Non-structural operations (bookmarks, settings) do not require event log entries.
- Side Effects: Verify that mutating operations notify downstream subsystems via `WriteEffectCoordinator`: sync queue entry created, embedding pipeline triggered (for text changes).
- Production Readiness: Review the new code against the checklist in `docs/solutions/patterns/production-readiness-review-checklist.md`. Key anti-patterns: unbounded channels, per-row transactions, unbounded result sets, blocking `Drop`, per-request reconstruction of expensive objects, missing input validation at system boundaries.
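Here is how these pieces might fit together in a mutating use case. `Capability`, the guard, and `WriteEffectCoordinator` are real names from this checklist, but every type below is a stand-in and the signatures are assumptions:

```rust
// Stand-in types; the real ones live in the application/framework layers.
struct Guard;

enum Capability {
    UpdatePage,
}

impl Guard {
    fn require(&self, _cap: Capability) -> Result<(), String> {
        Ok(()) // the real guard checks the caller's permissions
    }
}

struct WriteEffectCoordinator;

impl WriteEffectCoordinator {
    // Stands in for the event-log entry, sync queue entry, and
    // embedding-pipeline trigger described above.
    fn record_page_update(&self, _page_id: u64) {}
}

fn update_page_title(
    guard: &Guard,
    effects: &WriteEffectCoordinator,
    page_id: u64,
    new_title: &str,
) -> Result<(), String> {
    // 1. Permission guard first: every mutating use case checks a capability.
    guard.require(Capability::UpdatePage)?;

    // 2. Validate input at the system boundary.
    if new_title.trim().is_empty() {
        return Err("title must not be empty".into());
    }

    // ... persist the change via the repository ...

    // 3. Downstream effects go through the coordinator, not ad-hoc calls.
    effects.record_page_update(page_id);
    Ok(())
}
```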
Commit Discipline for Feature Phases
When implementing large features (epics with multiple stories), follow this commit workflow:
Commit After Each Story
DO NOT batch all commits at the end of a feature phase. DO commit each story independently as it completes.
```
Epic: Import System
├── Import markdown folder
│   └── commit: feat(import): implement markdown folder import
├── Import Obsidian vault
│   └── commit: feat(import): add wiki-link conversion
├── Preview and conflicts
│   └── commit: feat(import): add preview and conflict resolution
└── Progress reporting
    └── commit: feat(import): add progress reporting
```

Why This Matters
- Incremental Progress Tracking: Git history shows clear progression
- Risk Minimization: Smaller commits = less work lost if issues arise
- Clear Attribution: Each story has its own commit for traceability
- Easier Debugging: Bisect and blame work effectively
- Better Collaboration: Others can see and review work incrementally
Commit Message Convention
```
<type>(<scope>): <description>
```

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
Scope: module or feature area

Example: `feat(import): implement markdown folder import`

Workflow Per Story
- Implement the story end-to-end (all layers)
- Test thoroughly (`cargo test`, `pnpm typecheck`)
- Format code (`cargo fmt`, lint checks)
- Commit with descriptive message referencing the story
- Move to next story
Anti-Pattern: Batched Commits
❌ Bad: Work on all 4 stories, then make 1 giant commit at the end
- Risk of losing work
- No incremental progress visible
- Hard to review
- Hard to debug if issues arise

✅ Good: Complete story → commit → update Linear → next story
- Clear progress
- Easy rollback if needed
- Better git history

Python Sidecar Development
The Python sidecar (apps/python-sidecar/) is a separate Python project managed by uv. It runs as a long-lived process
that executes DSPy skill templates on behalf of the Rust agent harness.
Setup
```sh
cd apps/python-sidecar
uv sync --group dev   # Install runtime + dev dependencies
```

Development commands

```sh
uv run pytest                  # Run tests (coverage enabled by default)
uv run ruff check .            # Lint
uv run ruff format --check .   # Format check
uv run mypy src/               # Strict type checking
```

Building the binary
The sidecar ships as a self-contained PyInstaller binary (~70 MB) that bundles Python 3.13 + all dependencies:
```sh
./tools/dev/build-python-sidecar.sh
```

The binary is placed at `apps/desktop/src-tauri/binaries/inklings-py` and bundled into the Tauri release via `externalBin` in `tauri.conf.json`.
Key files
| Path | Purpose |
|---|---|
| `src/inklings/dspy/dispatcher.py` | Request routing (health_check, execute, optimize, manifest, configure) |
| `src/inklings/dspy/execute.py` | Template execution handler |
| `src/inklings/dspy/lm_state.py` | InklingsLM adapter — routes LLM calls back to Rust via IPC |
| `src/inklings/dspy/templates/` | Shipped DSPy template implementations |
| `src/inklings/dspy/templates/registry.py` | Template lookup by template_id |
| `main.py` | Entry point — async stdin/stdout JSON-RPC loop |
See Adding a DSPy Template for how to add new templates, and the Python Sidecar IPC Reference for the JSON-RPC protocol specification.
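For orientation, here is a hedged Rust sketch of how a host process might talk to the sidecar's stdin/stdout JSON-RPC loop. The binary path and the `health_check` method come from this page; the envelope fields are assumed standard JSON-RPC 2.0, so consult the IPC Reference for the real protocol.

```rust
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Spawn the sidecar binary produced by the build step above.
    let mut child = Command::new("apps/desktop/src-tauri/binaries/inklings-py")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;

    // One JSON-RPC request per line over stdin; "health_check" is one of
    // the dispatcher methods in the key-files table.
    let request = r#"{"jsonrpc":"2.0","id":1,"method":"health_check","params":{}}"#;
    writeln!(child.stdin.as_mut().unwrap(), "{request}")?;

    // Read one response line back from stdout.
    let mut line = String::new();
    BufReader::new(child.stdout.take().unwrap()).read_line(&mut line)?;
    println!("sidecar replied: {line}");

    child.kill()?;
    Ok(())
}
```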
When to Deviate
Acceptable Horizontal Slices
- Infrastructure Setup: Tauri project initialization, tooling configuration
- Foundation Components: App shell, theme setup, navigation scaffolding
Justification Required: Explain why vertical isn’t possible.
Refactoring Strategy
After 2-3 Slices: Identify Patterns
Wait for code duplication across use cases, then extract abstractions.
Rule: Wait for 3 instances before abstracting. Don’t prematurely optimize.
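As a hypothetical example of the rule of three: suppose `create.rs`, `update.rs`, and `rename.rs` have each grown an identical title check. The third instance justifies extracting one shared helper (illustrative code, not from the codebase):

```rust
/// Shared domain helper extracted after the same check appeared three times.
pub fn validate_title(title: &str) -> Result<&str, String> {
    let trimmed = title.trim();
    if trimmed.is_empty() {
        return Err("title must not be empty".to_string());
    }
    if trimmed.len() > 255 {
        return Err("title exceeds 255 characters".to_string());
    }
    Ok(trimmed)
}
```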
Testing Strategy
Test Pyramid
```
        /\
       /  \        E2E Tests (~5%)
      /    \       - Complete user workflows
     /      \      - Happy paths only
    /--------\
   /          \
  / Integration \  (~25%)
 /    Tests      \ - Tauri commands + storage
/                 \
/------------------\
/    Unit Tests     \ (~70%)
/--------------------\ - Domain logic, pure functions
```

Distribution Rationale
70% Unit Tests:
- Domain logic is pure (no external dependencies)
- Fast feedback loop (milliseconds)
- Easy to write and maintain
25% Integration Tests:
- Verify Tauri commands work with real storage
- Test SQLite operations and query correctness
- Moderate speed (seconds)
5% E2E Tests:
- Validate critical user flows
- Smoke tests for releases
- Slow (seconds to minutes)
Layer-Specific Testing
Domain Layer (Rust)
Strategy: Pure unit tests, no mocks needed
What to Test:
- Entity validation rules
- Business logic correctness
- Invariant enforcement (see domain-rules.md)
```rust
#[test]
fn new_page_has_initial_block() {
    let page = Page::new("Test Page");
    assert_eq!(page.blocks.len(), 1);
    // slot_id: ordinal position of a block within its parent page,
    // determining display order
    assert_eq!(page.blocks[0].slot_id, 1);
}
```

Coverage Goal: 100% (pure logic, no reason not to)
Application Layer (Rust)
Strategy: Unit tests with mocked repositories
What to Test:
- Orchestration logic
- Error handling
- Business rule enforcement
```rust
#[test]
fn create_page_creates_initial_block() {
    let mock_repo = MockPageRepository::new();
    let use_case = CreatePageUseCase::new(mock_repo);

    let result = use_case.execute(CreatePageRequest {
        title: "Test Page".to_string(),
        ..Default::default()
    });

    assert!(result.is_ok());
    assert_eq!(result.unwrap().blocks.len(), 1);
}
```

Coverage Goal: >90%
Infrastructure Layer (Rust)
Strategy: Integration tests with real SQLite databases (temp directories)
What to Test:
- SQLite storage operations
- Query correctness and migration integrity
- Repository trait implementations
```rust
#[test]
fn can_save_and_load_page() {
    let temp_dir = tempdir().unwrap();
    let db = WorkspaceDatabase::open(temp_dir.path()).unwrap();
    let repo = SqlitePageRepository;

    let page = Page::new("Test Page");
    repo.save(&db, &page).unwrap();

    let loaded = repo.get_by_id(&db, page.id).unwrap();
    assert_eq!(loaded.unwrap().title, "Test Page");
}
```

Coverage Goal: >80%
Tauri Commands (Integration)
Strategy: Test command handlers with real dependencies
```rust
#[tokio::test]
async fn get_page_command_returns_page() {
    let app = setup_test_app().await;
    let page = create_test_page(&app).await;

    let result: Page = app.invoke("get_page", &GetPageArgs { id: page.id }).await;

    assert_eq!(result.title, page.title);
}
```

Frontend (TypeScript/React)
Strategy: Component tests with mocked Tauri commands
```tsx
import { render, screen } from '@testing-library/react';
import { mockIPC } from '@tauri-apps/api/mocks';

test('PageView displays page title', async () => {
  mockIPC((cmd) => {
    if (cmd === 'get_page') {
      return { id: '123', title: 'Test Page', blocks: [] };
    }
  });

  render(<PageView pageId="123" />);

  expect(await screen.findByText('Test Page')).toBeInTheDocument();
});
```

Test Organization
The codebase uses the sibling tests.rs submodule pattern for test organization. This keeps production code files
focused and navigable while retaining full access to private items.
Pattern
```rust
// In source file: declaration only
#[cfg(test)]
mod tests;
```

```rust
// In tests.rs (sibling file):
use super::*;
// ... test functions, helpers, fixtures
```

File Layout
For `foo.rs`:

```
module/
├── foo.rs        # Production code + #[cfg(test)] mod tests;
└── foo/
    └── tests.rs  # Test code: use super::*;
```

For `foo/mod.rs`:

```
foo/
├── mod.rs        # Production code + #[cfg(test)] mod tests;
└── tests.rs      # Test code: use super::*;
```

When to Use Each Pattern
| Scenario | Pattern |
|---|---|
| File total > ~300 LOC | Sibling tests.rs (default) |
| Test LOC > production LOC | Sibling tests.rs |
| File < 200 LOC with simple tests | Inline mod tests { } acceptable |
| Shared test helpers (substantial, reused) | Separate test_helpers.rs |
Cross-layer integration tests with real SQLite live in tests/core/tests/:
```
tests/core/tests/            # Cross-layer tests with real SQLite
├── page_lifecycle.rs
├── workspace_lifecycle.rs
├── tag_tests.rs
├── layout_lifecycle.rs
├── rename_with_links.rs
├── llm_integration.rs       # LLM integration tests (all #[ignore])
└── ...
```

LLM Integration Tests (#[ignore])
Tests that require external infrastructure (Ollama, real API keys) are marked #[ignore] and live in
tests/core/tests/llm_integration.rs. They are never run in CI — they exist for developer verification on local
machines.
```sh
# Requires Ollama running on localhost:11434 with qwen3:4b pulled
cargo test -p core-tests --test llm_integration -- --ignored
```

Each test calls `OllamaTestProvider::try_new()` at the start. If Ollama is not reachable the test skips — it does not fail.
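The skip-not-fail pattern looks roughly like this. `OllamaTestProvider::try_new()` is the real entry point named above; its return type and the test body are assumptions for illustration:

```rust
// Stub standing in for the real provider in tests/core/tests/.
struct OllamaTestProvider;

impl OllamaTestProvider {
    fn try_new() -> Option<Self> {
        None // real code probes localhost:11434 and returns Some if reachable
    }
}

#[test]
#[ignore] // never runs in CI; opt in locally with `-- --ignored`
fn summarizes_page_with_local_model() {
    let Some(_provider) = OllamaTestProvider::try_new() else {
        eprintln!("Ollama not reachable; skipping");
        return; // skip, do not fail
    };
    // ... exercise the LLM-backed behavior here ...
}
```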
See running-tests.md for the full setup guide.
Test Naming Convention
Pattern: `test_[what]_[condition]_[expected]`

Examples:

- `test_page_requires_title`
- `test_get_page_returns_not_found_for_missing`
- `test_create_page_enforces_depth_limit`
Coverage Goals
| Layer | Target | Rationale |
|---|---|---|
| Domain | 100% | Pure logic, no excuse |
| Application | >90% | Orchestration critical |
| Infrastructure | >80% | Some I/O edge cases OK |
| Commands | >70% | Focus critical paths |
| Overall | >85% | Confidence in changes |
Testing Business Rules
Every invariant in domain-rules.md MUST have tests:
```rust
// Rule 1: Min 1 block per page
#[test]
fn create_page_creates_initial_block() { ... }

#[test]
fn cannot_delete_last_block() { ... }

// Rule 2: Cycle prevention
#[test]
fn cannot_move_page_to_descendant() { ... }
```

Anti-Patterns to Avoid
1. Technology-First Slicing
❌ Sprint 1: Set up all crates → Sprint 2: Create all entities → …
✅ Sprint 1: "User can view a page" (all layers)
2. Layer-Specific Sprints
❌ “Frontend Sprint” → “Backend Sprint”
✅ “Page Viewing Sprint” (frontend + backend together)
3. Shared Request/Response Models
❌ One giant `UpdateRequest` with 20 optional fields
✅ Specific `UpdatePageTitleRequest`, `MovePageRequest`
Key Principle: If you can’t demo it, you haven’t finished it. Every vertical slice should be potentially shippable.
Feature Flags
The desktop app (apps/desktop/src-tauri/Cargo.toml) uses Cargo feature flags to control optional functionality:
| Flag | Default | Description |
|---|---|---|
| `custom-protocol` | enabled | Required for Tauri production builds (asset serving). |
| `embeddings` | enabled | ONNX Runtime semantic search. Disable with `--no-default-features` for faster compilation during UI-only development. |
To build without embeddings (faster compile, no ONNX download):
```sh
cd apps/desktop/src-tauri && cargo build --no-default-features --features custom-protocol
```

Schema Migrations
Overview
Inklings uses SQLite databases with rusqlite_migration for schema versioning. There are two types of databases:
- Global database: Settings and recent workspaces (stored in app settings directory)
- Workspace database: Per-workspace storage for pages, blocks, and metadata (stored in `{workspace}/.inklings/inklings.db`)
Migration code is located in crates/infrastructure/sqlite/src/migrations/mod.rs.
Adding a New Migration
1. Define the Migration SQL
The consolidated V001 baseline contains the full current schema. To add the first incremental migration (V002), define a new constant:
```rust
/// Workspace database schema v002 - Add example column (first incremental after V001 baseline)
const WORKSPACE_V002: &str = r#"
-- Add new column to pages
ALTER TABLE pages ADD COLUMN example_field TEXT;

-- Create any new indexes
CREATE INDEX IF NOT EXISTS idx_pages_example ON pages(example_field);
"#;
```

2. Register the Migration
Add the migration to the workspace_migrations() function:
```rust
fn workspace_migrations() -> Migrations<'static> {
    Migrations::new(vec![
        M::up(WORKSPACE_V001), // Consolidated baseline schema
        M::up(WORKSPACE_V002), // <-- First incremental migration
    ])
}
```

3. Update Version Constants
Update CURRENT_WORKSPACE_VERSION to match the new version number:
```rust
pub const CURRENT_WORKSPACE_VERSION: usize = 2; // Was 1
```

4. Add Tests
Write tests to verify the migration works correctly:
```rust
#[test]
fn test_v002_migration_adds_example_field() {
    let mut conn = Connection::open_in_memory().unwrap();
    run_workspace_migrations(&mut conn).expect("Migrations should succeed");

    // Verify new column exists
    let result: String = conn
        .query_row("SELECT example_field FROM pages LIMIT 1", [], |row| row.get(0))
        .unwrap_or_default(); // Column exists (query didn't fail)
}
```

Version Compatibility
Forward Compatibility
When opening a workspace created by a newer version of the app:
- The `check_workspace_compatibility()` function detects this
- Returns `SchemaCompatibility::TooNew { db_version, app_version }`
- `run_workspace_migrations()` returns `MigrationError::IncompatibleVersion`
- The UI should show a user-friendly error message
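A stubbed sketch of that flow; the real enum and error types live in the sqlite infrastructure crate and carry more variants:

```rust
// Stand-ins for the types named above.
enum SchemaCompatibility {
    Compatible,
    TooNew { db_version: usize, app_version: usize },
}

#[derive(Debug)]
enum MigrationError {
    IncompatibleVersion { db_version: usize, app_version: usize },
}

fn open_workspace(compat: SchemaCompatibility) -> Result<(), MigrationError> {
    match compat {
        SchemaCompatibility::TooNew { db_version, app_version } => {
            // Never migrate a newer schema; surface the error so the UI
            // can ask the user to update the app.
            Err(MigrationError::IncompatibleVersion { db_version, app_version })
        }
        SchemaCompatibility::Compatible => {
            // run_workspace_migrations(&mut conn) would run here.
            Ok(())
        }
    }
}
```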
Backward Compatibility
When opening a workspace created by an older version:
- Migrations run automatically on database open
- Old data is preserved and enhanced with new schema
Best Practices
DO
- Use `ALTER TABLE` for adding columns (non-destructive)
- Create indexes with `IF NOT EXISTS` for idempotency
- Update triggers carefully (drop and recreate if needed)
- Test migrations on real databases before releasing
DON’T
- Delete data without backup/migration path
- Remove columns without deprecation period
- Implement downgrade migrations (one-way only)
- Change column types directly (create new column, migrate data, drop old)
Entity Conventions: ref_code
New entities that will be addressable from outside the app (via deep link, MCP tool, or share URL) should include a `ref_code` field:

- Add `ref_code: String` (or `pub ref_code: RefCode`) to the domain entity struct.
- Initialize with `RefCode::generate()` (from `crates/domain/src/identifiers.rs`) in the constructor.
- Add to the migration SQL:

```sql
ALTER TABLE my_table ADD COLUMN ref_code TEXT NOT NULL DEFAULT '';
UPDATE my_table SET ref_code = hex(randomblob(8)) WHERE ref_code = '';
CREATE UNIQUE INDEX idx_my_table_ref_code ON my_table(ref_code) WHERE ref_code != '';
```
The partial index (WHERE ref_code != '') prevents false uniqueness violations during backfill. See
apps/codex/src/content/docs/architecture/identifier-strategy.mdx for the full three-tier identifier model.
Entities used purely as junction tables (e.g., page_tags, type_property_refs) do not need a ref_code.
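In the domain layer, the convention looks roughly like this. `RefCode::generate()` is the real helper in `crates/domain/src/identifiers.rs`; it is stubbed here so the example is self-contained, and `Faction` is a hypothetical entity:

```rust
// Stub for the real RefCode in crates/domain/src/identifiers.rs.
pub struct RefCode(pub String);

impl RefCode {
    pub fn generate() -> Self {
        // Stand-in; the real generator produces a short unique code.
        RefCode("a1b2c3d4e5f6a7b8".to_string())
    }
}

// Hypothetical externally addressable entity.
pub struct Faction {
    pub name: String,
    /// Addressable from deep links, MCP tools, and share URLs.
    pub ref_code: RefCode,
}

impl Faction {
    pub fn new(name: String) -> Self {
        Self {
            name,
            ref_code: RefCode::generate(),
        }
    }
}
```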
Testing Migrations
```sh
# Run migration tests specifically
cargo test -p infrastructure-sqlite migrations

# Run all infrastructure tests
cargo test -p infrastructure-sqlite

# Verify with a real database
./tools/dev/reset-app-data.sh   # Start fresh
pnpm desktop:dev                # App will run migrations on startup
```

Troubleshooting
“Database version is newer than app version”
The workspace was created or modified by a newer version of Inklings. Update the app to the latest version.
“Migration failed”
Check the specific error message. Common causes:
- Disk full
- Permission denied
- Corrupted database file
If a migration fails partway through, the database may be in an inconsistent state. Restore from backup if available, or
delete {workspace}/.inklings/inklings.db to recreate (loses page data).
MCP Server: Adding New Tools
When adding new Tauri commands, evaluate whether they should also be exposed as MCP tools for AI writing agents.
Checklist
- Evaluate: Does this command expose workspace data or modify workspace state that an AI writing tool would benefit from?
- Scope check: Read-only data access → MCP discovery/read tool. Write operations → MCP write tool (with `destructiveHint` on deletes).
- If in scope: Add tool handler to `src-tauri/src/mcp/tools/{discovery,read,write}.rs`, add parameter struct + `#[tool]` method to `InklingsService` in `server.rs`, add permission guard resolution (see the sketch after this checklist).
- Test: Verify tool appears in MCP tool list, parameter schema is correct.
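A hedged sketch of the parameter-struct half of that step. The real handlers use the `#[tool]` attribute on `InklingsService` in `server.rs`; the field names and return type below are illustrative only:

```rust
use serde::Deserialize;

// Illustrative parameter struct; the real one derives the schema that
// the MCP tool list exposes to agents.
#[derive(Deserialize)]
struct GetPageParams {
    /// ref_code of the page to fetch.
    page_ref: String,
}

struct InklingsService;

impl InklingsService {
    // In the real server this method carries #[tool] metadata and
    // resolves a permission guard before touching workspace data.
    fn get_page(&self, params: GetPageParams) -> Result<String, String> {
        // ... resolve guard, load the page, serialize it for the agent ...
        Ok(format!("page {}", params.page_ref))
    }
}
```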
Out-of-Scope Examples
- Settings commands (`get_settings`, `set_mcp_enabled`, etc.) — user-facing configuration, not relevant to agent workflows.
- Import commands — one-time user-initiated operations.
- Auth/sync commands — infrastructure-level operations.
MCP Server Lifecycle
The MCP server starts automatically when a workspace is opened (if mcp_enabled is true in settings) and stops when the
workspace is closed or the app shuts down. Configuration is in crates/domain/src/settings.rs (mcp_enabled,
mcp_port). Server lifecycle is managed in src-tauri/src/commands/workspace.rs (start_mcp_server). See
ADR-007: Agent Integration via MCP and Sync for the full design
rationale.
See Release Playbook for the end-to-end release and distribution process.