# Add an MCP Tool

Add a new tool to the in-process MCP server so AI writing assistants can call it.
## Goal

Expose a new workspace operation as an MCP tool callable by clients such as Claude Desktop or Cursor.
## Prerequisites

- Familiarity with the Clean Architecture layers (domain → application → infrastructure → framework)
- The operation you want to expose must already exist as an application use case with a Tauri command, or you must add one first
- The `rmcp` crate is already in `Cargo.toml` for `apps/desktop/src-tauri`
## How the MCP server is structured

```text
apps/desktop/src-tauri/src/mcp/
├── mod.rs           # McpError type + PageRepositoryError conversion
├── auth.rs          # Bearer token middleware
├── resources.rs     # MCP resource handlers (inklings://... URIs)
├── server.rs        # InklingsService + tool router + McpServer lifecycle
├── state.rs         # McpState (Arc refs extracted from AppState)
└── tools/
    ├── mod.rs       # Re-exports discovery, read, write modules
    ├── discovery.rs # search, get_page_tree, get_backlinks, get_outgoing_links
    ├── read.rs      # read_page
    └── write.rs     # create_page, update_page_content, update_page_metadata,
                     # move_page, rename_page, delete_page
```

Every tool follows the same three-part pattern:

- A sync worker function in `tools/<category>.rs` that calls use cases via `McpState`
- A parameter struct in `server.rs` that derives `JsonSchema` and `Deserialize`
- An async method on `InklingsService` in `server.rs`, annotated with `#[tool(...)]`, that spawns the worker via `spawn_blocking`
## Steps

### Step 1: Write the worker function

Add the sync logic to the appropriate file in `tools/`. Use an existing tool as the template. The worker receives a `&McpState` reference and returns `Result<CallToolResult, McpError>`.
```rust
// In tools/read.rs (or a new category file)

/// Get a summary of tag usage across all pages.
pub fn get_tag_summary(
    state: &McpState,
    group: Option<&str>,
) -> Result<CallToolResult, McpError> {
    tracing::info!(tool = "get_tag_summary", "mcp_tool_invoked");
    let _workspace = state.require_workspace()?;
    let guard = state.resolve_owner_guard()?;

    let use_case = application::ListTagsUseCase::new(state.tag_repository.clone());
    // The use case applies the optional group filter.
    let tags = use_case
        .execute(&guard, group)
        .map_err(|e| McpError::Internal(e.to_string()))?;

    #[derive(Serialize)]
    struct TagSummary {
        name: String,
        page_count: u32,
    }

    let results: Vec<TagSummary> = tags
        .into_iter()
        .map(|t| TagSummary { name: t.name, page_count: t.page_count })
        .collect();

    let json = serde_json::to_string(&results)
        .map_err(|e| McpError::Internal(e.to_string()))?;

    Ok(CallToolResult::success(vec![Content::text(json)]))
}
```

Key points:

- Always call `state.require_workspace()?` first — it returns `McpError::NoWorkspace` if no workspace is open
- Always call `state.resolve_owner_guard()?` before any use case — the guard carries capabilities
- Return data as JSON text via `Content::text(json)`. Serialize to a flat struct, not a raw domain type
### Step 2: Add a parameter struct (if needed)

Tools with no parameters (like `get_page_tree`) need no parameter struct. Tools with parameters need a struct in `server.rs` that derives both `JsonSchema` and `Deserialize`. The `#[schemars(description = "...")]` attribute becomes the field description visible to AI clients.
```rust
// In server.rs, near the other parameter structs
#[derive(Debug, Deserialize, schemars::JsonSchema)]
struct GetTagSummaryParams {
    /// Optional tag group name to filter by.
    #[schemars(description = "Tag group name to filter (omit for all tags)")]
    group: Option<String>,
}
```

For tools that reuse an existing parameter shape (e.g., slug-only tools), reuse `SlugParams` rather than creating a duplicate.
### Step 3: Add the tool method to InklingsService

Add the async method inside the `#[tool_router]` impl block in `server.rs`. The method must:

- Be annotated with `#[tool(name = "...", description = "...")]`
- Accept `Parameters<YourParams>` (or nothing for zero-parameter tools)
- Clone state, spawn a blocking task, and map both error types
```rust
// In server.rs, inside #[tool_router] impl InklingsService

#[tool(
    name = "get_tag_summary",
    description = "List all tags in the workspace with page counts. \
        Optionally filter by tag group."
)]
async fn get_tag_summary(
    &self,
    Parameters(params): Parameters<GetTagSummaryParams>,
) -> Result<CallToolResult, ErrorData> {
    let span = tracing::info_span!("mcp_request", tool = "get_tag_summary");
    let _guard = span.enter();
    let state = self.state.clone();
    tokio::task::spawn_blocking(move || {
        tools::read::get_tag_summary(&state, params.group.as_deref())
    })
    .await
    .map_err(|e| ErrorData::internal_error(e.to_string(), None))?
    .map_err(|e| mcp_error_to_error_data(&e))
}
```

The `spawn_blocking` wrapper is required because the underlying SQLite calls are synchronous. Never call blocking code directly in an async context.
### Step 4: Expose required repositories via McpState (if needed)

If your tool needs a repository that is not already in `McpState`, add it:

- In `state.rs`, add the field to `McpState`
- In `McpState::from_app_state()`, extract the `Arc` ref from `AppState`

```rust
// In state.rs
pub struct McpState {
    // ... existing fields
    pub tag_repository: Arc<SqliteTagRepository>,
}
```

```rust
// In from_app_state():
tag_repository: Arc::clone(&state.repos.tag_repository),
```

### Step 5: Handle McpError conversions (if needed)
If your use case returns an error type not already covered by `mod.rs`, add a `From` impl:

```rust
// In mod.rs
impl From<application::TagRepositoryError> for McpError {
    fn from(err: application::TagRepositoryError) -> Self {
        use commands::UserFacingError;
        let msg = err.user_message();
        match &err {
            application::TagRepositoryError::NotFound(_) => McpError::NotFound(msg),
            _ => McpError::Internal(msg),
        }
    }
}
```

## Verification
After implementing:

```shell
# Type-check the Rust
cargo check -p inklings-desktop

# Run Rust tests
cargo test -p inklings-desktop

# Start the app in dev mode
pnpm desktop:dev
```

To test the tool manually, get the MCP token from the app settings panel, then call the health endpoint and the tool:
```shell
# Confirm the server is running
curl http://127.0.0.1:7862/health

# Call your tool
curl -X POST http://127.0.0.1:7862/mcp \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"get_tag_summary","arguments":{}}}'
```

The `tools/list` method returns all registered tools with their schemas — useful for confirming your tool is visible.

```shell
curl -X POST http://127.0.0.1:7862/mcp \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
```

## Common mistakes
**Calling blocking code in the async method.** Always use `spawn_blocking`. Repository methods go through SQLite and must not run on the async executor thread.
**Constructing domain types directly in the tool.** Return serializable structs defined locally (like `PageInfo` in `read.rs`), not raw domain types. Domain types may not serialize the way AI clients expect.
**Forgetting `require_workspace()`.** Without this check, a tool can panic on `None` when no workspace is open.
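Under the hood this guard is just an Option-to-Result conversion, which lets callers use `?` instead of unwrapping. A minimal std-only sketch, with hypothetical stand-ins for the real `McpState`, `McpError`, and `tool_body`:

```rust
// Hypothetical stand-ins for the real types, for illustration only.
#[derive(Debug, PartialEq)]
enum McpError {
    NoWorkspace,
}

struct McpState {
    workspace: Option<String>, // None until a workspace is opened
}

impl McpState {
    // Convert the Option into a Result so callers can propagate with `?`
    // instead of unwrapping (which would panic on None).
    fn require_workspace(&self) -> Result<&String, McpError> {
        self.workspace.as_ref().ok_or(McpError::NoWorkspace)
    }
}

fn tool_body(state: &McpState) -> Result<String, McpError> {
    let ws = state.require_workspace()?; // early, before any repository work
    Ok(format!("workspace: {ws}"))
}

fn main() {
    let closed = McpState { workspace: None };
    // With no workspace open, the tool returns a clean error, not a panic.
    assert_eq!(tool_body(&closed), Err(McpError::NoWorkspace));

    let open = McpState { workspace: Some("notes".to_string()) };
    assert_eq!(tool_body(&open), Ok("workspace: notes".to_string()));
}
```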
**Missing `#[allow(dead_code)]` for proc-macro generated dispatch.** The `#[tool_handler]` macro generates runtime dispatch invisible to clippy. The existing `#[allow(dead_code)]` comment on `McpError` variants shows the pattern — add targeted allows with a comment naming the macro if clippy flags new variants.
## See Also

- `apps/desktop/src-tauri/src/mcp/` — full MCP server source
- `docs/ADR/007-agent-integration-mcp-and-sync.md` — ADR for MCP architecture decisions
- `docs/solutions/build-errors/proc-macro-dead-code-false-positives.md` — clippy false positives from rmcp macros