# Sync

How local edits flow to Supabase and how remote changes merge back into local SQLite.

## Overview

## Step-by-Step Details

### 1. Push Trigger

Code: apps/desktop/src-tauri/src/side_effects.rs
WriteEffectCoordinator enqueues a sync entry whenever content changes. Three kinds of change are queued separately:
| Trigger | Queue Mechanism |
|---|---|
| Block content save | SyncQueueRepository::enqueue(block_id, update_bytes, version_vector) |
| Page metadata change (title, parent, type, icon) | PageSyncOperations::get_pending_metadata pulls from a dedicated queue |
| Page deletion | PageSyncOperations::get_pending_deletions pulls from a deletion queue |
The sync queue stores Loro incremental update bytes, not the full snapshot. This keeps queue entries small and bandwidth-efficient.
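The queue shape can be sketched as follows. This is a minimal in-memory illustration, not the real SyncQueueRepository (which persists to SQLite); the struct and field names are assumptions chosen to match the description above.

```rust
// Illustrative sync-queue entry holding an incremental Loro update
// rather than a full document snapshot, which keeps entries small.
#[derive(Debug, Clone)]
struct SyncQueueEntry {
    block_id: String,
    update_bytes: Vec<u8>,   // incremental CRDT update, not a snapshot
    version_vector: Vec<u8>, // opaque encoded version vector
    retry_count: u32,
}

#[derive(Default)]
struct SyncQueue {
    entries: Vec<SyncQueueEntry>,
}

impl SyncQueue {
    fn enqueue(&mut self, block_id: &str, update_bytes: Vec<u8>, version_vector: Vec<u8>) {
        self.entries.push(SyncQueueEntry {
            block_id: block_id.to_string(),
            update_bytes,
            version_vector,
            retry_count: 0,
        });
    }

    /// Oldest-first batch, mirroring dequeue_batch(50) in the push phase.
    fn dequeue_batch(&self, limit: usize) -> Vec<&SyncQueueEntry> {
        self.entries.iter().take(limit).collect()
    }
}
```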
### 2. Sync Cycle Scheduling

Code: crates/application/src/sync/sync_engine.rs
The SyncEngine is driven by SyncManager in the framework layer. Cycles run every 5 seconds when online
(sync_interval). Realtime events from Supabase WebSocket can trigger an immediate wake via a tokio::sync::Notify
handle.
The engine tracks consecutive_failures for exponential backoff. When all pushes in a cycle fail, SyncStatus
transitions to Offline and backoff duration doubles (capped at 60s).
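One plausible backoff schedule consistent with the description (doubling per consecutive failed cycle, capped at 60s) looks like this. The constant names and the exact sequence are assumptions; the real engine keeps this state alongside consecutive_failures.

```rust
use std::time::Duration;

// Assumed constants: 5s polling interval, 60s backoff cap.
const BASE_INTERVAL: Duration = Duration::from_secs(5);
const MAX_BACKOFF: Duration = Duration::from_secs(60);

fn next_wait(consecutive_failures: u32) -> Duration {
    if consecutive_failures == 0 {
        return BASE_INTERVAL;
    }
    // 10s, 20s, 40s, 60s, 60s, ...
    BASE_INTERVAL
        .checked_mul(2u32.saturating_pow(consecutive_failures))
        .unwrap_or(MAX_BACKOFF)
        .min(MAX_BACKOFF)
}
```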
### 3. Phase 1 — Push Block Updates

Code: crates/application/src/sync/sync_engine.rs — push_block_phase
The engine calls SyncQueueRepository::dequeue_batch(50) to fetch the oldest 50 pending updates. All are sent in a
single push_block_updates_batch call to the remote. Per-entry results are processed:
- Success: mark_synced(queue_id) removes the entry from the queue
- Failure: mark_failed(queue_id) increments retry_count; after the maximum (configurable), the entry is moved to a dead-letter queue
If all pushes fail, the cycle returns a SyncFailed error and the engine transitions to Offline.
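The per-entry result handling can be sketched as below. The types here are simplified stand-ins for the real SyncQueueRepository API, and the retry limit of 5 is an assumption (the source says it is configurable).

```rust
use std::collections::HashMap;

const MAX_RETRIES: u32 = 5; // assumed; the real limit is configurable

enum PushResult {
    Success,
    Failure,
}

struct QueueState {
    retry_counts: HashMap<u64, u32>,
    dead_letter: Vec<u64>,
}

impl QueueState {
    fn mark_synced(&mut self, queue_id: u64) {
        self.retry_counts.remove(&queue_id);
    }

    fn mark_failed(&mut self, queue_id: u64) {
        let count = self.retry_counts.entry(queue_id).or_insert(0);
        *count += 1;
        if *count >= MAX_RETRIES {
            // Give up on this entry: move it to the dead-letter queue so
            // it no longer blocks or slows the regular sync cycle.
            self.retry_counts.remove(&queue_id);
            self.dead_letter.push(queue_id);
        }
    }
}

fn process_batch(state: &mut QueueState, results: &[(u64, PushResult)]) -> bool {
    let mut any_ok = false;
    for (queue_id, result) in results {
        match result {
            PushResult::Success => {
                state.mark_synced(*queue_id);
                any_ok = true;
            }
            PushResult::Failure => state.mark_failed(*queue_id),
        }
    }
    any_ok // false => every push failed => engine transitions to Offline
}
```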
### 4. Phase 2 — Push Metadata

Code: push_metadata_phase in sync engine
Pending metadata fields are fetched from the local queue via get_pending_metadata. Each field (e.g., title,
parent_slug, page_type, icon) is pushed with its changed_at timestamp and device_id. The remote preserves
these timestamps for LWW comparison on other devices.
### 5. Phase 3 — Push Deletions

Code: push_deletions_phase in sync engine
Pending page deletions are fetched and pushed as tombstones. Each tombstone carries the page title (for notification
display on other devices) and the originating device_id.
### 6. Phase 4 — Pull Block Updates (CRDT Merge)

Code: pull_phase in sync engine
The engine reads the local sync cursor (last successfully processed block_update.id) and fetches remote updates since
that point. For each update:
- Load the current local Loro snapshot from BlockStorageRepository
- Call LoroMerger::apply_updates(snapshot, update_bytes) — a pure CRDT merge with no conflict resolution needed
- Persist the merged snapshot and its materialized text
- Update the sync_state table with updated version vectors
- Advance the cursor only after the block saves successfully
Cursor safety invariant: The cursor is never advanced past a block that failed to merge or save. This ensures re-delivery of any update that was not fully processed.
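The cursor-safety invariant can be sketched as a simple loop: the cursor advances past an update only after merge and save succeed, so any partially processed update is re-delivered on the next cycle. The types are illustrative, not the engine's actual signatures.

```rust
struct RemoteUpdate {
    id: u64,            // monotonically increasing block_update.id
    update_bytes: Vec<u8>,
}

// merge_and_save stands in for "merge into the local snapshot and persist".
fn pull_phase(
    mut cursor: u64,
    updates: &[RemoteUpdate],
    merge_and_save: &mut dyn FnMut(&[u8]) -> Result<(), String>,
) -> u64 {
    for update in updates {
        match merge_and_save(&update.update_bytes) {
            Ok(()) => cursor = update.id, // advance only on success
            Err(_) => break, // stop here; this update is re-fetched next cycle
        }
    }
    cursor
}
```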
### 7. Phase 5 — Pull Metadata (LWW Merge)

Code: pull_metadata_phase in sync engine
Remote metadata changes are fetched since the metadata cursor (ISO-8601 timestamp). For each remote field:
- Fetch the local field value with its changed_at and device_id
- If remote changed_at is newer: apply the remote value with preserved timestamps
- If local is newer: discard the remote value (local wins LWW)
Timestamp pass-through invariant: Remote changed_at and device_id are stored verbatim — never regenerated
locally. This preserves the LWW ordering across all devices.
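The LWW decision, including the deterministic tie-breaker noted in the error-handling table below (on equal timestamps, the lexicographically higher device_id wins), can be sketched as a pure comparison. The struct and field names are illustrative.

```rust
use std::cmp::Ordering;

#[derive(Debug)]
struct FieldVersion {
    changed_at: String, // ISO-8601, so lexicographic order matches chronological order
    device_id: String,
}

fn remote_wins(local: &FieldVersion, remote: &FieldVersion) -> bool {
    match remote.changed_at.cmp(&local.changed_at) {
        Ordering::Greater => true,  // remote is newer
        Ordering::Less => false,    // local is newer: local wins LWW
        // Same timestamp: the higher device_id wins on every device,
        // so all replicas converge to the same value.
        Ordering::Equal => remote.device_id > local.device_id,
    }
}
```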
### 8. Phase 6 — Pull Deletions (Tombstone Cascade)

Code: pull_deletions_phase in sync engine
Remote tombstones are fetched since the deletion cursor. For each tombstone:
- Check if the page exists locally and whether it has local edits newer than the deletion
- Apply a soft-delete (cascades to all child pages via recursive CTE)
- Record the tombstone locally for audit/notification (noting whether there were conflicting local edits)
- Clean up any orphaned sync_queue entries for deleted pages
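The cascade itself is the traversal sketched below: collect the deleted page plus every descendant. The real implementation does this inside SQLite with a recursive CTE; this in-memory walk over an assumed parent-to-children map performs the same traversal.

```rust
use std::collections::HashMap;

// Given a parent -> children map, return the page and all of its
// descendants, i.e. everything a tombstone's soft-delete must cover.
fn cascade_targets(children: &HashMap<String, Vec<String>>, root: &str) -> Vec<String> {
    let mut targets = vec![root.to_string()];
    let mut stack = vec![root.to_string()];
    while let Some(page) = stack.pop() {
        if let Some(kids) = children.get(&page) {
            for kid in kids {
                targets.push(kid.clone());
                stack.push(kid.clone());
            }
        }
    }
    targets
}
```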
### 9. Realtime Wake

Code: crates/application/src/sync/services.rs — RealtimeSubscriptionProvider
SupabaseRealtimeProvider subscribes to the workspace’s Supabase Realtime channel. When the remote signals a change,
notify_one() is called on the engine’s Notify handle. The engine’s polling loop wakes immediately and runs a pull
cycle without waiting for the 5-second interval.
The provider debounces rapid-fire events before signalling to prevent thrash during bulk operations.
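A trailing-edge debounce of this kind can be sketched with logical timestamps standing in for tokio timers. The 250ms window and the exact firing policy are assumptions, not the provider's actual values.

```rust
const DEBOUNCE_MS: u64 = 250; // assumed debounce window

struct Debouncer {
    last_event_ms: Option<u64>,
}

impl Debouncer {
    /// Record a realtime event; repeated events keep pushing the window out.
    fn on_event(&mut self, now_ms: u64) {
        self.last_event_ms = Some(now_ms);
    }

    /// Returns true once the burst has gone quiet, i.e. when it is time
    /// to call notify_one() on the engine's Notify handle.
    fn should_fire(&mut self, now_ms: u64) -> bool {
        match self.last_event_ms {
            Some(last) if now_ms.saturating_sub(last) >= DEBOUNCE_MS => {
                self.last_event_ms = None; // fire once per burst
                true
            }
            _ => false,
        }
    }
}
```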
## Error Handling

| Failure | Behavior |
|---|---|
| All block pushes fail | SyncStatus::Offline; exponential backoff (doubles, capped at 60s) |
| Single block push fails | mark_failed; retry on next cycle; dead-letter after max retries |
| CRDT merge fails | Block skipped; cursor NOT advanced; re-delivered next cycle |
| Metadata push fails | Field stays in queue; retried next cycle |
| Deletion push fails | Tombstone stays in queue; retried next cycle |
| LWW conflict (same timestamp) | Higher device_id (lexicographic) wins (deterministic tie-breaker) |
| Deletion conflict (local edits) | Deletion applied; had_local_edits=true recorded in tombstone |
| Realtime disconnected | Falls back to polling interval; reconnect attempted by provider |
| Not authenticated | Sync cycle returns NotAuthenticated; manager logs and skips |
## Related

- Sync System — Full sync system architecture, queue schema, and state machine details
- Write Path — How block saves enqueue entries into the sync queue
- Auth System — Token management required before any sync operation
- Event Log System — Sync events are recorded alongside content events