
Sync

How local edits flow to Supabase and how remote changes merge back into local SQLite.



1. Local Enqueue

Code: apps/desktop/src-tauri/src/side_effects.rs

WriteEffectCoordinator enqueues a sync entry whenever content changes. Three kinds of change are queued separately:

| Trigger | Queue mechanism |
| --- | --- |
| Block content save | SyncQueueRepository::enqueue(block_id, update_bytes, version_vector) |
| Page metadata change (title, parent, type, icon) | PageSyncOperations::get_pending_metadata pulls from a dedicated queue |
| Page deletion | PageSyncOperations::get_pending_deletions pulls from a deletion queue |

The sync queue stores Loro incremental update bytes, not the full snapshot. This keeps queue entries small and bandwidth-efficient.
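
A minimal sketch of the enqueue step, assuming a rusqlite-backed sync_queue table; the table layout, column names, and function signature are illustrative, not the actual SyncQueueRepository API.

```rust
// Hypothetical shape of a sync_queue row and the enqueue step; the real
// SyncQueueRepository schema is not shown on this page and may differ.
use rusqlite::{params, Connection, Result};

fn enqueue_block_update(
    conn: &Connection,
    block_id: &str,
    update_bytes: &[u8],   // Loro incremental update, not a full snapshot
    version_vector: &[u8], // serialized version vector at export time
) -> Result<()> {
    conn.execute(
        "INSERT INTO sync_queue (block_id, update_bytes, version_vector, retry_count)
         VALUES (?1, ?2, ?3, 0)",
        params![block_id, update_bytes, version_vector],
    )?;
    Ok(())
}
```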

2. Sync Engine Loop

Code: crates/application/src/sync/sync_engine.rs

The SyncEngine is driven by SyncManager in the framework layer. Cycles run every 5 seconds when online (sync_interval). Realtime events from the Supabase WebSocket can trigger an immediate wake via a tokio::sync::Notify handle.

The engine tracks consecutive_failures for exponential backoff. When every push in a cycle fails, SyncStatus transitions to Offline and the backoff duration doubles (capped at 60s).
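
A minimal sketch of that driver loop, combining the polling tick, the realtime wake, and the doubling backoff. Engine and run_cycle are hypothetical stand-ins for the real SyncManager interface, and the 5s-base doubling schedule is an assumption consistent with the prose.

```rust
// Sketch of the driver loop: a 5s tick, an immediate realtime wake, and a
// backoff that doubles per consecutive failed cycle (capped at 60s).
use std::{sync::Arc, time::Duration};
use tokio::sync::Notify;

const SYNC_INTERVAL: Duration = Duration::from_secs(5);
const MAX_BACKOFF: Duration = Duration::from_secs(60);

struct Engine;
impl Engine {
    async fn run_cycle(&self) -> Result<(), ()> { Ok(()) } // placeholder
}

async fn drive(engine: Arc<Engine>, wake: Arc<Notify>) {
    let mut consecutive_failures: u32 = 0;
    loop {
        // 5s, 10s, 20s, ... 60s depending on consecutive failures.
        let delay = (SYNC_INTERVAL * 2u32.saturating_pow(consecutive_failures))
            .min(MAX_BACKOFF);
        tokio::select! {
            _ = tokio::time::sleep(delay) => {} // regular polling tick
            _ = wake.notified() => {}           // realtime wake-up
        }
        match engine.run_cycle().await {
            Ok(()) => consecutive_failures = 0,
            Err(()) => consecutive_failures += 1,
        }
    }
}
```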

3. Phase 1 — Push Block Updates

Code: push_block_phase in crates/application/src/sync/sync_engine.rs

The engine calls SyncQueueRepository::dequeue_batch(50) to fetch the oldest 50 pending updates. All are sent in a single push_block_updates_batch call to the remote. Per-entry results are processed:

  • Success: mark_synced(queue_id) removes the entry from the queue
  • Failure: mark_failed(queue_id) increments retry_count; after a configurable maximum, the entry is moved to a dead-letter queue

If all pushes fail, the cycle returns a SyncFailed error and the engine transitions to Offline.
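
The per-entry bookkeeping could look like the following sketch. The repository calls are passed in as closures, and the entry shape, error types, and the mark_failed return value are assumptions.

```rust
// Sketch of per-entry result handling in push_block_phase.
struct QueueEntry {
    queue_id: i64,
    update_bytes: Vec<u8>, // the Loro incremental update to push
}

enum SyncError { SyncFailed }

fn handle_push_results(
    batch: &[QueueEntry],
    results: &[Result<(), String>],   // one remote result per entry
    mark_synced: impl Fn(i64),        // removes the queue row
    mark_failed: impl Fn(i64) -> u32, // increments and returns retry_count
    move_to_dead_letter: impl Fn(i64),
    max_retries: u32,
) -> Result<(), SyncError> {
    let mut any_ok = false;
    for (entry, result) in batch.iter().zip(results) {
        match result {
            Ok(()) => { mark_synced(entry.queue_id); any_ok = true; }
            Err(_) => {
                if mark_failed(entry.queue_id) >= max_retries {
                    move_to_dead_letter(entry.queue_id);
                }
            }
        }
    }
    // All entries failing is the whole-cycle failure described above.
    if any_ok { Ok(()) } else { Err(SyncError::SyncFailed) }
}
```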

4. Phase 2 — Push Metadata

Code: push_metadata_phase in sync engine

Pending metadata fields are fetched from the local queue via get_pending_metadata. Each field (e.g., title, parent_slug, page_type, icon) is pushed with its changed_at timestamp and device_id. The remote preserves these timestamps for LWW comparison on other devices.
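
An illustrative shape for one pushed field follows. Only the changed_at and device_id requirements come from the description above; the other field names are assumptions.

```rust
// Hypothetical payload for one pushed metadata field.
struct MetadataFieldPush {
    page_id: String,
    field: String,      // e.g. "title", "parent_slug", "page_type", "icon"
    value: String,
    changed_at: String, // ISO-8601; preserved verbatim by the remote for LWW
    device_id: String,  // originating device; also the LWW tie-breaker
}
```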

5. Phase 3 — Push Deletions

Code: push_deletions_phase in sync engine

Pending page deletions are fetched and pushed as tombstones. Each tombstone carries the page title (for notification display on other devices) and the originating device_id.
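
Correspondingly, a tombstone might carry the following. The title and device_id fields come from the description above; deleted_at is an assumed field.

```rust
// Hypothetical tombstone payload.
struct PageTombstone {
    page_id: String,
    title: String,      // shown in deletion notifications on other devices
    deleted_at: String, // ISO-8601; compared against local edits on pull
    device_id: String,  // originating device
}
```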

6. Phase 4 — Pull Block Updates (CRDT Merge)

Code: pull_phase in sync engine

The engine reads the local sync cursor (last successfully processed block_update.id) and fetches remote updates since that point. For each update:

  1. Load the current local Loro snapshot from BlockStorageRepository
  2. Call LoroMerger::apply_updates(snapshot, update_bytes) — a pure CRDT merge with no conflict resolution needed
  3. Persist the merged snapshot and its materialized text
  4. Update the sync_state table with updated version vectors
  5. Advance the cursor only after the block saves successfully

Cursor safety invariant: The cursor is never advanced past a block that failed to merge or save. This ensures re-delivery of any update that was not fully processed.
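
A sketch of that invariant, with merge_and_save standing in for steps 1–4; the types and helper names are hypothetical.

```rust
// The cursor only moves past an update once merge AND save both succeed.
struct RemoteUpdate {
    id: i64,               // remote block_update.id, i.e. the cursor value
    update_bytes: Vec<u8>, // Loro incremental update
}

fn pull_block_updates(
    updates: &[RemoteUpdate], // fetched in order, starting after the cursor
    mut merge_and_save: impl FnMut(&RemoteUpdate) -> Result<(), String>,
    mut save_cursor: impl FnMut(i64),
) {
    for update in updates {
        match merge_and_save(update) {
            Ok(()) => save_cursor(update.id),
            // Stop without advancing; the failed update (and everything
            // after it) is re-delivered on the next cycle.
            Err(_) => break,
        }
    }
}
```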

7. Phase 5 — Pull Metadata (LWW)

Code: pull_metadata_phase in sync engine

Remote metadata changes are fetched since the metadata cursor (ISO-8601 timestamp). For each remote field:

  1. Fetch the local field value with its changed_at and device_id
  2. If remote changed_at is newer: apply the remote value with preserved timestamps
  3. If local is newer: discard the remote value (local wins LWW)

Timestamp pass-through invariant: Remote changed_at and device_id are stored verbatim — never regenerated locally. This preserves the LWW ordering across all devices.
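
With the tie-breaker from the failure table below (on equal timestamps, the lexicographically higher device_id wins), the decision reduces to a tuple comparison. This sketch assumes uniformly formatted UTC ISO-8601 strings, which order correctly under plain string comparison.

```rust
// Sketch of the LWW decision, including the deterministic tie-breaker.
struct FieldVersion {
    changed_at: String, // ISO-8601
    device_id: String,
}

fn remote_wins(local: &FieldVersion, remote: &FieldVersion) -> bool {
    // Newer timestamp wins; equal timestamps fall through to device_id.
    (&remote.changed_at, &remote.device_id) > (&local.changed_at, &local.device_id)
}
```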

8. Phase 6 — Pull Deletions (Tombstone Cascade)

Code: pull_deletions_phase in sync engine

Remote tombstones are fetched since the deletion cursor. For each tombstone:

  1. Check if the page exists locally and whether it has local edits newer than the deletion
  2. Apply a soft-delete (cascades to all child pages via a recursive CTE; see the sketch after this list)
  3. Record the tombstone locally for audit/notification (noting whether there were conflicting local edits)
  4. Clean up any orphaned sync_queue entries for deleted pages
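
A sketch of the cascading soft-delete from step 2, assuming pages(id, parent_id, deleted_at) columns; the real schema may differ.

```rust
// Soft-delete a page and all of its descendants in one statement.
use rusqlite::{params, Connection, Result};

fn soft_delete_cascade(conn: &Connection, page_id: &str, deleted_at: &str) -> Result<usize> {
    conn.execute(
        "WITH RECURSIVE descendants(id) AS (
             SELECT id FROM pages WHERE id = ?1
             UNION ALL
             SELECT p.id FROM pages p JOIN descendants d ON p.parent_id = d.id
         )
         UPDATE pages SET deleted_at = ?2
         WHERE id IN (SELECT id FROM descendants)",
        params![page_id, deleted_at],
    )
}
```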

9. Realtime Wake

Code: RealtimeSubscriptionProvider in crates/application/src/sync/services.rs

SupabaseRealtimeProvider subscribes to the workspace’s Supabase Realtime channel. When the remote signals a change, notify_one() is called on the engine’s Notify handle. The engine’s polling loop wakes immediately and runs a pull cycle without waiting for the 5-second interval.

The provider debounces rapid-fire events before signalling to prevent thrash during bulk operations.
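
A trailing-edge debounce in that spirit: a burst of events collapses into one notify_one() once the stream goes quiet. The 250 ms window and the mpsc event feed are illustrative choices, not taken from the provider.

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::{mpsc, Notify};

async fn debounce_events(mut events: mpsc::Receiver<()>, wake: Arc<Notify>) {
    while events.recv().await.is_some() {
        loop {
            tokio::select! {
                // Quiet for the whole window: stop absorbing and signal.
                _ = tokio::time::sleep(Duration::from_millis(250)) => break,
                // Another event inside the window: restart the timer.
                maybe = events.recv() => if maybe.is_none() { return },
            }
        }
        wake.notify_one(); // single wake for the whole burst
    }
}
```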


10. Failure Handling

| Failure | Behavior |
| --- | --- |
| All block pushes fail | SyncStatus::Offline; exponential backoff (doubles, capped at 60s) |
| Single block push fails | mark_failed; retried next cycle; dead-lettered after max retries |
| CRDT merge fails | Block skipped; cursor NOT advanced; re-delivered next cycle |
| Metadata push fails | Field stays in queue; retried next cycle |
| Deletion push fails | Tombstone stays in queue; retried next cycle |
| LWW conflict (same timestamp) | Higher device_id (lexicographic) wins; a deterministic tie-breaker |
| Deletion conflict (local edits) | Deletion applied; had_local_edits=true recorded on the tombstone |
| Realtime disconnected | Falls back to the polling interval; the provider attempts reconnection |
| Not authenticated | Sync cycle returns NotAuthenticated; manager logs and skips |

Related

  • Sync System — Full sync system architecture, queue schema, and state machine details
  • Write Path — How block saves enqueue entries into the sync queue
  • Auth System — Token management required before any sync operation
  • Event Log System — Sync events are recorded alongside content events
