# Rust Async Lock Anti-Patterns: Mutex Across Await and Token Refresh TOCTOU
## Problem 1: Holding tokio::Mutex Across Network I/O
### Symptoms
- UI freezes when checking sync status during manual sync
- Background sync loop blocked for 1-5 seconds
- `get_sync_status()` hangs until `force_sync` completes
### Anti-Pattern
```rust
// BAD: Lock held for entire sync cycle (1-5 seconds of network I/O)
pub async fn force_sync(engine: Arc<tokio::sync::Mutex<SyncEngine>>) {
    let mut eng = engine.lock().await;
    eng.sync_cycle(path, &cloud_id, device_id).await // Lock held during network calls
}
```

### Root Cause
`tokio::sync::Mutex` is designed to be held across `.await` points, but holding it for long-running operations (network I/O, disk I/O) blocks every other task that needs the same lock. A `std::sync::Mutex` guard is not `Send`, so the compiler rejects holding one across an `.await` in a spawned task; tokio's `Mutex` compiles cleanly and silently degrades performance instead.
### Solution
**Option A: Wake the background loop instead of running directly**
```rust
// GOOD: Wake the background loop, return immediately
pub fn force_sync(manager: &SyncManager) -> Result<(), Error> {
    manager.trigger_pull(); // Notify::notify_one() — non-blocking
    Ok(())
}
```

**Option B: Clone state, release lock, then operate**
```rust
// GOOD: Clone what you need, release lock, then do I/O
pub async fn force_sync(engine: Arc<tokio::sync::Mutex<SyncEngine>>) {
    let (path, cloud_id, device_id) = {
        let guard = engine.lock().await;
        (guard.path.clone(), guard.cloud_id.clone(), guard.device_id)
    }; // Lock released here

    // Run cycle without holding the lock
    run_sync_cycle(path, cloud_id, device_id).await
}
```

## Problem 2: Token Refresh Time-Of-Check-to-Time-Of-Use (TOCTOU) Race Condition
### Symptoms
- User randomly signed out under concurrent auth operations
- “Invalid refresh token” errors in logs
- Token refresh fails when realtime client and sync manager refresh simultaneously
### Anti-Pattern
```rust
// BAD: Read-release-network-reacquire pattern with shared mutable token
async fn refresh_session(&self) -> Result<String> {
    let refresh_token = {
        let session = self.session.read()?;
        session.refresh_token.clone()? // Read token
    }; // Lock released

    // Two threads can reach here with the SAME refresh token
    let resp = self.http.post("/token")
        .body(json!({ "refresh_token": refresh_token }))
        .send().await?; // First caller succeeds, invalidates token

    // Second caller fails — token already consumed by first caller
    self.store_session(&resp.access_token, &resp.refresh_token).await?;
    Ok(resp.access_token)
}
```

### Root Cause
OAuth refresh tokens are single-use. When two callers read the same refresh token before either completes the refresh, the second refresh fails because the first already consumed (invalidated) the token.
### Solution
Serialize refresh attempts with a dedicated mutex:
```rust
pub struct AuthRepository {
    session: RwLock<SessionStore>,
    refresh_lock: tokio::sync::Mutex<()>, // Serializes refresh attempts
}

async fn refresh_session(&self) -> Result<String> {
    let _guard = self.refresh_lock.lock().await; // Only one refresh at a time

    // Re-read: another caller may have refreshed while we waited
    let refresh_token = {
        let session = self.session.read()?;
        session.refresh_token.clone()?
    };

    let resp = self.http.post("/token")
        .body(json!({ "refresh_token": refresh_token }))
        .send().await?;

    self.store_session(&resp.access_token, &resp.refresh_token).await?;
    Ok(resp.access_token)
}
```

The second caller waits for the first to complete, then re-reads the freshly rotated refresh token from the session store, so its refresh also succeeds.
## Prevention
### Best Practices
- Never hold a lock across network I/O. If you need a lock and network access, clone what you need, release the lock, then do I/O.
- Serialize single-use token operations. Any operation that consumes a token (refresh, one-time codes) must be serialized to prevent TOCTOU.
- Prefer `tokio::sync::Notify` over `Mutex` for signaling. When the goal is "wake another task," use `Notify`, not `Mutex`.
- Use `parking_lot::RwLock` for read-heavy state. Sync status, connection state, and other frequently-read values benefit from its fast, non-poisoning reads.
### Warning Signs
- `engine.lock().await` followed by `.await` on a network/IO call inside the same guard scope
- `session.read()` → drop lock → network call → `session.write()` without serialization
- "Intermittent auth failures" or "random sign-outs" in production logs
- UI that freezes during background operations
### Audit Pattern
Search for these in async Rust code:
```shell
# Find locks held across await
rg "\.lock\(\)\.await" --type rust -A 5 | grep "\.await"

# Find read-release-write patterns on shared state
rg "session\.read\(\)" --type rust -A 10 | grep "session\.write\(\)"
```

## References
- INK-193: Fix token refresh race condition
- INK-194: Fix lock held across await in force_sync
- Commits: `921c14c`, `e0ac2c0`