fix(webapp): auto-recover replication services after stream errors #3613

ericallam wants to merge 2 commits into
Conversation
Walkthrough

This PR adds configurable error recovery for the runs and sessions replication services. When a logical replication stream fails (e.g., during a database failover), the system can reconnect with exponential backoff, exit to let an external supervisor restart the host, or remain stopped with logging. Environment variables control per-service strategy selection and tuning. The implementation integrates into both services' lifecycle (on error, stream start, and shutdown) and is validated through containerized integration tests that force replication stream failures.
When the underlying logical-replication client errored (e.g. after a Postgres failover), the runs and sessions replication services logged the error and left the stream stopped. The host process kept running, the WAL backed up, and ClickHouse silently fell behind.

Both services now run a configurable recovery strategy on stream errors, defaulting to in-process reconnect with exponential backoff so a fresh self-hosted setup heals on its own:

- "reconnect" (default) re-subscribes via the existing subscribe(lastLsn) path with exponential backoff (1s -> 60s cap, unlimited attempts), which re-validates the publication, re-acquires the leader lock, and resumes from the last acknowledged LSN.
- "exit" calls process.exit after a short flush window so a host's supervisor (Docker restart=always, systemd, k8s, etc.) can replace the process.
- "log" preserves the historical behaviour.

Per-service strategy + exit knobs are env-driven via RUN_REPLICATION_ERROR_STRATEGY / SESSION_REPLICATION_ERROR_STRATEGY plus matching *_EXIT_DELAY_MS / *_EXIT_CODE. Reconnect tuning is shared across both services via REPLICATION_RECONNECT_INITIAL_DELAY_MS / _MAX_DELAY_MS / _MAX_ATTEMPTS (0 = unlimited).
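The backoff schedule described above (1s initial delay, doubling per attempt, 60s cap, 0 = unlimited attempts) can be sketched as a pure helper. This is an illustrative function, not the repo's actual implementation; the option names mirror the env vars for readability:

```typescript
// Hypothetical sketch of the reconnect backoff policy described in the
// commit message. Names mirror the env vars; this is not the repo's code.
type ReconnectOptions = {
  initialDelayMs: number; // REPLICATION_RECONNECT_INITIAL_DELAY_MS
  maxDelayMs: number;     // REPLICATION_RECONNECT_MAX_DELAY_MS
  maxAttempts: number;    // REPLICATION_RECONNECT_MAX_ATTEMPTS (0 = unlimited)
};

// attempt is 1-based; a null return means "give up".
function reconnectDelayMs(attempt: number, opts: ReconnectOptions): number | null {
  if (opts.maxAttempts > 0 && attempt > opts.maxAttempts) return null;
  // Double the delay on each attempt, capped at maxDelayMs.
  const delay = opts.initialDelayMs * 2 ** (attempt - 1);
  return Math.min(delay, opts.maxDelayMs);
}
```

With the defaults (1000ms initial, 60000ms cap), attempt 1 waits 1s, attempt 2 waits 2s, and from attempt 7 onward every retry waits the capped 60s.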
Addresses PR review feedback:
- LogicalReplicationClient.subscribe() can throw before its internal
"error" listener is wired up (notably when pg client.connect() fails
mid-failover). The reconnect strategy's catch block only logged, so
recovery silently stopped. Now also calls scheduleReconnect(err) — the
pendingReconnect guard makes it idempotent if an error event was also
emitted.
- Reject negative values for the new replication-recovery env vars and
cap exit codes at 255.
- Convert the new ReplicationErrorRecovery{Deps,} interfaces to type
aliases to match the repo's TypeScript style.
- Tighten the reconnect dep comment to drop a stale "lastAcknowledgedLsn"
reference (the wrapper-tracked resume LSN is what callers actually pass).
- Restore process.exit after service.shutdown() in the exit-strategy
test so a delayed exit timer can't terminate the test worker.
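The "pendingReconnect guard makes it idempotent" point above is the key to calling scheduleReconnect from both the "error" listener and the subscribe() catch block. A minimal sketch of that guard, under the assumption that it is a single pending-timer flag (the class and field names here are illustrative, not the repo's):

```typescript
// Minimal sketch of an idempotent reconnect scheduler: if a reconnect timer
// is already pending, further calls are no-ops, so reporting the same failure
// through two paths ("error" event + subscribe() throwing) schedules one retry.
class ReconnectGuard {
  private pendingReconnect: ReturnType<typeof setTimeout> | null = null;
  scheduled = 0; // how many timers were actually armed (for inspection)

  scheduleReconnect(delayMs: number, reconnect: () => void): void {
    if (this.pendingReconnect) return; // already scheduled: idempotent no-op
    this.scheduled++;
    this.pendingReconnect = setTimeout(() => {
      this.pendingReconnect = null; // allow the next failure to reschedule
      reconnect();
    }, delayMs);
  }
}
```

Clearing the flag inside the timer callback (rather than inside reconnect()) is what lets a failed reconnect attempt arm the next one.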
```ts
reconnect: async () => {
  await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
```
🔴 Reconnect recovery silently stops when subscribe() fails without throwing (e.g., leader election failure)
The reconnect callback in both RunsReplicationService and SessionsReplicationService awaits subscribe() and only retries if it throws. However, subscribe() can fail without throwing in several cases — most notably when leader election fails (internal-packages/replication/src/client.ts:256-258): it emits "leaderElection" with false, calls stop(), and returns normally. No "error" event is emitted and no exception is thrown.
When this happens, the reconnect callback resolves successfully, the catch block in scheduleReconnect (replicationErrorRecovery.server.ts:93-105) is never reached, no further reconnect is scheduled, and the "start" event never fires so notifyStreamStarted() is never called. The replication client stays stopped (isStopped === true) and the service is silently dead — no data flows to ClickHouse until the process is restarted.
How subscribe() can fail silently
In LogicalReplicationClient.subscribe(), leader election failure at line 256-258 returns this.stop() without throwing or emitting an error event. The _isStopped flag remains true (set by stop()) because the replicationStart event — which resets it to false at line 334 — never fires.
Suggested change:

```diff
 reconnect: async () => {
   await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
+  if (this._replicationClient.isStopped) {
+    throw new Error("Reconnect subscribe completed but replication client is stopped");
+  }
 },
```
```ts
reconnect: async () => {
  await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
```
🔴 Same silent-failure reconnect gap in SessionsReplicationService
Same issue as in RunsReplicationService: the reconnect callback at sessionsReplicationService.server.ts:246-247 awaits subscribe() but doesn't verify the stream actually started. If subscribe() returns normally but the client is stopped (e.g., leader election failed at internal-packages/replication/src/client.ts:256-258), no exception is thrown, the catch in scheduleReconnect (replicationErrorRecovery.server.ts:93-105) is never reached, and the service silently stops replicating with no further recovery attempts.
Suggested change:

```diff
 reconnect: async () => {
   await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
+  if (this._replicationClient.isStopped) {
+    throw new Error("Reconnect subscribe completed but replication client is stopped");
+  }
 },
```
Summary
When the logical-replication stream errored (most commonly after a Postgres failover), the runs and sessions replication services logged the error and left the underlying client stopped. The host process kept running, the WAL backed up, and ClickHouse silently fell behind.
Fix
Both services now run a configurable recovery strategy on stream errors, defaulting to in-process reconnect with exponential backoff so a fresh self-hosted setup heals on its own.
- `reconnect` (default) — re-subscribe with exponential backoff (1s → 60s cap, unlimited attempts). `LogicalReplicationClient.subscribe(lastLsn)` re-validates the publication, re-acquires the leader lock, and resumes from the last acknowledged LSN.
- `exit` — `process.exit(1)` after a short flush window so a host supervisor (Docker `restart=always`, systemd, k8s) can replace the process.
- `log` — preserves the old behaviour.

Per-service strategy + exit knobs are env-driven (`RUN_REPLICATION_ERROR_STRATEGY` / `SESSION_REPLICATION_ERROR_STRATEGY` + `*_EXIT_DELAY_MS`, `*_EXIT_CODE`). Reconnect tuning is shared across both services (`REPLICATION_RECONNECT_INITIAL_DELAY_MS`, `_MAX_DELAY_MS`, `_MAX_ATTEMPTS`; `MAX_ATTEMPTS=0` means unlimited).

Test plan

Integration tests cover all three strategies by simulating a failover with `pg_terminate_backend` against the WAL sender:

- `reconnect` — kill the backend, insert a new row, assert it lands in ClickHouse
- `exit` — kill the backend, assert `process.exit(1)` is called
- `log` — kill the backend, insert a new row, assert it does not land in ClickHouse

`pnpm --filter webapp test --run runsReplicationService.errorRecovery`
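Putting the knobs together, a self-hosted deployment that keeps runs on in-process reconnect but lets a supervisor restart the process on sessions-stream errors might set the following. Illustrative values only; the `SESSION_REPLICATION_EXIT_*` names assume the matching-prefix pattern described above:

```shell
# Per-service strategy: "reconnect" (default), "exit", or "log"
RUN_REPLICATION_ERROR_STRATEGY=reconnect
SESSION_REPLICATION_ERROR_STRATEGY=exit

# Exit-strategy knobs: flush window before exiting, and the exit code
# (negative values rejected, codes capped at 255 per the review fixes)
SESSION_REPLICATION_EXIT_DELAY_MS=5000
SESSION_REPLICATION_EXIT_CODE=1

# Shared reconnect tuning; MAX_ATTEMPTS=0 means retry forever
REPLICATION_RECONNECT_INITIAL_DELAY_MS=1000
REPLICATION_RECONNECT_MAX_DELAY_MS=60000
REPLICATION_RECONNECT_MAX_ATTEMPTS=0
```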