
fix(webapp): auto-recover replication services after stream errors #3613

Open
ericallam wants to merge 2 commits into main from fix/replication-auto-recover-on-stream-error

Conversation

@ericallam
Member

Summary

When the logical-replication stream errored (most commonly after a Postgres failover), the runs and sessions replication services logged the error and left the underlying client stopped. The host process kept running, the WAL backed up, and ClickHouse silently fell behind.

Fix

Both services now run a configurable recovery strategy on stream errors, defaulting to in-process reconnect with exponential backoff so a fresh self-hosted setup heals on its own.

  • reconnect (default) — re-subscribe with exponential backoff (1s → 60s cap, unlimited attempts). LogicalReplicationClient.subscribe(lastLsn) re-validates the publication, re-acquires the leader lock, and resumes from the last acknowledged LSN.
  • exit — process.exit(1) after a short flush window so a host supervisor (Docker restart=always, systemd, k8s) can replace the process.
  • log — preserves the old behaviour.
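
The backoff schedule described for the reconnect strategy can be sketched as follows. This is a hedged illustration under the defaults stated above (1s initial delay, 60s cap); the function name and 0-based attempt counter are assumptions, not the PR's actual code:

```typescript
// Exponential backoff: the delay doubles on each failed attempt, starting
// at the configured initial delay and capped at the configured maximum.
function reconnectDelayMs(
  attempt: number, // 0-based count of consecutive failed reconnect attempts
  initialMs = 1_000,
  maxMs = 60_000
): number {
  return Math.min(initialMs * 2 ** attempt, maxMs);
}
```

With the defaults this yields 1s, 2s, 4s, 8s, … until the 60s cap, after which every retry waits 60s.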

Per-service strategy + exit knobs are env-driven (RUN_REPLICATION_ERROR_STRATEGY / SESSION_REPLICATION_ERROR_STRATEGY + *_EXIT_DELAY_MS, *_EXIT_CODE). Reconnect tuning is shared across both services (REPLICATION_RECONNECT_INITIAL_DELAY_MS, _MAX_DELAY_MS, _MAX_ATTEMPTS; MAX_ATTEMPTS=0 means unlimited).
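
A minimal sketch of how such env knobs could be parsed and validated. The helper names are hypothetical; the validation rules (non-negative integers, exit codes capped at 255, 0 meaning unlimited reconnect attempts) follow the description above and the review-feedback commit:

```typescript
type ReplicationErrorStrategy = "reconnect" | "exit" | "log";

// Unrecognized or missing values fall back to the default strategy.
function parseStrategy(raw: string | undefined): ReplicationErrorStrategy {
  if (raw === "exit" || raw === "log" || raw === "reconnect") return raw;
  return "reconnect";
}

// Reject negative or non-integer values outright.
function parseNonNegativeInt(raw: string | undefined, fallback: number): number {
  if (raw === undefined || raw === "") return fallback;
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 0) {
    throw new Error(`expected a non-negative integer, got "${raw}"`);
  }
  return n;
}

// POSIX exit statuses are a single byte, so cap at 255.
function parseExitCode(raw: string | undefined, fallback = 1): number {
  return Math.min(parseNonNegativeInt(raw, fallback), 255);
}
```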

Test plan

Integration tests cover all three strategies by simulating a failover with pg_terminate_backend against the WAL sender:

  • reconnect — kill the backend, insert a new row, assert it lands in ClickHouse
  • exit — kill the backend, assert process.exit(1) is called
  • log — kill the backend, insert a new row, assert it does not land in ClickHouse
    pnpm --filter webapp test --run runsReplicationService.errorRecovery
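
The exit-strategy assertion can be approximated by injecting the exit function rather than calling process.exit directly, so the test records the exit code instead of terminating the worker. This is a hedged stand-in for the actual test code; makeExitHandler and ExitFn are hypothetical names:

```typescript
type ExitFn = (code: number) => void;

// Build a stream-error handler that "exits" via an injected function.
function makeExitHandler(exitCode: number, exit: ExitFn): (err: unknown) => void {
  return (_err: unknown) => exit(exitCode);
}

// In production the handler would be wired with process.exit; in a test we
// pass a recording stub instead.
let recordedCode: number | undefined;
const onStreamError = makeExitHandler(1, (code) => {
  recordedCode = code;
});
onStreamError(new Error("replication stream error"));
```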

@changeset-bot

changeset-bot Bot commented May 13, 2026

⚠️ No Changeset found

Latest commit: 969dbdb

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR

@coderabbitai
Contributor

coderabbitai Bot commented May 13, 2026


Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: ca6ebd6b-ed33-48ad-8732-f33ab77c8b5f

📥 Commits

Reviewing files that changed from the base of the PR and between bc57072 and 969dbdb.

📒 Files selected for processing (8)
  • .server-changes/replication-error-recovery.md
  • apps/webapp/app/env.server.ts
  • apps/webapp/app/services/replicationErrorRecovery.server.ts
  • apps/webapp/app/services/runsReplicationInstance.server.ts
  • apps/webapp/app/services/runsReplicationService.server.ts
  • apps/webapp/app/services/sessionsReplicationInstance.server.ts
  • apps/webapp/app/services/sessionsReplicationService.server.ts
  • apps/webapp/test/runsReplicationService.errorRecovery.test.ts
✅ Files skipped from review due to trivial changes (1)
  • .server-changes/replication-error-recovery.md
🚧 Files skipped from review as they are similar to previous changes (7)
  • apps/webapp/app/services/sessionsReplicationInstance.server.ts
  • apps/webapp/app/services/runsReplicationService.server.ts
  • apps/webapp/test/runsReplicationService.errorRecovery.test.ts
  • apps/webapp/app/services/runsReplicationInstance.server.ts
  • apps/webapp/app/services/sessionsReplicationService.server.ts
  • apps/webapp/app/env.server.ts
  • apps/webapp/app/services/replicationErrorRecovery.server.ts
📜 Recent review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (1, 8)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (7, 8)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (5, 8)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (3, 8)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (4, 8)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (8, 8)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (2, 8)
  • GitHub Check: webapp / 🧪 Unit Tests: Webapp (6, 8)
  • GitHub Check: e2e-webapp / 🧪 E2E Tests: Webapp
  • GitHub Check: typecheck / typecheck
  • GitHub Check: audit
  • GitHub Check: Analyze (javascript-typescript)

Walkthrough

This PR adds configurable error recovery for the runs and sessions replication services. When a logical replication stream fails (e.g., during a database failover), the system can reconnect with exponential backoff, exit to let an external supervisor restart the host, or remain stopped with logging. Environment variables control per-service strategy selection and tuning. The implementation integrates into both services' lifecycle (on error, stream start, and shutdown) and is validated through containerized integration tests that force replication stream failures.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning — Docstring coverage is 9.09%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)
  • Title check — ✅ Passed — The PR title 'fix(webapp): auto-recover replication services after stream errors' clearly and concisely summarizes the main change: adding automatic error recovery to replication services.
  • Description check — ✅ Passed — The PR description provides a comprehensive summary, a detailed explanation of the fix with three strategies, environment variable configuration details, and test plan coverage.
  • Linked Issues check — ✅ Passed — Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed — Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
  • Create stacked PR
  • Commit on current branch
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch fix/replication-auto-recover-on-stream-error

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


ericallam force-pushed the fix/replication-auto-recover-on-stream-error branch from 6f8cc24 to 5ba46ff on May 14, 2026 at 09:33

ericallam force-pushed the fix/replication-auto-recover-on-stream-error branch from 5ba46ff to bc57072 on May 14, 2026 at 16:57
ericallam added 2 commits May 15, 2026 08:30
When the underlying logical-replication client errored (e.g. after a
Postgres failover), the runs and sessions replication services logged
the error and left the stream stopped. The host process kept running,
the WAL backed up, and ClickHouse silently fell behind.

Both services now run a configurable recovery strategy on stream errors,
defaulting to in-process reconnect with exponential backoff so a fresh
self-hosted setup heals on its own:

- "reconnect" (default) re-subscribes via the existing subscribe(lastLsn)
  path with exponential backoff (1s -> 60s cap, unlimited attempts), which
  re-validates the publication, re-acquires the leader lock, and resumes
  from the last acknowledged LSN.
- "exit" calls process.exit after a short flush window so a host's
  supervisor (Docker restart=always, systemd, k8s, etc.) can replace the
  process.
- "log" preserves the historical behaviour.

Per-service strategy + exit knobs are env-driven via
RUN_REPLICATION_ERROR_STRATEGY / SESSION_REPLICATION_ERROR_STRATEGY plus
matching *_EXIT_DELAY_MS / *_EXIT_CODE. Reconnect tuning is shared
across both services via REPLICATION_RECONNECT_INITIAL_DELAY_MS /
_MAX_DELAY_MS / _MAX_ATTEMPTS (0 = unlimited).
Addresses PR review feedback:

- LogicalReplicationClient.subscribe() can throw before its internal
  "error" listener is wired up (notably when pg client.connect() fails
  mid-failover). The reconnect strategy's catch block only logged, so
  recovery silently stopped. Now also calls scheduleReconnect(err) — the
  pendingReconnect guard makes it idempotent if an error event was also
  emitted.
- Reject negative values for the new replication-recovery env vars and
  cap exit codes at 255.
- Convert the new ReplicationErrorRecovery{Deps,} interfaces to type
  aliases to match the repo's TypeScript style.
- Tighten the reconnect dep comment to drop a stale "lastAcknowledgedLsn"
  reference (the wrapper-tracked resume LSN is what callers actually pass).
- Restore process.exit after service.shutdown() in the exit-strategy
  test so a delayed exit timer can't terminate the test worker.
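
The pendingReconnect guard described above can be sketched as follows. This is a hypothetical stand-in, not the PR's actual code: both the subscribe() catch block and an emitted "error" event may request recovery, but only one reconnect timer is ever armed at a time:

```typescript
class ReconnectScheduler {
  private pendingReconnect = false;
  private attempts = 0;

  constructor(
    private readonly reconnect: () => Promise<void>,
    private readonly delayMs: (attempt: number) => number
  ) {}

  // Returns true if a reconnect was armed, false if one was already pending,
  // which makes duplicate requests (catch block + "error" event) idempotent.
  scheduleReconnect(): boolean {
    if (this.pendingReconnect) return false;
    this.pendingReconnect = true;
    setTimeout(() => {
      this.pendingReconnect = false;
      this.reconnect()
        .then(() => { this.attempts = 0; })  // success resets the backoff
        .catch(() => {
          this.attempts += 1;
          this.scheduleReconnect();          // failure re-arms with a longer delay
        });
    }, this.delayMs(this.attempts));
    return true;
  }
}

// Two back-to-back requests arm only one timer.
const scheduler = new ReconnectScheduler(async () => {}, () => 10);
const first = scheduler.scheduleReconnect();
const second = scheduler.scheduleReconnect();
```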
ericallam force-pushed the fix/replication-auto-recover-on-stream-error branch from bc57072 to 969dbdb on May 15, 2026 at 07:31
Contributor

devin-ai-integration Bot left a comment


Devin Review found 2 new potential issues.

View 11 additional findings in Devin Review.


Comment on lines +265 to +266:

    reconnect: async () => {
      await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
Contributor


🔴 Reconnect recovery silently stops when subscribe() fails without throwing (e.g., leader election failure)

The reconnect callback in both RunsReplicationService and SessionsReplicationService awaits subscribe() and only retries if it throws. However, subscribe() can fail without throwing in several cases — most notably when leader election fails (internal-packages/replication/src/client.ts:256-258): it emits a "leaderElection" event with false, calls stop(), and returns normally. No "error" event is emitted and no exception is thrown.

When this happens, the reconnect callback resolves successfully, the catch block in scheduleReconnect (replicationErrorRecovery.server.ts:93-105) is never reached, no further reconnect is scheduled, and the "start" event never fires so notifyStreamStarted() is never called. The replication client stays stopped (isStopped === true) and the service is silently dead — no data flows to ClickHouse until the process is restarted.

How subscribe() can fail silently

In LogicalReplicationClient.subscribe(), leader election failure at line 256-258 returns this.stop() without throwing or emitting an error event. The _isStopped flag remains true (set by stop()) because the replicationStart event — which resets it to false at line 334 — never fires.

Suggested change — before:

    reconnect: async () => {
      await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);

After:

    reconnect: async () => {
      await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
      if (this._replicationClient.isStopped) {
        throw new Error("Reconnect subscribe completed but replication client is stopped");
      }
    },


Comment on lines +246 to +247:

    reconnect: async () => {
      await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
Contributor


🔴 Same silent-failure reconnect gap in SessionsReplicationService

Same issue as in RunsReplicationService: the reconnect callback at sessionsReplicationService.server.ts:246-247 awaits subscribe() but doesn't verify the stream actually started. If subscribe() returns normally but the client is stopped (e.g., leader election failed at internal-packages/replication/src/client.ts:256-258), no exception is thrown, the catch in scheduleReconnect (replicationErrorRecovery.server.ts:93-105) is never reached, and the service silently stops replicating with no further recovery attempts.

Suggested change — before:

    reconnect: async () => {
      await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);

After:

    reconnect: async () => {
      await this._replicationClient.subscribe(this._latestCommitEndLsn ?? undefined);
      if (this._replicationClient.isStopped) {
        throw new Error("Reconnect subscribe completed but replication client is stopped");
      }
    },

