
fix(instance.controller): emit remove.instance even when logout fails (zombie cleanup)#2520

Open
inovacaoecrescimento wants to merge 2 commits into EvolutionAPI:main from inovacaoecrescimento:fix/zombie-cleanup-on-logout-fail

Conversation

@inovacaoecrescimento inovacaoecrescimento commented Apr 27, 2026

Summary

When a Baileys WebSocket dies but the in-memory `waInstances[name]` entry still exists (a "zombie" instance), `deleteInstance()` in src/api/controllers/instance.controller.ts calls `await this.logout()`, which throws "Connection Closed". The outer try/catch then propagates a BadRequestException, and `eventEmitter.emit('remove.instance')` is never reached — leaving the zombie in `waInstances` forever, fixable only by restarting the entire process.

This is the practical reason behind several long-standing reports about reconnect/sync issues:

In production, operators today have no per-instance recovery when a Baileys socket dies in a way that prevents logout() from completing. Their only escape is docker restart, which kicks every connected user off the host and forces a full QR re-scan for everyone.

Fix

Wrap the inner `await this.logout(...)` call in its own try/catch. Log the failure but proceed to the cleanup emit so `remove.instance` always fires. The in-memory entry must be purged regardless of whether logout completed cleanly — both `cleaningUp()` (DB / session / cache) and `delete this.waInstances[name]` happen inside the `remove.instance` handler in monitor.service.ts.

       if (instance.state === 'connecting' || instance.state === 'open') {
-        await this.logout({ instanceName });
+        try {
+          await this.logout({ instanceName });
+        } catch (logoutError) {
+          this.logger.warn(
+            `[ZOMBIE-CLEANUP] logout failed for "${instanceName}" (likely zombie socket): ${logoutError?.toString?.() || logoutError}. Proceeding with cleanup.`,
+          );
+        }
       }

12 lines added, 1 changed. No public API change. `DELETE /instance/:name` becomes idempotent against zombies: a caller can always recover a single instance without affecting other instances on the same host.

Why this is safe

remove.instance is the canonical, internal cleanup path. The handler in monitor.service.ts already runs cleaningUp() (deletes the Session row, sets connectionStatus='close' in Instance, rmSync of the on-disk session dir, clears Redis cache, removes provider session). All of that is independent of whether logout() completed gracefully on the WhatsApp side. If the socket is truly dead, talking to WhatsApp wasn't going to happen anyway — what we need is the local cleanup, and this fix unblocks it.
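The control flow can be illustrated with a self-contained sketch (synchronous stand-ins for brevity; only the names `logout`, `remove.instance`, and `waInstances` mirror the PR — the surrounding harness is hypothetical, not the actual controller/monitor code):

```typescript
// Minimal sketch of the fixed control flow; harness is hypothetical.
import { EventEmitter } from "node:events";

const eventEmitter = new EventEmitter();
const waInstances: Record<string, object> = { "zombie-example": {} };

// Mirrors the monitor.service.ts idea: local cleanup runs regardless
// of whether logout ever reached WhatsApp.
eventEmitter.on("remove.instance", (instanceName: string) => {
  // cleaningUp() would purge the DB row, session dir, and cache here (elided).
  delete waInstances[instanceName];
});

// Stand-in for this.logout() hitting a dead Baileys socket.
function logout(_instanceName: string): void {
  throw new Error("Connection Closed");
}

function deleteInstance(instanceName: string): string {
  try {
    logout(instanceName);
  } catch (logoutError) {
    // Before the fix, this throw escaped and the emit below never ran.
    console.warn(`logout failed for "${instanceName}": ${logoutError}`);
  }
  eventEmitter.emit("remove.instance", instanceName);
  return "SUCCESS";
}

console.log(deleteInstance("zombie-example"), "zombie-example" in waInstances);
```

The key property: the catch swallows only the logout failure, so the emit is unconditional and the in-memory entry is always purged.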

Tested

Patch deployed in production at Rigarr (Brazilian distributor, 14 active vendor instances, ~25k msgs/day) via Docker image overlay on v2.3.7:

  • Before: a single dead Baileys socket forced docker restart evo2_api (~30s downtime, all 14 vendors needed to re-scan QR).
  • After: DELETE /instance/:name returns {"status":"SUCCESS","response":{"message":"Instance deleted"}} even when the underlying socket is dead. Other vendors stay connected.

Smoke test before submitting: applied patch on 2.3.7 clone, ran npm ci && npm run build, no compile errors. Container boots cleanly. Tested DELETE on a known-zombie instance (pedro-rca340-sp, ~115h idle) — instance was correctly removed from memory (Instance "X" - REMOVED log line) and from Instance table.

Related context

We're maintaining a downstream Docker overlay (ghcr.io/inovacaoecrescimento/evolution-api-rigarr:2.3.7-zombiefix-1) until this lands upstream. We'd love to drop the overlay and go back to using the official image — happy to address review feedback or split the change differently if useful.

Refs:
- EvolutionAPI#693 (instance/restart closes the session)
- EvolutionAPI#1286 (Connection Closed in v2.2.3)
- EvolutionAPI#2026 (Sync lost after reboot)
- EvolutionAPI#2027 (Loss of synchronization on reboot)

Signed-off-by: Bruno Cavalcante Sgarbi

Summary by Sourcery

Bug Fixes:

  • Prevent zombie instances from remaining in memory by continuing deletion and cleanup even if the logout operation throws due to a dead socket.

When a Baileys WebSocket dies but the in-memory `waInstances[name]`
entry still exists (a "zombie" instance), `deleteInstance()` calls
`await this.logout()` which throws "Connection Closed". The throw
causes the outer try/catch to skip the `eventEmitter.emit('remove.instance')`
call — which is the only mechanism that purges the zombie from
`waInstances`.

Result: zombies persist in memory until the entire `evo2_api`
container is restarted, affecting ALL instances on the host (not
just the broken one). Operators have no per-instance recovery path
in v2.3.x — their only option is `docker restart`, which forces
every connected user to re-scan the QR code.

Fix: wrap the inner `logout()` call in its own try/catch. Log a
warning when it fails but continue to the cleanup emit. The
in-memory entry must be removed regardless of whether logout
completed cleanly — `remove.instance` is the canonical way to
purge a stuck instance, and DB/cache cleanup happens in the same
event handler.

This makes `DELETE /instance/:name` idempotent against zombies: a
caller can always recover a single instance without nuking the
whole host.

Refs:
- EvolutionAPI#693  (instance/restart closes the session)
- EvolutionAPI#1286 (Connection Closed in v2.2.3)
- EvolutionAPI#2026 (Sync lost after reboot)
- EvolutionAPI#2027 (Loss of synchronization on reboot)

Tested in production at Rigarr (14 instances, ~25k msgs/day) by
overlaying this patch on v2.3.7 via Docker. Before: any zombie
forced a full container restart. After: per-instance cleanup
works cleanly while other vendors stay connected.

Signed-off-by: Bruno Cavalcante Sgarbi <bcsgarbi@gmail.com>

sourcery-ai Bot commented Apr 27, 2026


Reviewer's Guide

Makes instance deletion resilient to Baileys WebSocket "zombie" instances by catching logout failures and always proceeding to emit the internal remove.instance cleanup event.

Sequence diagram for instance deletion with zombie cleanup

sequenceDiagram
  actor Client
  participant InstanceController
  participant BaileysSocket
  participant Logger
  participant EventEmitter
  participant MonitorService
  participant WaInstances

  Client->>InstanceController: DELETE /instance/:name
  InstanceController->>InstanceController: load instance by name
  InstanceController->>InstanceController: clearCacheChatwoot if enabled
  InstanceController->>InstanceController: check instance.state
  alt instance.state is connecting or open
    InstanceController->>BaileysSocket: logout(instanceName)
    BaileysSocket-->>InstanceController: Connection Closed error
    InstanceController->>InstanceController: catch logoutError
    InstanceController->>Logger: warn [ZOMBIE-CLEANUP] logout failed
  else instance.state is not connecting or open
    InstanceController->>InstanceController: skip logout
  end
  InstanceController->>EventEmitter: emit remove.instance(instanceName)
  EventEmitter->>MonitorService: remove.instance(instanceName)
  MonitorService->>MonitorService: cleaningUp()
  MonitorService->>WaInstances: delete entry for instanceName
  MonitorService->>MonitorService: finalize instance cleanup
  InstanceController-->>Client: 200 SUCCESS Instance deleted

File-Level Changes

Change: Make instance deletion idempotent and robust by catching logout failures and still triggering cleanup for zombie instances.
Details:
  • Wrap the logout call in a try/catch when deleting an instance whose state is connecting or open
  • On logout failure, log a structured warning explaining the likely zombie-socket cause and noting that cleanup will still proceed
  • Preserve the existing control flow so that remove.instance continues to be emitted even if logout throws, ensuring in-memory and related resources are cleaned up
Files: src/api/controllers/instance.controller.ts


sourcery-ai Bot left a comment


Hey - I've left some high level feedback:

  • The in-line comment and log message referencing RIGARR PATCH/[ZOMBIE-CLEANUP] are quite vendor-specific; consider rephrasing them in neutral, upstream-focused terms so the intent is clear without tying the behavior to a particular deployment.
  • In the catch block for logout, it may be helpful to log the full error object or stack (e.g., this.logger.warn('... ', logoutError);) rather than only toString(), to preserve diagnostic details if this path is hit unexpectedly.

Per EvolutionAPI#2520 review:

1. Drop vendor-specific markers in code comment and log message
   (was '[ZOMBIE-CLEANUP]' and 'RIGARR PATCH'). Comment now describes
   the bug in upstream-friendly terms.

2. Pass the full error object to logger.warn instead of toString(),
   following the existing convention in monitor.service.ts
   ('no.connection' handler) where structured object logging is used
   to preserve diagnostic detail.

No behavior change.
@inovacaoecrescimento inovacaoecrescimento (Author) commented

Thanks for the review @sourcery-ai. Pushed a follow-up commit (93b9081a) addressing both points:

  1. Vendor-neutral language — dropped the [ZOMBIE-CLEANUP] prefix and the RIGARR PATCH marker from the comment. The block now just describes the bug in upstream-friendly terms ("logout can throw 'Connection Closed' when the underlying Baileys socket is already dead but waInstances[name] still exists").

  2. Preserve diagnostic detail — switched to structured object logging (this.logger.warn({ message, instanceName, error })) following the same pattern used in monitor.service.ts 'no.connection' handler. The full error object reaches the logger now, so stack/cause aren't lost.
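The structured-logging shape from point 2 can be sketched with a stand-in logger (the logger object and `handleLogoutFailure` are hypothetical; only the `{ message, instanceName, error }` payload shape mirrors the follow-up commit):

```typescript
// Hypothetical stand-in logger; only the payload shape mirrors the commit.
type LogPayload = { message: string; instanceName: string; error: unknown };

const warnings: LogPayload[] = [];
const logger = { warn: (payload: LogPayload) => warnings.push(payload) };

// Pass the full error object so stack/cause reach the logger,
// instead of flattening it with toString().
function handleLogoutFailure(instanceName: string, error: unknown): void {
  logger.warn({
    message: "logout failed (likely dead socket); proceeding with cleanup",
    instanceName,
    error,
  });
}

handleLogoutFailure("example-instance", new Error("Connection Closed"));
console.log(warnings[0].error instanceof Error);
```

Keeping the error as an object (rather than a pre-formatted string) lets the logging backend decide how to serialize the stack and cause.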

Diff: 93b9081a

Happy to squash to one commit before merge if that's the project preference.
