Linkerd proxy inbound listeners currently use a fixed TCP accept backlog
(observed as 128 via `ss -ltnp`) that operators cannot configure.
In high-traffic environments, especially during Kubernetes rollouts
where many sidecars simultaneously establish new outbound connections to
newly-ready pods, this fixed backlog can become a limiting factor. When
a connection burst exceeds the proxy’s accept queue capacity, incoming
connections are temporarily dropped or delayed at the TCP level, leading
to short-lived connection failures such as:
```
{"timestamp":"2025-12-12T19:55:11.333411Z","level":"WARN","fields":{"message":"Failed to connect","error":"connect timed out after 1s"},"target":"linkerd_reconnect","threadId":"ThreadId(1)"}
```
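The backlog described above is the value a server passes to `listen(2)`, which caps the kernel's accept queue. A minimal Python sketch illustrates the mechanism; the `listen_backlog` helper and its env-var fallback logic are illustrative only, not the proxy's actual Rust implementation:

```python
import os
import socket

def listen_backlog(env_var: str, default: int = 128) -> int:
    """Hypothetical helper: read a backlog override from the
    environment, falling back to the fixed default of 128."""
    raw = os.environ.get(env_var)
    if raw is None:
        return default
    try:
        value = int(raw)
    except ValueError:
        return default
    return value if value > 0 else default

# Resolve the inbound override (falls back to 128 when unset).
backlog = listen_backlog("LINKERD2_PROXY_INBOUND_TCP_LISTEN_BACKLOG")

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
# The kernel queues at most `backlog` fully-established connections
# awaiting accept(); a burst beyond that is dropped or delayed,
# which clients observe as connect timeouts like the log line above.
srv.listen(backlog)
srv.close()
```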
Because the proxy backlog is not configurable or documented, operators
have no direct way to tune Linkerd for services that experience high
fan-in or connection storms (for example during rollouts, autoscaling
events, or traffic rebalancing).
This commit introduces two new environment variables for tuning the
listener backlog:
- `LINKERD2_PROXY_INBOUND_TCP_LISTEN_BACKLOG`
- `LINKERD2_PROXY_OUTBOUND_TCP_LISTEN_BACKLOG`
These can be set through the `proxy.additionalEnv` field in the
Linkerd Helm chart.
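A minimal Helm values sketch, assuming `proxy.additionalEnv` accepts a list of Kubernetes `EnvVar` entries; the backlog size of 4096 is an illustrative choice, not a recommended default:

```yaml
# values.yaml (illustrative backlog sizes)
proxy:
  additionalEnv:
    - name: LINKERD2_PROXY_INBOUND_TCP_LISTEN_BACKLOG
      value: "4096"
    - name: LINKERD2_PROXY_OUTBOUND_TCP_LISTEN_BACKLOG
      value: "4096"
```

Note that the effective queue size is also bounded by the node's `net.core.somaxconn` sysctl, so raising the proxy backlog alone may not be sufficient.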
Signed-off-by: Aurel Canciu <aurel.canciu@nexhealth.com>