Envoy container stuck in unready/draining state
It’s possible for the Envoy containers to become stuck in an unready/draining state. This is an unintended side effect of the shutdown-manager sidecar container being restarted by the kubelet. For more details on exactly how this happens, see this issue.
If you observe Envoy containers in this state, you should `kubectl delete` them to allow new Pods to be created to replace them.
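For example, assuming the default example deployment, where Envoy runs as a DaemonSet in the `projectcontour` namespace with the label `app: envoy` (adjust the namespace and label selector to match your installation), stuck Pods can be found and deleted with something like:

```shell
# List the Envoy pods; stuck pods typically show fewer READY containers than expected.
kubectl -n projectcontour get pods -l app=envoy

# Delete a stuck pod; the DaemonSet controller creates a replacement.
kubectl -n projectcontour delete pod <envoy-pod-name>
```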
To make this issue less likely to occur, you should:
- ensure you have resource requests on all your containers (see the sketch after this list)
- ensure you do not have a liveness probe on the shutdown-manager sidecar container in the envoy daemonset (this was removed from the example YAML in Contour 1.24.0).
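As a sketch of the first point, resource requests on the containers in the Envoy DaemonSet might look like the following; the CPU and memory values are illustrative placeholders, not recommendations:

```yaml
containers:
- name: envoy
  # ... image, args, ports, probes, etc.
  resources:
    requests:
      cpu: 250m      # illustrative value; size for your traffic
      memory: 256Mi
- name: shutdown-manager
  # ... image, command, etc.
  resources:
    requests:
      cpu: 25m       # illustrative value
      memory: 50Mi
```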
If the above steps are not sufficient to prevent the issue, you may also add a liveness probe to the envoy container itself, like the following:
livenessProbe:
  httpGet:
    path: /ready
    port: 8002
  initialDelaySeconds: 15
  periodSeconds: 5
  failureThreshold: 6
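In the example Envoy DaemonSet, this probe sits alongside the existing readiness probe on the `envoy` container; a rough sketch of where it goes (surrounding fields omitted, container name assumed to follow the example YAML):

```yaml
spec:
  template:
    spec:
      containers:
      - name: envoy
        # ... image, args, ports, readinessProbe, etc.
        livenessProbe:
          httpGet:
            path: /ready
            port: 8002
          initialDelaySeconds: 15
          periodSeconds: 5
          failureThreshold: 6
```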
With this probe in place, the kubelet will restart the envoy container if it does get stuck in this state, returning it to normal operation and load balancing traffic again. Note that in this case a graceful drain of connections may or may not occur, depending on the exact sequence of operations that preceded the envoy container failing the liveness probe.