Performance Testing Contour / Envoy
Cluster Specs
- Kubernetes
  - Version: v1.12.6
  - Nodes:
    - 5 Worker Nodes
    - 2 CPUs Per Node
    - 8 GB RAM Per Node
    - 10 GB Network
- Contour
  - Single Instance
- Envoy
  - 4 instances of Envoy running in a DaemonSet (see the sketch below)
  - Each instance of Envoy runs with HostNetwork
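For reference, the essential part of such a DaemonSet spec is shown below. This is a minimal sketch, not the manifest used in the test; the name, namespace, and image tag are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy
  namespace: heptio-contour
spec:
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      # Bind Envoy directly to each node's network interfaces.
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: envoy
          image: docker.io/envoyproxy/envoy:v1.11.1
          # Ports, command, and bootstrap configuration omitted for brevity.
```

With hostNetwork enabled, each Envoy listens on its node's own IP, so client traffic reaches the proxy without an extra kube-proxy hop.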
Cluster Network Bandwidth
Having a good understanding of the available bandwidth is key when analyzing performance: it gives you a sense of how many requests per second you can expect to push through the network you are working with.

Use iperf3 to measure the bandwidth available between two of the Kubernetes nodes. The following deploys an iperf3 server on one node and an iperf3 client on another:
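This is a minimal sketch using `kubectl run`, assuming two schedulable nodes named `node-1` and `node-2` and the public `networkstatic/iperf3` image (the node names and image are assumptions, not from the original test):

```sh
# Start an iperf3 server pinned to one node.
kubectl run iperf3-server --image=networkstatic/iperf3 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-1"}}' -- -s

# Once the server pod has an IP, run a 60-second client from a second node.
SERVER_IP=$(kubectl get pod iperf3-server -o jsonpath='{.status.podIP}')
kubectl run iperf3-client --image=networkstatic/iperf3 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-2"}}' \
  --attach --rm -i -- -c "$SERVER_IP" -t 60
```

A 60-second run between two nodes in the test cluster reported: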
```
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  34.7 GBytes  4.96 Gbits/sec  479    sender
[  4]   0.00-60.00  sec  34.7 GBytes  4.96 Gbits/sec         receiver
```
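To turn this into a rough request-rate ceiling: 4.96 Gbits/sec is about 620 MBytes/sec, so with an assumed average response size of 20 KB (an illustrative figure, not from the test) the network itself tops out around 31,000 responses per second, regardless of how Contour or Envoy perform.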
Memory / CPU usage
Verify the memory and CPU usage with varying numbers of Services, IngressRoute resources, and levels of traffic into the cluster.
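The original test doesn't include the tooling used to create these objects; a loop like the following is one way to generate N placeholder Services and IngressRoutes (the `contour.heptio.com/v1beta1` group is the IngressRoute CRD Contour shipped at the time; the names and fqdn pattern are illustrative):

```sh
# Create N placeholder Services and IngressRoutes to populate Contour's cache.
N=5000
for i in $(seq 1 "$N"); do
  kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: svc-$i
spec:
  ports:
    - port: 80
---
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: route-$i
spec:
  virtualhost:
    fqdn: app-$i.example.com
  routes:
    - match: /
      services:
        - name: svc-$i
          port: 80
EOF
done
```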
In the table below, #Svc is the number of Services, #Ing the number of IngressRoutes, RPS the request rate, and CC the number of concurrent connections.

| #Svc | #Ing | RPS  | CC  | Contour Memory (MB) | Contour CPU% / Core | Envoy Memory (MB) | Envoy CPU% / Core |
|------|------|------|-----|---------------------|---------------------|-------------------|-------------------|
| 0    | 0    | 0    | 0   | 10                  | 0%                  | 15                | 0%                |
| 5k   | 0    | 0    | 0   | 46                  | 2%                  | 15                | 0%                |
| 10k  | 0    | 0    | 0   | 77                  | 3%                  | 205               | 2%                |
| 0    | 5k   | 0    | 0   | 36                  | 1%                  | 230               | 2%                |
| 0    | 10k  | 0    | 0   | 63                  | 1%                  | 10                | 1%                |
| 5k   | 5k   | 0    | 0   | 244                 | 1%                  | 221               | 1%                |
| 10k  | 10k  | 0    | 0   | 2600                | 6%                  | 430               | 4%                |
| 0    | 0    | 30k  | 600 | 8                   | 1%                  | 17                | 3%                |
| 0    | 0    | 100k | 10k | 10                  | 1%                  | 118               | 14%               |
| 0    | 0    | 200k | 20k | 9                   | 1%                  | 191               | 31%               |
| 0    | 0    | 300k | 30k | 10                  | 1%                  | 225               | 40%               |
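The load generator and measurement method aren't specified in the original test. As one way to reproduce a traffic row, a fixed-rate tool such as wrk2 can hold a target RPS at a given connection count, and `kubectl top` (backed by metrics-server) can read the resulting usage; both tool choices are assumptions:

```sh
# Drive ~30k requests/sec over 600 connections for 60 seconds; wrk2's -R flag
# fixes the request rate. The hostname is illustrative.
wrk -t8 -c600 -d60s -R30000 http://app-1.example.com/

# Read memory and CPU for the Contour and Envoy pods; the namespace depends
# on how Contour was deployed.
kubectl top pod -n heptio-contour
```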