Deploying Contour on AWS with NLB
This is an advanced deployment guide to configure Contour on AWS with the Network Load Balancer (NLB). This configuration has several advantages:
- NLBs are often cheaper. This is especially true for development. Idle LBs do not cost money.
- There are no extra network hops. Traffic goes to the NLB, to the node hosting Contour, and then to the target pod.
- Source IP addresses are retained. Envoy (running as part of Contour) sees the native source IP address and records it in the `X-Forwarded-For` header.
Moving parts
- We run Envoy as a DaemonSet across the cluster and Contour as a Deployment
- The Envoy pod runs on host ports 80 and 443 on the node
- Host networking means that traffic hits Envoy without transitioning through any other fancy networking hops
- Contour also binds to port 8001 for Envoy->Contour config traffic.
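To make the port story above concrete, here is a minimal sketch of how the host port binding can look in the Envoy DaemonSet's pod template. The container port numbers and names shown are assumptions for illustration and vary between Contour versions; the manifests in `examples/contour` are the source of truth:

```yaml
# Sketch only: fragment of an Envoy DaemonSet pod template showing how
# node ports 80/443 are bound to the Envoy container. Port numbers and
# names are illustrative, not copied verbatim from examples/contour.
spec:
  template:
    spec:
      containers:
        - name: envoy
          ports:
            - name: http
              containerPort: 8080   # Envoy's HTTP listener
              hostPort: 80          # exposed on the node
              protocol: TCP
            - name: https
              containerPort: 8443   # Envoy's HTTPS listener
              hostPort: 443         # exposed on the node
              protocol: TCP
```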
Deploying Contour
- Clone the Contour repository and cd into the repo
- Edit the Envoy service (`02-service-envoy.yaml`) in the `examples/contour` directory (a sketch of the edited Service appears at the end of this section):
  - Remove the existing annotation: `service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp`
  - Add the following annotation: `service.beta.kubernetes.io/aws-load-balancer-type: nlb`
- Run `kubectl apply -f examples/contour`

This creates the `projectcontour` Namespace along with a ServiceAccount, RBAC rules, a Contour Deployment and an Envoy DaemonSet. It also creates the NLB-based load balancer for you.
You can get the address of your NLB via:
```sh
$ kubectl get service envoy --namespace=projectcontour -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```
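For reference, the edit from the earlier step leaves the Envoy Service looking roughly like the sketch below. Only the `aws-load-balancer-type` annotation is the point of this guide; the other fields are illustrative and should be taken from the `examples/contour` manifests for your Contour version:

```yaml
# Sketch of the edited Envoy Service. Port, selector, and traffic-policy
# details follow the examples/contour manifests but may differ by version.
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: projectcontour
  annotations:
    # The only change required by this guide: request an NLB instead of
    # the default Classic ELB. The backend-protocol annotation is removed.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: envoy
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
```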
Test
You can now test your NLB.
- Install a workload (see the kuard example in the main deployment guide).
- Look up the address for your NLB in the AWS console and enter it in your browser.
- Notice that Envoy fills out the `X-Forwarded-For` header, because it was the first to see the traffic directly from the browser.
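If you prefer the command line to the browser, the same check can be scripted. This sketch assumes a workload that echoes request headers back in its response body is routed at `/headers` (for example httpbin; the kuard example shows request details in its web UI instead), which goes beyond what the guide itself deploys:

```sh
# Look up the hostname of the NLB created for the envoy Service.
NLB_HOSTNAME=$(kubectl get service envoy --namespace=projectcontour \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Assumes an echo-style workload is exposed at /headers; the echoed
# headers should include an X-Forwarded-For entry with your client IP.
curl -s "http://${NLB_HOSTNAME}/headers"
```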