Deployment Options
The Getting Started guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of Type: LoadBalancer.
If you don’t have a cluster with that capability, see the Running without a Kubernetes LoadBalancer section.
Installation
Contour requires a secret containing TLS certificates that are used to secure the gRPC communication between Contour and Envoy.
This secret can be auto-generated by the Contour certgen job or provided by an administrator.
Traffic must be forwarded to Envoy, typically via a Service of type: LoadBalancer.
All other requirements, such as RBAC permissions and configuration details, are provided or have good defaults for most installations.
Setting resource requests and limits
It is recommended that resource requests and limits be set on all Contour and Envoy containers. The example YAML manifests used in the Getting Started guide do not include these, because the appropriate values can vary widely from user to user. The table below summarizes the Contour and Envoy containers, and provides some reasonable resource requests to start with (note that these should be adjusted based on observed usage and expected load):
| Workload | Container | Request (mem) | Request (cpu) |
|---|---|---|---|
| deployment/contour | contour | 128Mi | 250m |
| daemonset/envoy | envoy | 256Mi | 500m |
| daemonset/envoy | shutdown-manager | 50Mi | 25m |
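As a sketch of how these might be applied, the resources stanza for the contour container in deployment/contour could start from the values above; the limits shown are illustrative assumptions and should be tuned based on observed usage:

# resources for the contour container (sketch; limits are illustrative)
resources:
  requests:
    cpu: 250m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi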
Envoy as DaemonSet
The recommended installation is for Contour to run as a Deployment and Envoy to run as a DaemonSet.
The example DaemonSet places a single instance of Envoy on each node in the cluster and attaches to hostPorts on each node.
This model allows for simple scaling of Envoy instances as well as ensuring even distribution of instances across the cluster.
The example DaemonSet manifest or the Contour Gateway Provisioner will create an installation based on these recommendations.
Note: If the cluster is scaled down, connections can be lost, since Kubernetes DaemonSets do not follow proper preStop hooks.
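For orientation, a trimmed sketch of the relevant parts of such a DaemonSet follows. The port numbers, labels, and image tag are illustrative assumptions; the example manifest is the authoritative definition.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy
  namespace: projectcontour
spec:
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      containers:
      - name: envoy
        image: docker.io/envoyproxy/envoy:v1.27.0   # illustrative version tag
        # args, probes, volumes, and the shutdown-manager sidecar omitted for brevity
        ports:
        - name: http
          containerPort: 8080
          hostPort: 80    # one Envoy per node, bound to port 80 on the node
        - name: https
          containerPort: 8443
          hostPort: 443   # and to port 443 on the node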
Envoy as Deployment
An alternative Envoy deployment model is a Kubernetes Deployment with podAntiAffinity configured, which attempts to mirror the DaemonSet deployment model.
A benefit of this model compared to the DaemonSet version is that when a node is removed from the cluster, the proper shutdown events are available, so connections can be cleanly drained from Envoy before it terminates.
The example deployment manifest will create an installation based on these recommendations.
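A minimal sketch of the podAntiAffinity stanza for this model is shown below; the label selector is an assumption and must match the labels on the Envoy pods, and the example deployment manifest remains the authoritative reference.

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: kubernetes.io/hostname   # spread Envoy pods across nodes
        labelSelector:
          matchLabels:
            app: envoy                        # assumed Envoy pod label

Using preferredDuringSchedulingIgnoredDuringExecution rather than the required variant keeps pods schedulable even when there are more replicas than nodes.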
Testing your installation
Get your hostname or IP address
To retrieve the IP address or DNS name assigned to your Contour deployment, run:
$ kubectl get -n projectcontour service envoy -o wide
On AWS, for example, the response looks like:
NAME    CLUSTER-IP     EXTERNAL-IP                                                                     PORT(S)        AGE   SELECTOR
envoy   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h    app=envoy
Depending on your cloud provider, the EXTERNAL-IP value is an IP address, or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.
Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections. See the instructions for enabling the PROXY protocol.
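To capture this value for later use, a jsonpath query along the following lines works; use .ip instead of .hostname on providers that assign an IP address rather than a DNS name:
$ CONTOUR_IP=$(kubectl get -n projectcontour service envoy -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')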
Minikube
On Minikube, to get the IP address of the Contour service run:
$ minikube service -n projectcontour envoy --url
The response is always an IP address, for example http://192.168.99.100:30588. This is used as CONTOUR_IP in the rest of the documentation.
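For example, to capture that value in a variable for the commands below (the first URL printed corresponds to the HTTP port):
$ CONTOUR_IP=$(minikube service -n projectcontour envoy --url | head -n 1)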
kind
When creating the cluster on Kind, pass a custom configuration to allow Kind to expose port 80/443 to your local host:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
Then run the create cluster command passing the config file as a parameter.
This file is in the examples/kind directory:
$ kind create cluster --config examples/kind/kind-expose-port.yaml
Then, your CONTOUR_IP (as used below) will just be localhost:80.
Note: We’ve created a public DNS record (local.projectcontour.io) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster.
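For example, once the kuard example below is deployed (its Ingress matches all virtual hosts), the application can be reached from your host machine with:
$ curl http://local.projectcontour.io/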
Test with Ingress
The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, kuard.
To test your Contour deployment, deploy kuard with the following command:
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
Then monitor the progress of the deployment with:
$ kubectl get po,svc,ing -l app=kuard
You should see something like:
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
… showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (*).
In your browser, navigate to the IP or DNS address of the Contour Service to interact with the demo application.
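Alternatively, from a terminal (the example Ingress binds all virtual hosts, so no Host header is needed):
$ curl ${CONTOUR_IP}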
Test with HTTPProxy
To test your Contour deployment with HTTPProxy, run the following command:
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
Then monitor the progress of the deployment with:
$ kubectl get po,svc,httpproxy -l app=kuard
You should see something like:
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                FQDN          TLS SECRET                  STATUS    STATUS DESCRIPTION
httpproxy.projectcontour.io/kuard   kuard.local   <SECRET NAME IF TLS USED>   valid     Valid HTTPProxy
… showing that there are three Pods, one Service, and one HTTPProxy.
In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
Running without a Kubernetes LoadBalancer
If you can’t or don’t want to use a Service of type: LoadBalancer there are other ways to run Contour.
NodePort Service
If your cluster doesn’t have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the 02-service-envoy.yaml file and set type to NodePort.
This will have every node in your cluster listen on the resultant port and forward traffic to Contour.
That port can be discovered by taking the second number listed in the PORT column when listing the service, for example 30274 in 80:30274/TCP.
Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.
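For example, assuming the HTTP port of the Envoy Service is named http (as in the example manifest), the assigned NodePort can be read with:
$ kubectl get -n projectcontour service envoy -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'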
Host Networking
You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour’s examples utilize this model in the /examples directory.
To configure, set: hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet on your Envoy pod definition.
Next, pass --envoy-service-http-port=80 --envoy-service-https-port=443 to the contour serve command, which instructs Envoy to listen directly on ports 80/443 on each host that it is running on.
This is best paired with a DaemonSet (perhaps paired with Node affinity) to ensure that a single instance of Envoy runs on each Node.
See the AWS NLB tutorial as an example.
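A rough sketch of the relevant piece of the Envoy DaemonSet pod spec under this model follows; everything else (args, probes, volumes) is omitted, and the AWS NLB tutorial plus the example manifests remain the authoritative reference.

# Pod spec fragment for the Envoy DaemonSet (sketch)
spec:
  hostNetwork: true                    # Envoy binds directly to the node's network interfaces
  dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working with host networking
  containers:
  - name: envoy
    image: docker.io/envoyproxy/envoy:v1.27.0   # illustrative version tag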
Disabling Features
You can run Contour with certain features disabled by passing the --disable-feature flag to the contour serve command.
The flag is used to disable the informer for a custom resource, effectively making the corresponding CRD optional in the cluster.
You can provide the flag multiple times.
For example, to disable the ExtensionService CRD, use the flag as follows: --disable-feature=extensionservices.
See the configuration section entry for all options.
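In the contour Deployment this shows up in the container arguments, along the lines of the following fragment (other arguments omitted):

# contour container args (fragment)
args:
- serve
- --disable-feature=extensionservices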
Upgrading Contour/Envoy
At times you may need to upgrade Contour, the version of Envoy, or both.
The included shutdown-manager can assist by watching Envoy for open connections while draining and signaling back to Kubernetes when it is safe to delete Envoy pods during this process.
See the redeploy Envoy docs for more information about how to avoid dropping active connections to Envoy, and the upgrade guides for steps to roll out a new version of Contour.
Running Multiple Instances of Contour
It’s possible to run multiple instances of Contour within a single Kubernetes cluster.
This can be useful for separating external vs. internal ingress, for having separate ingress controllers for different ingress classes, and more.
Each Contour instance can also be configured via the --watch-namespaces flag to handle its own namespaces. This allows the Kubernetes RBAC objects to be restricted further.
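For example, an instance limited to two namespaces could be started with arguments along these lines (namespace names are illustrative):

# contour container args (fragment)
args:
- serve
- --watch-namespaces=team-a,team-b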
The recommended way to deploy multiple Contour instances is to put each instance in its own namespace. This avoids most naming conflicts that would otherwise occur, and provides better logical separation between the instances. However, it is also possible to deploy multiple instances in a single namespace if needed; this approach requires more modifications to the example manifests to function properly. Each approach is described in detail below, using the examples/contour directory’s manifests for reference.
In Separate Namespaces (recommended)
In general, this approach requires updating the namespace of all resources, as well as giving unique names to cluster-scoped resources to avoid conflicts.
- 00-common.yaml:
  - update the name of the Namespace
  - update the namespace of both ServiceAccounts
- 01-contour-config.yaml:
  - update the namespace of the ConfigMap
  - if you have any namespaced references within the ConfigMap contents (e.g. fallback-certificate, envoy-client-certificate), ensure those point to the correct namespace as well.
- 01-crds.yaml will be shared between the two instances; no changes are needed.
- 02-job-certgen.yaml:
  - update the namespace of all resources
  - update the namespace of the ServiceAccount subject within the RoleBinding
- 02-role-contour.yaml:
  - update the name of the ClusterRole to be unique
  - update the namespace of the Role
- 02-rbac.yaml:
  - update the name of the ClusterRoleBinding to be unique
  - update the namespace of the RoleBinding
  - update the namespaces of the ServiceAccount subject within both resources
  - update the name of the ClusterRole within the ClusterRoleBinding’s roleRef to match the unique name used in 02-role-contour.yaml
- 02-service-contour.yaml:
  - update the namespace of the Service
- 02-service-envoy.yaml:
  - update the namespace of the Service
- 03-contour.yaml:
  - update the namespace of the Deployment
  - add an argument to the container, --ingress-class-name=<unique ingress class>, so this instance only processes Ingresses/HTTPProxies with the given ingress class.
- 03-envoy.yaml:
  - update the namespace of the DaemonSet
  - remove the two hostPort definitions from the container (otherwise, these would conflict between the two instances)
In The Same Namespace
This approach requires giving unique names to all resources to avoid conflicts, and updating all resource references to use the correct names.
- 00-common.yaml:
  - update the names of both ServiceAccounts to be unique
- 01-contour-config.yaml:
  - update the name of the ConfigMap to be unique
- 01-crds.yaml will be shared between the two instances; no changes are needed.
- 02-job-certgen.yaml:
  - update the names of all resources to be unique
  - update the name of the Role within the RoleBinding’s roleRef to match the unique name used for the Role
  - update the name of the ServiceAccount within the RoleBinding’s subjects to match the unique name used for the ServiceAccount
  - update the serviceAccountName of the Job
  - add an argument to the container, --secrets-name-suffix=<unique suffix>, so the generated TLS secrets have unique names
  - update the spec.template.metadata.labels on the Job to be unique
- 02-role-contour.yaml:
  - update the names of the ClusterRole and Role to be unique
- 02-rbac.yaml:
  - update the names of the ClusterRoleBinding and RoleBinding to be unique
  - update the roleRefs within both resources to reference the unique Role and ClusterRole names used in 02-role-contour.yaml
  - update the subjects within both resources to reference the unique ServiceAccount name used in 00-common.yaml
- 02-service-contour.yaml:
  - update the name of the Service to be unique
  - update the selector to be unique (this must match the labels used in 03-contour.yaml, below)
- 02-service-envoy.yaml:
  - update the name of the Service to be unique
  - update the selector to be unique (this must match the labels used in 03-envoy.yaml, below)
- 03-contour.yaml:
  - update the name of the Deployment to be unique
  - update the metadata.labels, the spec.selector.matchLabels, the spec.template.metadata.labels, and the spec.template.spec.affinity.podAntiAffinity labels to match the labels used in 02-service-contour.yaml
  - update the serviceAccountName to match the unique name used in 00-common.yaml
  - update the contourcert volume to reference the unique Secret name generated from 02-job-certgen.yaml (e.g. contourcert<unique-suffix>)
  - update the contour-config volume to reference the unique ConfigMap name used in 01-contour-config.yaml
  - add an argument to the container, --leader-election-resource-name=<unique lease name>, so this Contour instance uses a separate leader election Lease
  - add an argument to the container, --envoy-service-name=<unique envoy service name>, referencing the unique name used in 02-service-envoy.yaml
  - add an argument to the container, --ingress-class-name=<unique ingress class>, so this instance only processes Ingresses/HTTPProxies with the given ingress class.
- 03-envoy.yaml:
  - update the name of the DaemonSet to be unique
  - update the metadata.labels, the spec.selector.matchLabels, and the spec.template.metadata.labels to match the unique labels used in 02-service-envoy.yaml
  - update the --xds-address argument to the initContainer to use the unique name of the contour Service from 02-service-contour.yaml
  - update the serviceAccountName to match the unique name used in 00-common.yaml
  - update the envoycert volume to reference the unique Secret name generated from 02-job-certgen.yaml (e.g. envoycert<unique-suffix>)
  - remove the two hostPort definitions from the container (otherwise, these would conflict between the two instances)
Using the Gateway provisioner
The Contour Gateway provisioner also supports deploying multiple instances of Contour, either in the same namespace or different namespaces.
See Getting Started with the Gateway provisioner for more information.
To deploy multiple Contour instances, you create multiple Gateways, either in the same namespace or in different namespaces.
Note that although the provisioning request itself is made via a Gateway API resource (Gateway), this method of installation still allows you to use any of the supported APIs for defining virtual hosts and routes: Ingress, HTTPProxy, or Gateway API’s HTTPRoute and TLSRoute.
If you are using Ingress or HTTPProxy, you will likely want to assign each Contour instance a different ingress class, so they each handle different subsets of Ingress/HTTPProxy resources.
To do this, create two separate GatewayClasses, each with a different ContourDeployment parametersRef.
The ContourDeployment specs should look like:
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-1
spec:
  runtimeSettings:
    ingress:
      classNames:
      - ingress-class-1
---
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-2
spec:
  runtimeSettings:
    ingress:
      classNames:
      - ingress-class-2
Then create each Gateway with the appropriate spec.gatewayClassName.
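For illustration, a GatewayClass referencing one of the ContourDeployments above and a Gateway using it might look like the following sketch. The resource names and listener details are assumptions, and the Gateway API version in your cluster may differ; the controllerName shown is the one used by the Contour Gateway provisioner.

kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: contour-class-1
spec:
  controllerName: projectcontour.io/gateway-controller
  parametersRef:
    kind: ContourDeployment
    group: projectcontour.io
    name: ingress-class-1
    namespace: projectcontour
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: contour-1
  namespace: projectcontour
spec:
  gatewayClassName: contour-class-1
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All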
Running Contour in tandem with another ingress controller
If you’re running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you can specify the annotation kubernetes.io/ingress.class: "contour" on all ingresses that you would like Contour to claim.
You can customize the class name with the --ingress-class-name flag at runtime. (A comma-separated list of class names is allowed.)
If the kubernetes.io/ingress.class annotation is present with a value other than "contour", Contour will ignore that ingress.
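For example (a sketch; the Ingress name, host, and backend service are illustrative), an Ingress claimed by Contour carries the annotation like so:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: "contour"
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80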
Uninstall Contour
To remove Contour or the Contour Gateway Provisioner from your cluster, delete the namespace:
$ kubectl delete ns projectcontour
Note: Your namespace may differ from above.