Collecting Metrics with Prometheus
Contour and Envoy expose metrics that can be scraped with Prometheus. By default, the deployment YAMLs include the annotations needed to scrape them, and they should work out of the box with most configurations.
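To confirm the annotations are present on a running cluster, they can be inspected directly. This is a sketch: it assumes the example deployment's projectcontour namespace and app=envoy pod label.
# Show the scrape annotations on the first Envoy pod
$ kubectl -n projectcontour get pods -l app=envoy -o jsonpath='{.items[0].metadata.annotations}'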
Envoy Metrics
Envoy typically exposes metrics through an endpoint on its admin interface. To avoid exposing the entire admin interface to Prometheus (and other workloads in the cluster), Contour configures a static listener that sends traffic to the stats endpoint and nowhere else.
Envoy exposes a Prometheus-compatible /stats/prometheus endpoint for metrics on port 8002.
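The endpoint can be spot-checked through a port-forward. This is a sketch: it assumes the example deployment's projectcontour namespace and app=envoy pod label.
# Port-forward the Envoy stats listener (in one terminal)
$ kubectl -n projectcontour port-forward $(kubectl -n projectcontour get pods -l app=envoy -o jsonpath='{.items[0].metadata.name}') 8002:8002
# Fetch a few metrics (in a second terminal)
$ curl -s http://localhost:8002/stats/prometheus | head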
Contour Metrics
Contour exposes a Prometheus-compatible /metrics endpoint that defaults to listening on port 8000. This can be configured with the --http-address and --http-port flags of the serve command.
Note: when installing Contour, the Service deployment manifest must be updated to expose the same port as the one set by these flags.
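As a quick sanity check, the default endpoint can be fetched through a port-forward. This is a sketch: it assumes the example deployment's projectcontour namespace and app=contour pod label.
# Port-forward Contour's metrics listener (default port 8000) in one terminal
$ kubectl -n projectcontour port-forward $(kubectl -n projectcontour get pods -l app=contour -o jsonpath='{.items[0].metadata.name}') 8000:8000
# Fetch the metrics in a second terminal
$ curl -s http://localhost:8000/metrics | grep ^contour_build_info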
The metrics endpoint exposes the following metrics:
Name | Type | Labels | Description |
---|---|---|---|
contour_build_info | GAUGE | branch, revision, version | Build information for Contour. Labels include the branch and git SHA that Contour was built from, and the Contour version. |
contour_cachehandler_onupdate_duration_seconds | SUMMARY | | Histogram for the runtime of xDS cache regeneration. |
contour_dag_cache_object | GAUGE | kind | Total number of items that are currently in the DAG cache. |
contour_dagrebuild_seconds | SUMMARY | | Duration in seconds of DAG rebuilds. |
contour_dagrebuild_timestamp | GAUGE | | Timestamp of the last DAG rebuild. |
contour_dagrebuild_total | COUNTER | | Total number of times the DAG has been rebuilt since startup. |
contour_eventhandler_operation_total | COUNTER | kind, op | Total number of Kubernetes object changes Contour has received by operation and object kind. |
contour_httpproxy | GAUGE | namespace | Total number of HTTPProxies that exist regardless of status. |
contour_httpproxy_invalid | GAUGE | namespace, vhost | Total number of invalid HTTPProxies. |
contour_httpproxy_orphaned | GAUGE | namespace | Total number of orphaned HTTPProxies which have no root delegating to them. |
contour_httpproxy_root | GAUGE | namespace | Total number of root HTTPProxies. Note there will only be a single root HTTPProxy per vhost. |
contour_httpproxy_valid | GAUGE | namespace, vhost | Total number of valid HTTPProxies. |
contour_status_update_conflict_total | COUNTER | kind | Number of status update conflicts encountered by object kind. |
contour_status_update_duration_seconds | SUMMARY | error, kind | How long a status update takes to finish. |
contour_status_update_failed_total | COUNTER | kind | Number of status updates that failed by object kind. |
contour_status_update_noop_total | COUNTER | kind | Number of status updates that are no-ops by object kind. This is a subset of successful status updates. |
contour_status_update_success_total | COUNTER | kind | Number of status updates that succeeded by object kind. |
contour_status_update_total | COUNTER | kind | Total number of status updates by object kind. |
Sample Deployment
The /examples directory contains example deployment files that can be used to spin up an example environment. All of the deployments there are configured with annotations for Prometheus to scrape by default, so any of them can be used with the quick start instructions that follow.
Deploy Prometheus
A sample deployment of Prometheus and Alertmanager is provided that uses temporary storage. This deployment can be used for testing and development, but might not be suitable for all environments.
Stateful Deployment
A stateful deployment of Prometheus should use persistent storage with Persistent Volumes and Persistent Volume Claims to maintain a correlation between a data volume and the Prometheus Pod. Persistent volumes can be static or dynamic, depending on the backend storage implementation used in the environment in which the cluster is deployed. For more information, see the Kubernetes documentation on types of persistent volumes.
Quick start
# Deploy
$ kubectl apply -f examples/prometheus
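Before port-forwarding, it can be worth checking that the monitoring pods have come up; the namespace matches the one used in the port-forward commands below.
# Check that the Prometheus and Alertmanager pods are Running
$ kubectl -n projectcontour-monitoring get pods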
Access the Prometheus web UI
$ kubectl -n projectcontour-monitoring port-forward $(kubectl -n projectcontour-monitoring get pods -l app=prometheus -l component=server -o jsonpath='{.items[0].metadata.name}') 9090:9090
then go to http://localhost:9090 in your browser.
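With the port-forward in place, the metrics listed above can also be queried through the Prometheus HTTP API instead of the web UI; for example, a sketch using the contour_httpproxy_invalid gauge:
# Query invalid HTTPProxies per namespace via the Prometheus API
$ curl -sG http://localhost:9090/api/v1/query \
    --data-urlencode 'query=sum(contour_httpproxy_invalid) by (namespace)'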
Access the Alertmanager web UI
$ kubectl -n projectcontour-monitoring port-forward $(kubectl -n projectcontour-monitoring get pods -l app=prometheus -l component=alertmanager -o jsonpath='{.items[0].metadata.name}') 9093:9093
then go to http://localhost:9093 in your browser.
Deploy Grafana
A sample deployment of Grafana is provided that uses temporary storage.
Quick start
# Deploy
$ kubectl apply -f examples/grafana/
# Create secret with grafana credentials
$ kubectl create secret generic grafana -n projectcontour-monitoring \
--from-literal=grafana-admin-password=admin \
--from-literal=grafana-admin-user=admin
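After creating the secret, it can help to confirm the Grafana rollout has completed before port-forwarding; this sketch assumes the example manifests create a Deployment named grafana.
# Wait for the Grafana Deployment to finish rolling out
$ kubectl -n projectcontour-monitoring rollout status deployment/grafana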
Access the Grafana UI
$ kubectl port-forward $(kubectl get pods -l app=grafana -n projectcontour-monitoring -o jsonpath='{.items[0].metadata.name}') 3000 -n projectcontour-monitoring
then go to http://localhost:3000 in your browser.
The username and password are the values you set when creating the Grafana secret in the previous step.