Configuring ingress to gRPC services with Contour
Example gRPC Service
The examples below use the gRPC server from Contour's end-to-end tests. The server implements a yages.Echo service with two methods. It also implements the gRPC health checking service (see here for more details) and is bundled with the gRPC health probe.
An example base deployment and service for a gRPC server using plaintext HTTP/2 are provided below:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: grpc-echo
  name: grpc-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: grpc-echo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: grpc-echo
    spec:
      containers:
      - name: grpc-echo
        image: ghcr.io/projectcontour/yages:v0.1.0
        ports:
        - name: grpc
          containerPort: 9000
        readinessProbe:
          exec:
            command: ["/grpc-health-probe", "-addr=localhost:9000"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: grpc-echo
  name: grpc-echo
spec:
  selector:
    app.kubernetes.io/name: grpc-echo
  ports:
  - port: 9000
    protocol: TCP
    targetPort: grpc
```
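To try it out, a minimal sketch (assuming the manifests above are saved as grpc-echo.yaml, a filename chosen here for illustration) applies them and checks that the replicas pass their gRPC health probe:

```bash
# Apply the Deployment and Service, then check that both replicas become Ready.
kubectl apply -f grpc-echo.yaml
kubectl get pods -l app.kubernetes.io/name=grpc-echo
```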
Configuring proxying to a gRPC service with HTTPProxy is as simple as specifying the protocol Envoy uses with the upstream application via the protocol field on a route's service. For example, in the resource below, for proxying plaintext gRPC to the yages sample app, the protocol is set to h2c to denote HTTP/2 over cleartext. For TLS secured gRPC, the protocol used would be h2. Route path prefix matching can be used to match a specific gRPC method if required.
```yaml
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: my-grpc-service
spec:
  virtualhost:
    fqdn: my-grpc-service.foo.com
  routes:
  - conditions:
    - prefix: /yages.Echo/Ping # Matches a specific gRPC method.
    services:
    - name: grpc-echo
      port: 9000
      protocol: h2c
  - conditions:
    - prefix: / # Matches everything else.
    services:
    - name: grpc-echo
      port: 9000
      protocol: h2c
```
Using the sample deployment above along with this HTTPProxy example, you can test calling this plaintext gRPC server with the following grpcurl command:
```bash
grpcurl -plaintext -authority=my-grpc-service.foo.com <load balancer IP and port if needed> yages.Echo/Ping
```
If implementing a streaming RPC, you will likely need to adjust per-route timeouts to ensure streams are kept alive for the durations your application needs. Relevant fields to adjust include the HTTPProxy spec.routes[].timeoutPolicy.response field, which defaults to 15s and should be increased, as well as the global timeout policy configuration in the Contour configuration file.
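For example, a minimal sketch of raising the response timeout on the catch-all route from the HTTPProxy above; the 300s value is illustrative, not a recommendation:

```yaml
  routes:
  - conditions:
    - prefix: /
    timeoutPolicy:
      response: 300s # Illustrative value; "infinity" disables the response timeout entirely.
    services:
    - name: grpc-echo
      port: 9000
      protocol: h2c
```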
Ingress v1 Configuration
To configure routing for gRPC requests with Ingress v1, you must add an annotation to the upstream Service resource, as shown below.
```yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: grpc-echo
  annotations:
    projectcontour.io/upstream-protocol.h2c: "9000"
  name: grpc-echo
spec:
  selector:
    app.kubernetes.io/name: grpc-echo
  ports:
  - port: 9000
    protocol: TCP
    targetPort: grpc
```
The annotation key must follow the form projectcontour.io/upstream-protocol.{protocol}, where {protocol} is h2c for plaintext gRPC or h2 for TLS encrypted gRPC to the upstream application. The annotation value contains a comma-separated list of port names and/or numbers that must match those defined in the Service definition.
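For instance, a hypothetical Service exposing a second gRPC port could mark both as h2c in a single annotation value (the grpc-alt port name here is illustrative):

```yaml
metadata:
  annotations:
    # Hypothetical example: both port 9000 and the port named "grpc-alt" are h2c upstreams.
    projectcontour.io/upstream-protocol.h2c: "9000,grpc-alt"
```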
Using the Service above with the Ingress resource below should achieve the same configuration as with an HTTPProxy.
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-grpc-service
spec:
  rules:
  - host: my-grpc-service.foo.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: grpc-echo
            port:
              number: 9000
        pathType: Prefix
```
Gateway API Configuration
At the moment, configuring gRPC routes with Gateway API resources is achieved by the same method as with Ingress v1: an annotation selecting the protocol and port on a Service referenced by an HTTPRoute. Gateway API does include a dedicated resource, GRPCRoute, for routing gRPC requests; this may be supported in future versions of Contour.
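For example, a minimal HTTPRoute sketch pairing with the annotated Service above; the Gateway name contour is an assumption here and should match your actual Gateway:

```yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-grpc-service
spec:
  parentRefs:
  - name: contour # Assumed Gateway name; adjust to your deployment.
  hostnames:
  - my-grpc-service.foo.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: grpc-echo # Service carrying the upstream-protocol annotation.
      port: 9000
```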
gRPC-Web
Contour configures Envoy to automatically convert gRPC-Web HTTP/1 requests to gRPC over HTTP/2 RPC calls to an upstream service. This is a convenience addition to make usage of gRPC-Web client libraries and the like easier.
Note that you must still configure the upstream protocol as above to have gRPC-Web requests converted to gRPC for the upstream app. If your upstream application does not in fact support gRPC, you may get a protocol error. In that case, please see this issue.
For example, with the example deployment and routing configuration provided above, an example HTTP/1.1 request and response via curl looks like:
```bash
curl \
  -s -v \
  <load balancer IP and port if needed>/yages.Echo/Ping \
  -XPOST \
  -H 'Host: my-grpc-service.foo.com' \
  -H 'Content-Type: application/grpc-web-text' \
  -H 'Accept: application/grpc-web-text' \
  -d'AAAAAAA='
```
The curl command sends and receives gRPC messages as base64-encoded text over HTTP/1.1. Piping the output through base64 -d | od -c, we can see the raw gRPC response:
```
0000000  \0  \0  \0  \0 006  \n 004   p   o   n   g 200  \0  \0  \0 036
0000020   g   r   p   c   -   s   t   a   t   u   s   :   0  \r  \n   g
0000040   r   p   c   -   m   e   s   s   a   g   e   :  \r  \n
0000056
```
The response message contains the text pong, and the trailers grpc-status: 0 and an empty grpc-message indicate the RPC succeeded.