Configuring ingress to gRPC services with Contour
Example gRPC Service
The examples below use the gRPC server from Contour's end-to-end tests. The server implements a service, yages.Echo, with two methods, Ping and Reverse. It also implements the standard gRPC health checking service and is bundled with the gRPC health probe.
An example base Deployment and Service for a gRPC server using plaintext HTTP/2 are provided below:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: grpc-echo
  name: grpc-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: grpc-echo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: grpc-echo
    spec:
      containers:
      - name: grpc-echo
        image: ghcr.io/projectcontour/yages:v0.1.0
        ports:
        - name: grpc
          containerPort: 9000
        readinessProbe:
          exec:
            command: ["/grpc-health-probe", "-addr=localhost:9000"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: grpc-echo
  name: grpc-echo
spec:
  selector:
    app.kubernetes.io/name: grpc-echo
  ports:
  - port: 9000
    protocol: TCP
    targetPort: grpc
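To verify the deployment before adding any routing, here is a minimal sketch, assuming the manifests are saved as grpc-echo.yaml and grpcurl is installed locally (both assumptions, not part of the manifests above), that port-forwards to the Service and calls the health checking service directly:

kubectl apply -f grpc-echo.yaml
kubectl port-forward service/grpc-echo 9000:9000 &

# The bundled gRPC health checking service should report SERVING.
grpcurl -plaintext localhost:9000 grpc.health.v1.Health/Check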
HTTPProxy Configuration
Configuring proxying to a gRPC service with HTTPProxy is as simple as specifying the protocol Envoy uses with the upstream application via the spec.routes[].services[].protocol field. For example, in the resource below, which proxies plaintext gRPC to the yages sample app, the protocol is set to h2c to denote HTTP/2 over cleartext. For TLS-secured gRPC, the protocol used would be h2. Route path prefix matching can be used to match a specific gRPC method if required.
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: my-grpc-service
spec:
  virtualhost:
    fqdn: my-grpc-service.foo.com
  routes:
  - conditions:
    - prefix: /yages.Echo/Ping # Matches a specific gRPC method.
    services:
    - name: grpc-echo
      port: 9000
      protocol: h2c
  - conditions:
    - prefix: / # Matches everything else.
    services:
    - name: grpc-echo
      port: 9000
      protocol: h2c
Using the sample deployment above along with this HTTPProxy example, you can test calling this plaintext gRPC server with the following grpcurl command:
grpcurl -plaintext -authority=my-grpc-service.foo.com <load balancer IP and port if needed> yages.Echo/Ping
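If routing is configured correctly, the server replies with a pong message, which grpcurl renders as JSON similar to the following (exact formatting depends on your grpcurl version):

{
  "text": "pong"
}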
If implementing a streaming RPC, you will likely need to adjust per-route timeouts to keep streams alive for the required durations. Relevant fields include the HTTPProxy spec.routes[].timeoutPolicy.response field, which defaults to 15s and should be increased, as well as the global timeout settings timeouts.request-timeout and timeouts.max-connection-duration in the Contour configuration file.
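As a sketch, with illustrative rather than prescriptive values, a route serving a long-lived stream might disable the response timeout entirely and bound the stream with an idle timeout instead:

routes:
- conditions:
  - prefix: /yages.Echo/
  services:
  - name: grpc-echo
    port: 9000
    protocol: h2c
  timeoutPolicy:
    response: infinity  # Disable the 15s default for long-lived streams.
    idle: 600s          # Close the stream after 10 minutes of inactivity.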
Ingress v1 Configuration
To configure routing for gRPC requests with Ingress v1, you must add an annotation to the upstream Service resource, as shown below.
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: grpc-echo
  annotations:
    projectcontour.io/upstream-protocol.h2c: "9000"
  name: grpc-echo
spec:
  selector:
    app.kubernetes.io/name: grpc-echo
  ports:
  - port: 9000
    protocol: TCP
    targetPort: grpc
The annotation key must follow the form projectcontour.io/upstream-protocol.{protocol}, where {protocol} is h2c for plaintext gRPC or h2 for TLS-encrypted gRPC to the upstream application. The annotation value is a comma-separated list of port names and/or numbers that must match ports defined in the Service.
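For example, a hypothetical Service exposing the same gRPC port under both its number and its name (grpc, as in the Service above) could list both in the annotation value:

annotations:
  projectcontour.io/upstream-protocol.h2c: "9000,grpc"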
Using the Service above with the Ingress resource below should achieve the same configuration as with an HTTPProxy.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-grpc-service
spec:
  rules:
  - host: my-grpc-service.foo.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: grpc-echo
            port:
              number: 9000
        pathType: Prefix
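As with the HTTPProxy example, you can verify the Ingress route using the same grpcurl command:

grpcurl -plaintext -authority=my-grpc-service.foo.com <load balancer IP and port if needed> yages.Echo/Ping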
Gateway API Configuration
Gateway API supports a dedicated resource, GRPCRoute, for routing gRPC requests. A GRPCRoute specifies parentRefs, hostnames, and routing rules with backendRefs. In the example below, routing is performed via method matching rules for the declared services and their methods.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
spec:
  parentRefs:
  - namespace: projectcontour
    name: contour
  hostnames:
  - my-grpc-service.foo.com
  rules:
  - matches:
    - method:
        service: yages.Echo
        method: Ping
    - method:
        service: grpc.reflection.v1alpha.ServerReflection
        method: ServerReflectionInfo
    backendRefs:
    - name: grpc-echo
      port: 9000
Using the sample deployment above along with this GRPCRoute example, you can test calling this plaintext gRPC server with the same grpcurl command:
grpcurl -plaintext -authority=my-grpc-service.foo.com <load balancer IP and port if needed> yages.Echo/Ping
Note that the second match, for the ServerReflection service, is required by the grpcurl command.
When using GRPCRoute, users should annotate their Service as in the Ingress configuration above to indicate the protocol to use when connecting to the backend Service: h2c for plaintext HTTP and h2 for TLS-encrypted HTTPS. If the annotation is not specified, Contour infers the protocol from the Gateway Listener protocol: h2c for HTTP and h2 for HTTPS.
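For example, a minimal sketch reusing the annotation from the Ingress section on the backend Service:

metadata:
  annotations:
    projectcontour.io/upstream-protocol.h2c: "9000"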
gRPC-Web
Contour configures Envoy to automatically convert gRPC-Web HTTP/1 requests into gRPC over HTTP/2 calls to an upstream service. This is a convenience addition that makes it easier to use gRPC-Web application client libraries and the like.
Note that you must still configure the upstream protocol (as above) for gRPC-Web requests to be converted to gRPC for the upstream app. If your upstream application does not in fact support gRPC, you may get a protocol error; in that case, please see this issue.
For example, with the example deployment and routing configuration provided above, an example HTTP/1.1 request and response via curl looks like:
curl \
  -s -v \
  <load balancer IP and port if needed>/yages.Echo/Ping \
  -XPOST \
  -H 'Host: my-grpc-service.foo.com' \
  -H 'Content-Type: application/grpc-web-text' \
  -H 'Accept: application/grpc-web-text' \
  -d'AAAAAAA='
This curl command sends and receives gRPC messages as base64-encoded text over HTTP/1.1. Piping the output to base64 -d | od -c, we can see the raw gRPC response:
0000000 \0 \0 \0 \0 006 \n 004 p o n g 200 \0 \0 \0 036
0000020 g r p c - s t a t u s : 0 \r \n g
0000040 r p c - m e s s a g e : \r \n
0000056
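The body contains the serialized pong message followed by the gRPC trailers, with grpc-status: 0 indicating success. Under the same assumptions as the curl example above, the full pipeline is:

curl -s \
  <load balancer IP and port if needed>/yages.Echo/Ping \
  -XPOST \
  -H 'Host: my-grpc-service.foo.com' \
  -H 'Content-Type: application/grpc-web-text' \
  -H 'Accept: application/grpc-web-text' \
  -d'AAAAAAA=' | base64 -d | od -c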