kube-oidc-proxy: A proxy to consistently authenticate to managed Kubernetes clusters, on multi-cloud, using OIDC

At Jetstack, we see many customers moving to managed Kubernetes services across multiple clouds to run their workloads. Whilst having the Kubernetes control plane managed for you removes much of the operational burden, this convenience comes at a cost: there is less opportunity for customisation of a managed control plane. Furthermore, across multiple clouds there is often a lack of consistency in what is exposed. One such inconsistency is authentication. Cloud providers typically offer deep integrations with their own authentication systems, but interoperability between them can often be a pain point.

Jetstack is excited to announce kube-oidc-proxy, a new open-source project that brings back consistency, and the lost functionality of authenticating via OIDC to the Kubernetes API server on managed services, across clouds.

The kube-oidc-proxy is a reverse proxy that sits in front of the Kubernetes API server. It receives requests from users, authenticates them using the OIDC protocol, and forwards them to the API server, returning the result. This puts control of user identity back into the hands of cluster administrators, rather than relying on the bespoke identity systems of cloud vendors, and enables consistent authorization of these identities in the form of RBAC.

What is OIDC anyway?

OIDC, or OpenID Connect, is a protocol that extends the existing OAuth 2.0 protocol. OAuth 2.0 is a popular method for authorizing applications to access a resource server, using some identity provider such as a social media website or other account-holding platform. You have probably come across this before in the form of “Sign in with Google”, for example. This grants the application, with the user’s consent, authorization to access some of the user’s protected resources. OpenID Connect extends this by building user identity into the resulting tokens, which the resource server verifies and consumes. This enables authentication for clients accessing applications, using an identity issued by a third-party provider.

The OIDC flow involves a user requesting a JSON Web Token (JWT) from the identity provider. A JWT is made of three base64url-encoded segments (header, payload, and signature), delimited by dots. The payload holds an appropriately scoped list of attributes of the user, such as an email address or name, while the header contains information about the token itself, such as the signature algorithm. The signature is produced by the identity provider signing the other two segments. A typical OIDC JSON Web Token looks like the following once decoded:

  Header:

  {
    "alg": "RS256"
  }

  Payload:

  {
    "iss": "https://my-identity.provider.io",
    "sub": "12345",
    "aud": "resource-server.123",
    "exp": 1555586839,
    "iat": 1555500439,
    "email": "joshua.vanleeuwen@jetstack.io",
    "name": "Joshua Van Leeuwen"
  }

Along with a number of user attributes, the token contains other useful claims: an expiry (exp), meaning tokens can be short-lived and will eventually become invalid, and one or more audiences (aud), identifying which resource server the token has been issued for. Audiences prevent a rogue resource server from replaying the same token against some other resource server, maliciously impersonating the authenticated user.
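The shape of the token and the basic claim checks can be sketched in a few lines. This is an illustrative, unsigned token built from the claims above; a real token carries a signature that must be verified against the provider’s keys:

```python
import base64
import json
import time

def b64url_decode(seg: str) -> bytes:
    # JWT segments use unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def b64url_encode(obj: dict) -> str:
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Illustrative token matching the example claims; the signature segment
# is a placeholder here, so no cryptographic verification is shown
header = {"alg": "RS256"}
payload = {
    "iss": "https://my-identity.provider.io",
    "sub": "12345",
    "aud": "resource-server.123",
    "exp": 1555586839,
    "iat": 1555500439,
    "email": "joshua.vanleeuwen@jetstack.io",
}
token = ".".join([b64url_encode(header), b64url_encode(payload), "sig"])

# A resource server splits on "." and decodes the first two segments
h, p, _sig = token.split(".")
claims = json.loads(b64url_decode(p))

# Basic claim checks: the token must be issued for us and not expired
audience_ok = claims["aud"] == "resource-server.123"
expired = claims["exp"] < time.time()
```

The `exp` timestamp in the example is in the past, so this particular token would be rejected as expired.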

OIDC in Kubernetes

The Kubernetes API server can be configured to accept OIDC tokens as the method of authenticating users.

First, the Client ID is set to the value that should appear as an audience member in the token. When the API server verifies a token, it ensures that this value is present in the audiences claim, otherwise the request is rejected. The Username claim needs to be set so that claim can be used as the user identity when the request is passed through to Kubernetes RBAC; although organisation-specific, this would typically be the email or name of the user.

Finally, the issuer URL and issuer certificate authority file are required. The URL should be set to the base domain of the issuer. On boot, the API server performs discovery against the /.well-known/openid-configuration endpoint on the issuer base URL to locate the issuer’s public signing keys. These signing keys are then used to verify the signature on incoming tokens.
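For reference, on a self-hosted cluster these options map onto kube-apiserver flags; the values below are illustrative, matching the example token earlier:

```shell
kube-apiserver \
  --oidc-issuer-url=https://my-identity.provider.io \
  --oidc-client-id=resource-server.123 \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups \
  --oidc-ca-file=/etc/kubernetes/oidc-ca.pem
```

It is exactly this set of flags that is unavailable on managed offerings, which is the gap kube-oidc-proxy fills.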


How kube-oidc-proxy works

When using managed providers, direct configuration of the API server is unavailable to us, so there is no way to enable these OIDC options. Instead, we can set up a reverse proxy in front of the API server to do the authentication. The kube-oidc-proxy typically sits inside the cluster, and securely serves to the outside world. Once its OIDC discovery has completed successfully, the proxy is ready to take user requests.

Once a request is received, the proxy authenticates the token in the request’s header using the same internals as the Kubernetes API server. If authentication fails, the proxy responds with a 401 Unauthorized. Next, the server checks whether the request contains any impersonation headers sent by the client. Since the proxy itself uses impersonation to forward requests to the API server, any request that arrives already carrying impersonation headers is rejected with a 403 Forbidden response. The error message returned in the body of the forbidden response is displayed to the client when using kubectl, giving a clear indication of what went wrong:
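This rejection step can be sketched as follows. It is a simplified model of the real Go implementation; the header names are the standard Kubernetes impersonation headers (Impersonate-User, Impersonate-Group, Impersonate-Extra-&lt;key&gt;):

```python
# All Kubernetes impersonation headers share this prefix
IMPERSONATE_PREFIX = "impersonate-"

def reject_client_impersonation(headers):
    """Return a (status, message) error if the client sent any
    impersonation headers; the proxy reserves impersonation for itself."""
    for name in headers:
        if name.lower().startswith(IMPERSONATE_PREFIX):
            return (403, "Impersonation requests are disabled "
                         "when using kube-oidc-proxy")
    return None  # no impersonation headers: continue processing
```

A plain authenticated request passes through untouched; anything carrying, say, `Impersonate-User` is refused before it reaches the API server.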

Error from server (Forbidden): Impersonation requests are disabled when using kube-oidc-proxy

Once the request has been authenticated by verifying the token’s signature against the provider’s public keys, the proxy inserts impersonation headers built from the token. The impersonated user is taken from the configured username claim, and any groups listed in the configured groups claim are assigned as impersonated groups.
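Mapping the verified claims onto impersonation headers can be sketched like this; the claim names `email` and `groups` are illustrative configuration, not fixed defaults:

```python
def impersonation_headers(claims,
                          username_claim="email",
                          groups_claim="groups"):
    """Build the Kubernetes impersonation headers the proxy attaches
    to the outgoing request, from already-verified OIDC claims."""
    headers = [("Impersonate-User", claims[username_claim])]
    for group in claims.get(groups_claim, []):
        headers.append(("Impersonate-Group", group))
    return headers
```

For a token carrying `"email": "jane@example.com"` and `"groups": ["dev", "ops"]`, the forwarded request would gain one Impersonate-User header and two Impersonate-Group headers.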

The request, now a clone of the original plus the impersonation headers, is then sent on to the API server. The request’s authentication is also replaced with the kube-oidc-proxy’s own credentials for the API server, typically a bearer token linked to a Kubernetes Service Account. This Service Account needs RBAC permissions to impersonate any user or group, cluster-wide.
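That permission can be granted with a ClusterRole along these lines (the name here is illustrative; the `impersonate` verb on users and groups is what matters), bound to the proxy’s Service Account via a ClusterRoleBinding:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-oidc-proxy-impersonate  # illustrative name
rules:
- apiGroups: [""]
  resources: ["users", "groups"]
  verbs: ["impersonate"]
```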

Since the OIDC token is verified offline once the provider’s public keys have been discovered, processing requests is fast. We also have the benefit of the proxy being stateless, meaning it scales well in a Kubernetes cluster.

kube-oidc-proxy diagram

Deploying kube-oidc-proxy

We’ve written a multi-cluster tutorial that explains how to deploy and configure kube-oidc-proxy into multiple clouds.

Deploying the kube-oidc-proxy also requires other supporting tooling for a fully functional and featured deployment. We use cert-manager, another Jetstack open-source project, which enables automatic provisioning and renewal of TLS certificates in Kubernetes. It will be used to provide certificates for kube-oidc-proxy as well as other dependencies in the cluster.

Dex is a server that federates access to identity providers, exposing them as an OIDC issuer, and is deployable to Kubernetes. It supports multiple ‘connectors’ for upstream providers such as GitHub, LinkedIn or even simple username and password authentication. Alongside Dex is gangway, a Heptio project: a web server that facilitates the OAuth browser flow via Dex and provides a convenient kubeconfig to download once authenticated. When deploying into multi-cloud, the Dex deployment is shared between clusters; however, care should be taken to ensure that the tokens issued for different clusters do not use the same audience, so the validity of a token is scoped to a single cluster.
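One way to scope audiences, sketched with illustrative names, is to register one OAuth client per cluster in the shared Dex instance, since the client ID becomes the token’s audience:

```yaml
# Illustrative Dex staticClients fragment: one client, and therefore
# one token audience, per cluster, so a token issued for cluster-a is
# rejected by the kube-oidc-proxy on cluster-b.
staticClients:
- id: cluster-a
  name: Cluster A
  secret: <generated-secret>
  redirectURIs:
  - https://gangway.cluster-a.example.com/callback
- id: cluster-b
  name: Cluster B
  secret: <generated-secret>
  redirectURIs:
  - https://gangway.cluster-b.example.com/callback
```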

Finally, kube-oidc-proxy is deployed along with configuration to accept identity issued by Dex.


We think kube-oidc-proxy is a tool that many people will find useful, especially users of multi-cloud. It is currently in an experimental stage so we’d like to get feedback on what people think about the project (good or bad!). Try it out, and let us know what you think.

kube-oidc-proxy demo