Let's Encrypt, OAuth2, and Kubernetes dashboards

Banner photo: Let's Encrypt CC BY-NC 4.0

When you expose HTTP services from your Kubernetes cluster, you often need to consider access authorisation. A natural place to look is the ingress controller, which provides some basic support, for example username and password-based access control. If you want more flexibility and power, take a look at Bitly's OAuth2 Proxy. If you're using an NGINX Ingress Controller, its docs include an example of combining OAuth2 Proxy with the Kubernetes Dashboard. This post follows those docs at a slightly higher level, making some (hopefully reasonable) assumptions about the cluster. The gist is that we want to authorise a client visiting our protected resource based on a subrequest to a service responsible for OAuth2 authorisation. Our protected resource will also be the Kubernetes Dashboard, but nothing here is specific to the dashboard.

I'll assume that any HTTP services you're exposing are served over TLS, and I recommend using Let's Encrypt via Jetstack's cert-manager. This post shows the ingress annotations specific to Jetstack's cert-manager for automatically obtaining a TLS certificate, but only minor adjustments should be necessary to use lego + traefik or another method. Instead of posting deployment and ingress resources directly to the cluster with kubectl, I'll show how to use Helm charts.

Note that automated TLS provisioning using Let's Encrypt can break once authorisation is enforced on the ingress, because the ACME challenge requests can themselves end up behind the authorisation step, so we'll have to be careful about the order in which we apply ingress annotations. I've approached this in two ways: using the DNS challenge type to prove ownership of the domain to Let's Encrypt, or carefully deploying first and then updating with the additional ingress annotations.
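
If you go the DNS route, the Issuer needs to be configured for the DNS-01 challenge. As a rough sketch only, using the older certmanager.k8s.io API that matches the annotations later in this post, with a placeholder email address and a hypothetical Cloudflare credentials secret (field names vary between cert-manager releases, so check the docs for your version):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: kube-system
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Placeholder contact address for the ACME account
    email: admin@foo.bar.com
    privateKeySecretRef:
      # Secret where cert-manager stores the ACME account key
      name: letsencrypt-account-key
    dns01:
      providers:
      - name: cloudflare-dns
        cloudflare:
          email: admin@foo.bar.com
          apiKeySecretRef:
            # Hypothetical secret holding your DNS provider API key
            name: cloudflare-api-key
            key: api-key

Depending on your cert-manager version, the ingress may also need annotations telling ingress-shim to use the DNS-01 challenge; check the docs for the annotations your version expects.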

First, I'm going to assume you have a Kubernetes Dashboard that was deployed using Helm and doesn't currently have ingress enabled. We'll deploy oauth2-proxy alongside the Kubernetes Dashboard. You will end up with two ingresses:

  • /oauth2 pointing to the oauth2-proxy service
  • / pointing to your Kubernetes Dashboard service

The Kubernetes Dashboard ingress will also be annotated to tell NGINX to authorise users via the oauth2 endpoint. This piece is specific to the NGINX Ingress Controller; it's worth noting that oauth2-proxy can alternatively proxy traffic to the upstream service itself, as sketched below.
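
For completeness, that alternative amounts to giving oauth2-proxy the dashboard as an upstream (the commented-out upstreams setting in the values file below) and pointing the site's root ingress at the oauth2-proxy service rather than the dashboard. A minimal sketch of the relevant oauth2-proxy values, assuming the dashboard service is reachable as kubernetes-dashboard on port 8443:

config:
  configFile: |-
    # oauth2-proxy itself forwards authorised traffic to the dashboard
    upstreams = [
        "https://kubernetes-dashboard:8443/"
    ]

We won't use that mode here; the rest of this post sticks with the NGINX auth subrequest approach.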

Configure identity provider

Step one is to choose your identity provider and create an OAuth application. For this example I'll use GitHub: create an OAuth app at https://github.com/settings/applications/new.

As stated in the nginx-ingress docs:

  • Homepage URL is the FQDN in the Ingress rule, i.e. how users will access your service, e.g. https://foo.bar.com.
  • Authorization callback URL is the same FQDN plus /oauth2, e.g. https://foo.bar.com/oauth2.

Configure and deploy oauth2 proxy

To deploy the proxy we will use the stable Helm chart, oauth2-proxy. First, we create a values.yaml file:

# OAuth client configuration
config:  
  # OAuth client ID  
  clientID: "<REPLACE ME>"  
  # OAuth client secret  
  clientSecret: "<REPLACE ME>"  
  # Create a new secret with the following command
  # python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'
  cookieSecret: "<REPLACE ME>"
  # Custom configuration file; see https://github.com/bitly/oauth2_proxy/blob/master/contrib/oauth2_proxy.cfg.example
  configFile: |-  
    ## Pass OAuth Access token to upstream via "X-Forwarded-Access-Token"
    pass_access_token = true
    #upstreams = [  
    #    "https://kubernetes-dashboard:8443/"  
    #]  

extraArgs:  
  provider: "github"  
  # limit access to members of a github organisation:
  github-org: "n1analytics"
  #email-domain: "*"  
  upstream: "file:///dev/null"  
  http-address: "0.0.0.0:4180"  

ingress:
  enabled: true
  path: /oauth2
  hosts:  
  - foo.bar.com  
  annotations:  
    kubernetes.io/ingress.class: nginx  
    certmanager.k8s.io/issuer: letsencrypt
  tls:  
  # cert-manager will use the letsencrypt issuer to create the TLS secret in the namespace if it doesn't exist
  - secretName: foo-bar-com-tls  
    hosts:  
    - foo.bar.com

Note that you can restrict access by email domain and/or GitHub organisation. When you've replaced the placeholders with your own values, make it so:

$ helm install stable/oauth2-proxy --namespace=kube-system -f values.yaml --name=k8s-dash-oauth2
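
If all went well, the proxy pod and its /oauth2 ingress should now exist (assuming the chart's default app label):

$ helm status k8s-dash-oauth2
$ kubectl --namespace=kube-system get pods,ingress -l app=oauth2-proxy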

Update service to require authorisation

On the Kubernetes Dashboard side we need to enable and configure the ingress. Update your dashboard's values file:

ingress:
  enabled: true
  annotations:
    certmanager.k8s.io/issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-signin: https://foo.bar.com/oauth2/start
    nginx.ingress.kubernetes.io/auth-url: https://foo.bar.com/oauth2/auth
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/ingress.allow-http: "false"
  hosts:
  - foo.bar.com
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-bar-com-tls

These ingress annotations ensure that:

  • the dashboard is only accessed via TLS, both from the client to the ingress controller and from the ingress controller to the service
  • cert-manager will be responsible for keeping the TLS certificate up to date

Perform an upgrade to enable the ingress point:

$ helm upgrade --namespace=kube-system k8s-dash stable/kubernetes-dashboard -f values.yaml
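
Issuing the certificate can take a minute or two. With the ingress-shim annotations used above, cert-manager creates a Certificate resource named after the TLS secret, so you can watch its progress (names here match the values above; adjust for your setup):

$ kubectl --namespace=kube-system get certificates
$ kubectl --namespace=kube-system describe certificate foo-bar-com-tls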

Now check that the Kubernetes Dashboard is available and protected by a Let's Encrypt certificate and GitHub access control.
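
A quick check from the command line: an unauthenticated request to the dashboard should be redirected (HTTP 302) to the /oauth2/start endpoint configured above, and oauth2-proxy's /oauth2/auth endpoint should return 401 without a valid session cookie (exact behaviour varies a little between ingress controller versions):

$ curl -sI https://foo.bar.com/
$ curl -sI https://foo.bar.com/oauth2/auth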

[Screenshot: the GitHub login screen. Success!]

As a member of the GitHub organisation n1analytics, I can log in and check the deployment in the kube-system namespace:

[Screenshot: the Kubernetes Dashboard reporting that the kubernetes-dashboard deployment went well. Happy deployments on k8s!]

Job done!