Just Give Me the Code...

Fine... here you go: https://github.com/frankgreco/k8s-config-policy

Motivation

Kubernetes is one of the most active projects amongst the open source community today. It provides an extremely robust platform that orchestrates containers in a scalable and highly available way. However, finding a happy medium between infrastructure and application teams can be tricky. On one hand you want to put more power into the hands of your developers, but on the other you don’t want them to deploy apps that will negatively affect other applications running in your cluster.

To help motivate this problem, let's explore some examples in the form of user stories:

  • As an infrastructure engineer, I want to ensure that none of the applications deployed in the clusters I manage consume an excessive amount of resources.
  • As a security lead, I want to ensure that applications are not exposing unnecessary ports on the hosts (assume all cluster traffic is proxied via a Kubernetes ingress controller).
  • As a test engineer who uses Kubernetes labels to facilitate canary deployments, I want to ensure that application teams apply the proper pre-production labels to their Kubernetes resources.
  • As a security engineer, I need to make sure that pods do not run with a privileged security context or use the host network.

Now, you could hire someone to monitor all of this manually, or you could automate it entirely! One of the lesser-known benefits of Kubernetes is its extensibility. Just as you create a Pod resource to run a container, you can create your own resource and define how it behaves.

I hacked together a proof-of-concept (PoC) project that provides a robust solution to the problems described above. The project is written in Go and leans heavily on the Kubernetes client libraries.

By now you might have a lot of questions about how to extend the Kubernetes API and create custom resources, so let's get our hands dirty and go into the details.

Demo

If you want to follow along, I'll walk us through it from scratch.

First, let's download Minikube, a tool that bootstraps a lightweight Kubernetes cluster on your local development machine (the commands below assume macOS).

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.17.1/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Now let's install kubectl, the Kubernetes command line interface.

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

Time to start our local Kubernetes cluster (this might take a few minutes):

$ minikube start
Starting local Kubernetes cluster...  
Starting VM...  
SSH-ing files into VM...  
Setting up certs...  
Starting cluster components...  
Connecting to cluster...  
Setting up kubeconfig...  
Kubectl is now configured to use the cluster.  

Now suppose we are an application team that wants to deploy a new application represented by the following configuration file:

---
apiVersion: extensions/v1beta1  
kind: Deployment  
metadata:  
  labels:
    app: example
  name: example
  namespace: default
spec:  
  replicas: 1
  selector:
    matchLabels:
      k8s-app: example
  template:
    metadata:
      labels:
        k8s-app: example
    spec:
      containers:
      - name: hello-world
        imagePullPolicy: Always
        image: fbgrecojr/helloworld:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1  
kind: Service  
metadata:  
  name: example
  namespace: default
  labels:
    app: example
  annotations:
    source: "https://github.com/frankgreco/k8s-config-policy"
spec:  
  selector:
    k8s-app: example
  ports:
  - name: application-port
    port: 8080
    nodePort: 30002
  type: NodePort

Let's deploy and test the application in our cluster with the following commands:

$ kubectl apply -f examples/application.yaml
deployment "example" created  
service "example" created  
$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE  
example      10.0.0.115   <nodes>       8080:30002/TCP   16m  
kubernetes   10.0.0.1     <none>        443/TCP          35m  
$ curl 192.168.99.100:30002/foo
{"msg":"bar"}

At this point, if you have ever been hands-on with Kubernetes, everything we have done so far should look familiar.

It's time to deploy the bulk of our addition: a pod that hosts the implementation of our custom resource. I won't walk through the Go code in this blog post, but at a high level it is responsible for the following items (sketched in code right after this list):

  • Every time a new ConfigPolicy is created, we need to retroactively apply its rules to the resources that already exist in the cluster.
  • Every time a new resource is created, we need to check whether any rules apply to it and evaluate the resource against them.
  • If we find a rule violation, we automatically create a GitHub issue and, if configured to do so, remove the resource from the cluster.
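
Here is a rough sketch of that reconciliation loop. Everything in it is illustrative: the Rule and Event types, the in-memory event channel, and the printed "violation" are hypothetical stand-ins for the project's actual code, which works against live Kubernetes watches.

package main

import (
    "fmt"
    "regexp"
)

// Rule mirrors one entry in a ConfigPolicy's spec.rules (illustrative only).
type Rule struct {
    Template string // path into the resource, e.g. ".spec.type"
    Regex    string // pattern that constitutes a violation
    Remove   bool   // should the offending resource be deleted?
}

// Event is a simplified "resource created" notification.
type Event struct {
    Kind  string
    Name  string
    Value string // the value found at Rule.Template for this resource
}

// reconcile checks a newly created resource against every rule for its Kind.
func reconcile(ev Event, rules map[string][]Rule) {
    for _, rule := range rules[ev.Kind] {
        if regexp.MustCompile(rule.Regex).MatchString(ev.Value) {
            fmt.Printf("violation: %s/%s matches %q\n", ev.Kind, ev.Name, rule.Regex)
            // The real controller would open a GitHub issue here and, if
            // rule.Remove is set, delete the resource through the API server.
        }
    }
}

func main() {
    rules := map[string][]Rule{
        "Service": {{Template: ".spec.type", Regex: "NodePort", Remove: true}},
    }

    // Stand-in for a Kubernetes watch: a single buffered event.
    events := make(chan Event, 1)
    events <- Event{Kind: "Service", Name: "example", Value: "NodePort"}
    close(events)

    for ev := range events {
        reconcile(ev, rules)
    }
}

With that mental model in place, here is the configuration that deploys the controller into the cluster: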
---
apiVersion: extensions/v1beta1  
kind: Deployment  
metadata:  
  name: k8s-config-policy
  namespace: default
spec:  
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: k8s-config-policy
    spec:
      containers:
      - name: k8s-audit
        image: fbgrecojr/k8s-config-policy:latest
        imagePullPolicy: Always

$ kubectl apply -f config-policy.yaml
deployment "k8s-config-policy" created  
$ kubectl get pods
NAME                                 READY     STATUS    RESTARTS   AGE  
example-3651323692-50jw1             1/1       Running   0          20m  
k8s-config-policy-1879484915-0c1jm   1/1       Running   0          8s  

Awesome! Our ConfigPolicy controller is deployed. Now it's time to start extending the Kubernetes API.

A ThirdPartyResource is the Kubernetes resource that facilitates the creation of a new API object. Let's see how it changes the Kubernetes API. Initially you might think we would need to kubectl apply the ThirdPartyResource ourselves to create it, and ordinarily you would be right. However, the ConfigPolicy controller that we deployed does this for us automatically.

---
apiVersion: extensions/v1beta1  
kind: ThirdPartyResource  
metadata:  
  name: config-policy.k8s.io
description: A specification for creating rules on configuration files
versions:  
- name: v1

$ minikube ssh
~$ curl 127.0.0.1:8080/apis/k8s.io/v1/
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "k8s.io/v1",
  "resources": [
    {
      "name": "configpolicies",
      "namespaced": true,
      "kind": "ConfigPolicy"
    }
  ]
}

Whoa! What just happened? We just extended the Kubernetes API and added our own API object representing a new Kind.
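
Behind the scenes, the controller registered the ThirdPartyResource on our behalf. Conceptually, that registration is just a single POST of the manifest above to the API server. Here is an illustrative sketch that uses the same unauthenticated local endpoint as the curl; the real controller goes through the Kubernetes client libraries with proper in-cluster credentials:

package main

import (
    "bytes"
    "log"
    "net/http"
)

func main() {
    // The same ThirdPartyResource manifest shown above, as JSON.
    tpr := []byte(`{
        "apiVersion": "extensions/v1beta1",
        "kind": "ThirdPartyResource",
        "metadata": {"name": "config-policy.k8s.io"},
        "description": "A specification for creating rules on configuration files",
        "versions": [{"name": "v1"}]
    }`)

    // POST it to the API server (here, the insecure local port used above).
    resp, err := http.Post(
        "http://127.0.0.1:8080/apis/extensions/v1beta1/thirdpartyresources",
        "application/json",
        bytes.NewReader(tpr),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("registration status:", resp.Status)
}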

The stage is set for us to start using our new custom resource. Let's create a ConfigPolicy. In the following example, we declare that any Service deployed in the default namespace must not be of type NodePort. If one is, that Service will not only be removed, but an issue will also be opened on GitHub with the included information.

---
apiVersion: k8s.io/v1  
kind: ConfigPolicy  
metadata:  
  name: policy
  namespace: default
spec:  
  apiVersion: v1
  kind: Service
  rules:
  - remove: true
    issue:
      title: Service Exposes NodePort
      body:
        issue: Due to security reasons, you are not allowed to expose a `NodePort` in this namespace. Services must be accessed via a cluster virtual ip address.
        code: "type: NodePort"
        resolution: Please remove this option and redeploy
    policy:
      template: ".spec.type"
      regex: "NodePort"
$ kubectl create -f examples/rule.yaml
configpolicy "policy8" created  
$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE  
kubernetes   10.0.0.1     <none>        443/TCP          35m  

As expected, our Service has been deleted, and the following image shows how the issue appears in GitHub:

GitHub Issue
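
Opening the issue itself requires nothing exotic either: it is one authenticated POST to the GitHub REST API with the title and body from the rule. A minimal sketch follows; the repository path and the GITHUB_TOKEN environment variable are placeholders, not the project's actual configuration:

package main

import (
    "bytes"
    "encoding/json"
    "log"
    "net/http"
    "os"
)

func main() {
    // Build the issue from the rule's issue.title and issue.body fields.
    payload, err := json.Marshal(map[string]string{
        "title": "Service Exposes NodePort",
        "body":  "Due to security reasons, you are not allowed to expose a `NodePort` in this namespace. Services must be accessed via a cluster virtual ip address.",
    })
    if err != nil {
        log.Fatal(err)
    }

    // POST /repos/{owner}/{repo}/issues creates the issue. The repository
    // and token below are placeholders.
    req, err := http.NewRequest(
        "POST",
        "https://api.github.com/repos/your-org/your-repo/issues",
        bytes.NewReader(payload),
    )
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("issue creation status:", resp.Status)
}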

Next Steps

This is a project that I am actively working on, and I will keep this blog post up to date as it evolves.

I would love to hear your feedback. To me this concept sounds like it would add value, but do you agree? What challenges do you face that this solution would help mitigate?