Just Give Me the Code

Fine...here you go.

Motivation

One of the most frustrating things a developer has to deal with is the lack of a productive local development environment. Normally, this problem is isolated to new developers joining a team. However, as teams start to use Docker and Kubernetes to deploy their applications, it can leave even experienced developers needing to rethink how they develop locally.

One of the reasons Docker is so amazing is that, as long as you have a system with the Docker Engine running, your container will behave the same way every single time! Because of this, if a developer can make sure an application performs correctly in a Docker container on their local machine, they can be confident it will function the same way in production.

While this is amazing, it is not a good idea to run Docker containers in production without an orchestration tool like Kubernetes. However, when using Kubernetes, there are extra things that must be taken into consideration to ensure that an application acts the same way in production as it does locally.

Requirements

Before we start our demo, let's make sure we have everything that we need installed. Currently this solution only supports macOS and Linux, but I plan on posting a solution for Windows shortly.

We need four tools for this solution:

  1. Minikube. Find full installation instructions here.
  2. Kubectl. This is the command line interface for Kubernetes. Find full installation instructions here.
  3. VirtualBox. This is used as the VM provider for Minikube. Note that Minikube supports other VM providers. Full installation instructions can be found here.
  4. Visual Studio Code. Find full installation instructions here.

Demo

Let's explore a common use case. Suppose we have an application that depends on configuration that is read from the file system. On a developer's machine, we might run our application container with the command:

docker run -v $(pwd)/config/:/app/config/ owner/image:latest  

However, with Kubernetes, we may be dynamically mounting this configuration as a volume generated from a ConfigMap. Hence, we need a way to do this locally without adding unnecessary overhead.
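
For example, the Kubernetes-native way to provide that same configuration is to turn the contents of config/ into a ConfigMap and mount it as a volume, which is essentially what we'll do later in this demo. A quick sketch (the name app-config is just an illustration):

# Create a ConfigMap from the same local config directory used in the docker run command above
kubectl create configmap app-config --from-file=./config/

# Each file in the directory becomes a key in the ConfigMap's data
kubectl describe configmap app-config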

Minikube is a tool that makes it easy to run Kubernetes locally. However, Minikube "out of the box" is not enough to give us an effective development environment. Why? Suppose we have just added some code to our application and we would like to test it. To do this, we would have to build a new Docker image, push it to our registry, SSH into Minikube, pull the new image, and restart our Kubernetes Pod. We can streamline this a bit by pointing our local Docker client at the Docker daemon running inside the Minikube VM. We can do that by executing the following command:

eval $(minikube docker-env)  

However, even with this configured, we would still have to build a new Docker image and restart our Kubernetes Pod every time we wanted to test a change.
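
Concretely, even this streamlined inner loop looks something like the following (a sketch using the image tag and label from the demo below; it assumes the Pod's imagePullPolicy lets it use the locally built image):

# Point the local Docker client at the daemon inside the Minikube VM
eval $(minikube docker-env)

# Build the image; it lands directly in Minikube's daemon, so no push/pull is needed
docker build -t fbgrecojr/helloworld:node .

# Delete the Pod so the Deployment recreates it with the fresh image
kubectl delete pod -l k8s-app=heymars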

So how do we create an effective, lightweight, and productive development environment using Minikube? Let's walk through a detailed example.

The example that we'll be working with is a Node.js web server. Let's take a look at the code:

var express = require('express');  
var config  = require('config')  
var app     = express();

app.get('/', function (req, res) {  
   res.json(config.get('message'));
})

var server = app.listen(8080, function () {  
   var address = server.address()
   console.log("Example app listening at http://%s:%s", address.address, address.port)
});

Our package.json includes an npm script that uses nodemon to watch our application for changes.

{
  "name": "heymars",
  "version": "0.0.1",
  "description": "heymars description",
  "main": "index.js",
  "scripts": {
    "dev": "nodemon --legacy-watch --debug=5858 ./index.js"
  },
  "devDependencies": {
    "nodemon": "1.11.0"
  },
  "dependencies": {
    "config": "1.25.1",
    "express": "4.15.2"
  }
}
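
As a sanity check, you can run this script directly on your machine before involving any containers (this assumes Node.js and npm are installed locally and that a config/default.json exists):

npm install   # installs express, config, and nodemon
npm run dev   # starts the server under nodemon with the legacy debugger on port 5858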

Next, here is the Dockerfile that we will use to build an image for our application. Note that we do not include a default command in the image, so we'll have to remember to specify one when we start the container.

FROM node:6.9.5

WORKDIR /app

COPY ./ /app/

RUN npm install
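
Since the image has no default command, a quick way to sanity-check it outside of Kubernetes is to build it and pass the start command ourselves. This is just a sketch; it assumes a local config/ directory containing a default.json like the one in the ConfigMap below:

# Build the image with the same tag the Deployment below uses
docker build -t fbgrecojr/helloworld:node .

# Run it, supplying the start command and mounting a config directory
docker run -p 8080:8080 -v $(pwd)/config/:/app/config/ fbgrecojr/helloworld:node npm run dev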

Since we are developing inside of a Kubernetes cluster, we need a Kubernetes configuration that deploys our application. Let's dissect the different Kubernetes resources that we'll use for our application:

ConfigMap: This is what we'll use to dynamically inject the configuration into our application container.
Deployment: A Kubernetes Deployment defines a desired state that the cluster continually works to maintain for our Pods. A Pod is an abstraction around one or more application containers. For your own application, replace the hello world Docker image with your own. Note that as part of the Pod spec, we include the command we want our container to start with, as well as the volumes we want mounted into it. In particular, we mount the directory on our local machine where our code lives into the Pod.
Service: A Service is an abstraction over our Pods and is the front door to our application. As part of our Service we expose two different ports: the first is our application port, which we will use to invoke our web service; the second is the Node debugging port, which we will use shortly.

---
kind: ConfigMap  
apiVersion: v1  
metadata:  
  name: heymars
  namespace: default
data:  
  default.json: |-
    {
        "message": "Hey Mars!"
    }
---
apiVersion: extensions/v1beta1  
kind: Deployment  
metadata:  
  labels:
    app: heymars
  name: heymars
  namespace: default
spec:  
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heymars
  template:
    metadata:
      labels:
        k8s-app: heymars
    spec:
      containers:
      - name: hello-world
        imagePullPolicy: Always
        image: fbgrecojr/helloworld:node
        command:
        - npm
        args:
        - run
        - dev
        ports:
        - containerPort: 8080
        - containerPort: 5858
        volumeMounts:
        - name: code
          mountPath: /app
        - name: configuration
          mountPath: /app/config
      volumes:
      - name: code
        hostPath:
          path: /Users/gre9521/Documents/projects/kubernetes/minikube
      - name: configuration
        configMap:
          name: heymars
---
apiVersion: v1  
kind: Service  
metadata:  
  name: heymars
  namespace: default
  labels:
    app: heymars
spec:  
  selector:
    k8s-app: heymars
  ports:
  - name: app-port
    port: 8080
    nodePort: 30005
  - name: debug-port
    port: 5858
    nodePort: 30006
  type: NodePort

We include a .dockerignore file so that our dependencies are installed inside the Docker image during the build rather than being copied in from the host.

node_modules/  

In this demo, we'll be using Visual Studio Code as our text editor because it allows us to do live debugging of our Node application. In order to enable live debugging, we need to tell Visual Studio Code which address and port to attach the debugger to. To do this we need a .vscode/launch.json file. The value of the address field is the result of executing $ minikube ip, and the port is the debug port we exposed in our Kubernetes Service.

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "attach",
            "address": "192.168.99.100",
            "name": "Attach to Process",
            "port": 30006
        }
    ]
}
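
To find the right values for your own machine (they will differ from the ones above), you can query Minikube and, once the Service has been applied to the cluster below, the Service itself:

# The Minikube VM's IP goes in the "address" field
minikube ip

# The debug NodePort (30006 in the Service above) goes in the "port" field
kubectl get service heymars -o jsonpath='{.spec.ports[1].nodePort}'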

Time to start Minikube and deploy our application! The first thing we'll do is start Minikube. This might take a few minutes.

$ minikube start
Starting local Kubernetes cluster...  
Starting VM...  
SSH-ing files into VM...  
Setting up certs...  
Starting cluster components...  
Connecting to cluster...  
Setting up kubeconfig...  
Kubectl is now configured to use the cluster.  

Now that Minikube is up and running, the next step is to deploy our application. We'll do this by applying the Kubernetes configuration we have above to our cluster.

$ kubectl apply -f kubernetes.yaml
configmap "heymars" created  
deployment "heymars" configured  
service "heymars" configured  

Now that our application is running let's test it to make sure that it's working:

$ curl $(minikube ip):30005
"Hey Mars!"

Sweet, it works! However, no local development setup is complete without a way to quickly have changes reflected in our running application. To do this, let's attach to our Pod's stdout:

$ kubectl attach $(kubectl get pods -o jsonpath='{.items[*].metadata.name}' | grep heymars)
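
If attach doesn't show any output, streaming the Pod's logs works just as well (same Pod lookup as above):

kubectl logs -f $(kubectl get pods -o jsonpath='{.items[*].metadata.name}' | grep heymars)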

Now, let's make a change. We'll modify our Express route handler to log a message when it receives a new request:

app.get('/', function (req, res) {  
   console.log('new request!');
   res.json(config.get('message'))
})

The moment we save the file, we'll see our app restart, and when a new request comes in, our log message appears:

If you don't see a command prompt, try pressing enter.  
[nodemon] restarting due to changes...
[nodemon] starting `node --debug=5858 ./index.js`
Debugger listening on [::]:5858  
Example app listening at http://:::8080  
new request!  

Awesome! Let's take this one step further. Suppose we want to debug an incoming request. To do this, let's first open our project in Visual Studio Code. Select the debug view in the left column (fourth icon down), then attach to the debug process by clicking the play button. Now just set a breakpoint and try invoking the application.
(Screenshot: the Visual Studio Code debug view attached to the Node process)

The moment we try to invoke our application, our request will be paused, and we will have access to a full debugging experience through Visual Studio Code.

(Screenshot: a request paused at the breakpoint in Visual Studio Code)

Now that is a complete developer experience!