Just Give Me the Code

Fine... Here you go.

Motivation

API management is an essential component for all production and even non-production services. We at Northwestern Mutual use it to secure hundreds of microservices deployed to our Kubernetes clusters every day!

If Kubernetes is at all part of your technology stack, then I'm willing to bet that you and your team have spent countless hours researching and deploying open source tools with the goal of creating an innovative and cloud native architecture.

Then you try to address API management. Ideally, you would like an open source tool that is lightweight, easy to deploy, integrates seamlessly with your newly realized cloud native tech stack, and the list goes on. However, you quickly realize that a tool like this is not as easy to find as, say, Kibana.

There are, however, many tools on the market, both vendor and open source, that get the job done, and so you start to make compromises. Let's explore some of the compromises you may have made that distanced your tech stack from the robust cloud native architecture you initially desired.

Note: We'll be analyzing these compromises in the form of antiproblems, a style made popular by Mark Dalgarno.

Compromise #1: Deployment

Remember Kubernetes, that lightweight container management tool that is revolutionizing the way applications are deployed and managed? Well, you won't be able to containerize this one. However, you just remembered that you have a whole bunch of credits from your favorite cloud provider and so you'll just deploy it there and it won't cost you any extra.

Compromise #2: Storage

Who doesn't love a new database! Especially one that you're not already using. Of course, you and your team will need to invest a lot of time provisioning new infrastructure to make it highly available, but that's okay, your business features can wait.

Compromise #3: Integration

I know you and your team just spent a lot of time and dedication implementing cloud native projects such as OpenTracing, Grafana, and InfluxDB. However, since these open source, Cloud Native Computing Foundation hosted projects won't work for your use case, you'll have to deploy and maintain compatible ones. That's okay though, supporting redundant tools will show that you're a well-rounded engineer!

Compromise #4: Application Tier

This compromise is subtle. Depending on where you are on your Kubernetes journey, you may have begun to experiment with Kubernetes DNS and label selectors and how to use them to provide dynamic service discovery. This has multiple use cases, one being canary deployments. However, the tool you've chosen has no context of the architecture that you're using to run the apps that it will be proxying to. That's okay though, because you'll just write a custom service that will do any dynamic routing once the traffic reaches your cluster. This will of course introduce network latency, but a few extra milliseconds never hurt anyone.

Hopefully, if you're still reading, I've convinced you that there is a problem here in need of a solution. Over the past couple of years, we at Northwestern Mutual have matured our cloud native tech stack to a point where we could address this problem.

Introducing Kanali!

Overview

Kanali is an extremely efficient Kubernetes ingress controller with robust API management capabilities. Built using native Kubernetes constructs, Kanali gives you all the capabilities you need when exposing services in production without the need for multiple tools to accomplish them.

Here are some of the main highlights:

  • Kubernetes Native: Kanali extends the Kubernetes API by using Third Party Resources, allowing Kanali to be configured in the same way as native Kubernetes resources.
  • Performance Centric: As a middleware component, Kanali is developed with performance as the highest priority! You could instantly improve your application's network performance by using Kanali.
  • Powerful, Decoupled Plugin Framework: Need to perform complex transformations or integrations with a legacy system? Kanali provides a framework allowing developers to create, integrate, and version control custom plugins without ever touching the Kanali codebase. Read more about plugins here.
  • User-Defined Configurations: Kanali gives you complete control over declaratively configuring how your proxy behaves. Need mutual TLS, dynamic service discovery, mock responses, etc.? No problem! Kanali makes it easy!
  • Robust API Management: Fine-grained API key authorization, quota policies, rate limiting, etc., these are some of the built-in API management capabilities that Kanali provides. In addition, it follows native Kubernetes patterns for API key creation and binding, making it easy and secure to control access to your proxy.
  • Analytics & Monitoring: Kanali uses Grafana and InfluxDB to provide a customizable and visually appealing experience so that you can get real time alerting and visualization around Kanali's metrics. Find out more here!
  • Production Ready: Northwestern Mutual uses Kanali in Production to proxy, manage, and secure all Kubernetes hosted services.
  • Easy Installation: Kanali does not rely on an external database, infrastructure agents or workers, dedicated servers, etc. Kanali is deployed in the same manner as any other service in Kubernetes. Find installation instructions here.
  • OpenTracing Integration: Kanali integrates with OpenTracing, hosted by the Cloud Native Computing Foundation, which provides consistent, expressive, vendor-neutral APIs allowing you to trace the entire lifecycle of a request. Jaeger, a distributed tracing system open sourced by Uber Technologies, is supported out of the box, providing a visual representation of your traces.

In its most basic form, Kanali can function as a Kubernetes ingress controller where upstream services can be both statically and dynamically defined.

Here is an example configuration that statically defines the service that our traffic will be proxied to.

apiVersion: kanali.io/v1  
kind: ApiProxy  
metadata:  
  name: example-one
  namespace: application
spec:  
  path: /api/v1/example-one
  target: /
  service:
    name: example-one
    port: 8080

Here is another configuration that dynamically defines our upstream service. In addition, we've defined an SSL connection between Kanali and our upstream service. Kanali requires the use of the kubernetes.io/tls secret type for this; the presence of a tls.ca key in the secret determines whether this SSL connection is one way or two way. Traffic will be routed to a service that carries the two labels defined by this config. Note the special header label, which matches against an HTTP header on the incoming request: the value of the specified service label must match the value of that header. This can be very useful for canary deployments.

apiVersion: kanali.io/v1  
kind: ApiProxy  
metadata:  
  name: example-two
  namespace: application
spec:  
  path: /api/v1/example-two
  target: /
  service:
    port: 8080
    labels:
    - name: app
      value: example-two
    - name: release
      header: deployment
  ssl:
    secretName: example-two
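
For reference, here is a sketch of what the referenced kubernetes.io/tls Secret might look like. The base64 values below are placeholders, not real certificate material; per the description above, the optional tls.ca key is what enables two-way (mutual) TLS:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-two
  namespace: application
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
  # Optional: presence of tls.ca switches the connection to two-way TLS
  tls.ca: <base64-encoded CA certificate>
```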

Kanali can also be used to define mock responses. You'll notice that the following ApiProxy is the same as the first with one additional field. When Kanali is started with the --mock-enabled setting turned on, the mock response set, defined in a ConfigMap, will be used. This allows developers to receive data from a service that may still be under development.

apiVersion: kanali.io/v1  
kind: ApiProxy  
metadata:  
  name: example-three
  namespace: application
spec:  
  path: /api/v1/example-three
  target: /
  mock:
    configMapName: example-three
  service:
    name: example-three
    port: 8080
---
apiVersion: v1  
kind: ConfigMap  
metadata:  
  name: example-three
  namespace: application
data:  
  response: |-
    [{
        "route": "/health",
        "code": 200,
        "method": "GET",
        "body": {
            "msg": "all systems are up and running"
        }
    },
    {
        "route": "/accounts",
        "code": 200,
        "method": "GET",
        "body": [{
            "id": "1",
            "balance": "$500.00"
        },
        {
            "id": "2",
            "balance": "$1000.00"
        }]
    }]
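
The matching behavior implied by this ConfigMap can be sketched as follows. This is an illustrative reimplementation of how a mock response set might be matched against an incoming request, not Kanali's actual code:

```python
import json

# The mock response set from the ConfigMap above
MOCK_RESPONSES = json.loads("""
[{"route": "/health", "code": 200, "method": "GET",
  "body": {"msg": "all systems are up and running"}},
 {"route": "/accounts", "code": 200, "method": "GET",
  "body": [{"id": "1", "balance": "$500.00"},
           {"id": "2", "balance": "$1000.00"}]}]
""")

def match_mock(method: str, route: str):
    """Return (status code, body) for the first mock entry whose
    method and route both match, or a 404 if none do."""
    for entry in MOCK_RESPONSES:
        if entry["method"] == method and entry["route"] == route:
            return entry["code"], entry["body"]
    return 404, {"msg": "no mock response defined"}

print(match_mock("GET", "/health"))  # -> (200, {'msg': 'all systems are up and running'})
```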

Let's now explore how Kanali can be used to provide API management for your proxies. Kanali has a very powerful plugin system. Anyone can create a plugin and add it to their proxy. Plugins are decoupled from Kanali and can be version controlled. Each plugin has the opportunity to intercept an HTTP request both before and after the request is proxied to an upstream service. More details around custom plugin development will be the topic of a future post, but complete documentation can be found here.

The apiKey plugin is included as part of the Kanali distribution, and we are using it here to secure this proxy. Of course, we now need an actual API key that clients can use to interact with this proxy, so let's look at how to configure that.

apiVersion: kanali.io/v1  
kind: ApiProxy  
metadata:  
  name: example-four
  namespace: application
spec:  
  path: /api/v1/example-four
  target: /
  service:
    port: 8080
    name: example-four
  plugins:
  - name: apiKey
    version: v1.0.0

The ApiKey resource defines an RSA encrypted API key. To assist in API key provisioning, Kanalictl, a CLI tool for Kanali, was created. Here's how we can provision a new API key using this tool. The resulting ApiKey configuration can now be applied to your Kubernetes cluster.

$ kanalictl generate apikey --out-file apikey.yml --name bobs-apikey --namespace application --encrypt_key public.pem
Here is your api key (you will only see this once): 07c31e1cb245cb95ee846b8a55cb5388  
Corresponding Kubernetes config written to apikey.yml  
$ cat apikey.yml
apiVersion: kanali.io/v1  
kind: ApiKey  
metadata:  
  name: bobs-apikey
  namespace: application
spec:  
  data: 6631a207daa769bc26bfafa00de39894926e030c2116afba6ef63f7f0a113fae3d2b6065c23f115280f53dcfa6797cd2d748e25e31a68e6bb442eb9931b00391cd21bc092d683431c2f7a69a80d11fab43216d44e84a371d61233284e48403aadb7c81259615f82f011c1c6e0bbb00376313191e2672e6baab30fffe124b9e89461f03bf0ae8ec519d300d1ac1199ab4df4a52ec1c1bad8d9575633ce7e162e4423b0c4876db23d9bee0b0fc78428e0feed3888c18758b65e730028976438e2978ce958f9a5b4cb9fed9084bf81706ac9azzed17151e9effb28d4ff964f8bba2c50ca72f1dc28671821e83c2e12bcea4acb1096d9430d41fea378587ead7bf8a8e05fbcb72469032f032617b5579b6953db08bcd5f83854ee440bcee0f7336e69e0ec551a42c610796a79fcb9ccf4a352e8f0ea2eb0c13e9b13eaa7c31071e44ba499e284453d6100e9c3eedb123f4230182bb5120905aaa617358e7cf0581d055c340a33e46a54ff611ee438f8b8acc10eb3a6qf1c021e834064d809654b49df856f716f4f3f68d93b9b93698306f6b8ec5c1d205faf444722f6bc685a88ebeeca516585d2bdafa19f639ced0276c59c68786577a3c05c3bee30887564cbf6b91c049b9e289eb69f1e78a80034fd1ae0787f719ee9cda3a68c6589c64c0cffbb2b18a01b77f79223a8c8fbbcb961eb8cf3889bdd4df40e3eaf981096402ca1c
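
For illustration, the plaintext key kanalictl prints is a 32-character hex string, which the data field above stores in RSA-encrypted form. A minimal sketch of generating key material in that format (the RSA encryption step itself is omitted; kanalictl handles it with the supplied public key) might look like:

```python
import secrets

def generate_api_key() -> str:
    """Generate a random 32-character hex API key, matching the
    format kanalictl prints (e.g. 07c31e1cb245cb95ee846b8a55cb5388)."""
    return secrets.token_hex(16)  # 16 random bytes -> 32 hex characters

key = generate_api_key()
print(len(key))  # 32
```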

The pattern Kanali uses to enforce authorization is based heavily on the way Kubernetes handles Role-Based Access Control (RBAC) for its API. By itself, an ApiKey resource does not grant access to any proxy. In order to do this we need to correlate that API key to an ApiProxy. Kanali does this with the ApiKeyBinding resource. The purpose of this resource is to correlate a set of API keys to a single ApiProxy and define the set of fine-grained permissions and policies that each API key is granted for that particular proxy.

In the following config, we are giving bobs-apikey access to an ApiProxy named example-four in the application namespace. We are giving this API key global access to this proxy, with the exception of the /balance subpath. If an incoming request matches this subpath, bobs-apikey will only be allowed to perform GET operations on this ApiProxy. In addition to these fine-grained permissions, we are assigning a quota of 1000 as well as a rate limit of 100 requests per second. Note: quota and rate limiting are alpha features.

apiVersion: kanali.io/v1  
kind: ApiKeyBinding  
metadata:  
  name: example-four
  namespace: application
spec:  
  proxy: example-four
  keys:
  - name: bobs-apikey
    defaultRule:
      global: true
    subpaths:
    - path: /balance
      rule:
        granular:
          verbs:
          - GET
    quota: 1000
    rate:
      unit: seconds
      amount: 100
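
The authorization semantics of this binding can be sketched as the following illustrative logic (the function and structure names are assumptions for the sketch, not Kanali's implementation): a matching subpath rule overrides the default rule; otherwise the default rule applies.

```python
def is_authorized(binding: dict, path: str, verb: str) -> bool:
    """Evaluate a simplified ApiKeyBinding rule set: a subpath rule,
    when matched, overrides the default rule for that request."""
    for sub in binding.get("subpaths", []):
        if path.startswith(sub["path"]):
            granular = sub["rule"].get("granular", {})
            return verb in granular.get("verbs", [])
    return binding.get("defaultRule", {}).get("global", False)

# Mirrors the example-four binding for bobs-apikey
binding = {
    "defaultRule": {"global": True},
    "subpaths": [{"path": "/balance",
                  "rule": {"granular": {"verbs": ["GET"]}}}],
}

print(is_authorized(binding, "/balance", "POST"))   # False: only GET allowed on /balance
print(is_authorized(binding, "/accounts", "POST"))  # True: default rule grants global access
```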

Demo

Note: References to complete documentation for Kanali can be found at the end of this post.

Let's start by deploying Kanali along with Grafana, InfluxDB, and Jaeger locally on our workstation:

$ git clone git@github.com:northwesternmutual/kanali.git && cd kanali
$ minikube start
$ ./scripts/install.sh #wait until all pods are in running state
$ kubectl apply -f ./examples/exampleOne.yaml
$ curl $(minikube service kanali --url --format="https://{{.IP}}:{{.Port}}")/api/v1/example-one
$ open $(minikube service kanali-grafana --url)/dashboard/file/kanali.json
$ open $(minikube service jaeger-all-in-one --url)

Let's deploy the example-four proxy that we explored earlier. Note that this corresponds to the seventh example in the official tutorial, which is why it's referenced below.

$ kubectl apply -f https://raw.githubusercontent.com/northwesternmutual/kanali/master/examples/exampleSeven.yaml
namespace "application" configured  
deployment "example-seven" configured  
service "example-seven" configured  
apiproxy "example-seven" configured  
apikey "bobs-apikey" configured  
apikeybinding "example-seven" configured  
$ curl $(minikube service kanali --url --format="https://{{.IP}}:{{.Port}}")/api/v1/example-seven/foo --insecure
{"code":401,"msg":"apikey not found in request"}
$ curl -H "apikey: 0HfVWylwxchODd3s4A7D9Zoel0Xo83iQ" $(minikube service kanali --url --format="https://{{.IP}}:{{.Port}}")/api/v1/example-seven/foo --insecure
{"msg":"bar"}
$ curl -H "apikey: 0HfVWylwxchODd3s4A7D9Zoel0Xo83iQ" $(minikube service kanali --url --format="https://{{.IP}}:{{.Port}}")/api/v1/example-seven/balance --insecure
{"code":401,"msg":"api key unauthorized"}

So far we've focused solely on Kanali, but now let's look at the developer tooling that Kanali integrates with. I mentioned earlier that Kanali provides OpenTracing integration. Jaeger, in my opinion, is the best open source project implementing the OpenTracing API, providing an amazing visual experience. Yuri Shkuro, the co-founder of Jaeger, has an amazing presentation on the importance of distributed tracing as well as a brief overview of Jaeger. Find his presentation here. Here is what the trace looks like for one of the requests we executed above:

Jaeger

Here is a snippet from the Grafana dashboard that the Kanali project provides. Grafana is highly configurable, making it easy to add and remove panels to enhance your analytics and monitoring experience.

Grafana

Future Work

Kanali is a very active open source project. You can track the project's development features here. If you would like to contribute, we welcome PRs.

Everyone loves cool UIs, and Kanali is no exception. A UI is under development, built with React, that will allow you to create API keys, visualize API key permissions, and much more.

Conclusion

We believe that Kanali provides a missing piece in a robust cloud native technology stack. API management should be lightweight, easy to deploy and maintain, and provide developers with robust open source tooling. Since Kanali is native to Kubernetes, it is deployed the same way as all of your other Kubernetes deployments and requires no additional external database. Give Kanali a try and let us know how it has benefitted your cloud native architecture.

Resources

Using Kubernetes Custom Resources to Provide Cloud Native API Management

Kanali GitHub Project

Kanali Tutorial

Kanali Config Documentation