Introducing Eric Koslow, cofounder of Lattice. At Lattice, Eric creates software to help companies set and manage their goals. Eric will be writing a series of guest posts on setting up Kubernetes on AWS. -Mackenzie

Here at Lattice, we recently migrated from a Heroku-based infrastructure to running our own Kubernetes cluster on AWS.

Why leave Heroku?

Lattice's code is written as a series of microservices, each responsible for a small part of the greater whole. This keeps every service easy to understand, decoupled, and easy to change without affecting the rest of the system.

But, as of this writing, running microservices on Heroku is nearly impossible without an enterprise account. The reason? Latency. Plain and simple, communication between Heroku applications takes forever -- at least in computer terms. On average we were seeing 50-100ms of latency on every request.

This meant end users would see request times of 100-1200ms once all of the services had finished talking to one another, which made Lattice feel very sluggish and unresponsive. When we got an email saying a customer was leaving the platform because of our slow speeds, we knew it was time for a change.

Why switch to K8S?

Our friends at StrongIntro were already using Kubernetes on GCE and had only good things to say. Then we met the Redspread team -- they too suggested we switch, and even offered to help out. How could we say no at that point?

Why AWS?

We already had a hybrid infrastructure going on between Heroku and AWS. We used RDS for our databases, SNS + Lambda for asynchronous tasks, and SES for our emails. We didn't want to lose all of the tools AWS gave us, but also wanted the ease that Kubernetes promised.

Setting up the Cluster

The first thing to do is set up a K8S cluster. After doing some research, we found that the easiest way to set up a new cluster that fit our needs was CoreOS's kube-aws tool.

Note: For me, the "Launch Stack" button didn't work for kube-aws, but the command line did.

kube-aws reads a cluster.yaml file, creates a CloudFormation stack on AWS, then runs it. This is what our cluster.yaml file looks like:

# Unique name of Kubernetes cluster. In order to deploy
# more than one cluster into the same AWS account, this
# name must not conflict with an existing cluster.
clusterName: kubernetes-production

# Name of the SSH keypair already loaded into the AWS
# account being used to deploy this cluster.
keyName: jam-master

# Region to provision Kubernetes cluster
region: us-west-1

# Availability Zone to provision Kubernetes cluster
availabilityZone:

# DNS name routable to the Kubernetes controller nodes
# from worker nodes and external clients. The deployer
# is responsible for making this name routable.
externalDNSName: kube-prod

# Instance type for controller node
controllerInstanceType: m3.medium

# Disk size (GiB) for controller node
controllerRootVolumeSize: 30

# Number of worker nodes to create
workerCount: 2

# Instance type for worker nodes
workerInstanceType: m4.large

# Disk size (GiB) for worker nodes
workerRootVolumeSize: 30

# Location of kube-aws artifacts used to deploy a new
# Kubernetes cluster. The necessary artifacts are already
# available in a public S3 bucket matching the version
# of the kube-aws tool. This parameter is typically
# overwritten only for development purposes.
artifactURL: https://coreos-kubernetes.s3.amazonaws.com/

# CIDR for Kubernetes VPC
vpcCIDR: "10.0.0.0/16"

# CIDR for Kubernetes subnet
instanceCIDR: "10.0.0.0/24"

# IP Address for controller in Kubernetes subnet
controllerIP: 10.0.0.50

# CIDR for all service IP addresses
serviceCIDR: "10.3.0.0/24"

# CIDR for all pod IP addresses
podCIDR: "10.2.0.0/16"

# IP address of Kubernetes controller service 
# (must be contained by serviceCIDR)
kubernetesServiceIP: 10.3.0.1

# IP address of Kubernetes dns service 
# (must be contained by serviceCIDR)
dnsServiceIP: 10.3.0.10

Our cluster configuration is made for a production environment and uses slightly bigger EC2 instances than the defaults. Let's go over some of the important values that we found to be undocumented:

keyName: This is the name of an existing EC2 key pair in the target region. When we first ran the script we assumed it would create one for us. We were wrong.
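
If you don't already have a key pair in the target region, one quick way to create one is with the AWS CLI. This is our own aside rather than part of kube-aws, and the key name is just the example from our cluster.yaml:

# Create a key pair in the cluster's region and save the private key
# locally. The name must match keyName in cluster.yaml.
aws ec2 create-key-pair --key-name jam-master --region us-west-1 \
  --query 'KeyMaterial' --output text > jam-master.pem
chmod 400 jam-master.pem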

externalDNSName: This is the DNS name that kube-aws will create TLS certificates for. When accessing your cluster's API you must use this DNS name to be properly authenticated -- this is where you connect to your cluster.

Once that is all in place, just run kube-aws up and in about 10 minutes you'll have your own K8S cluster!
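
For reference, everything runs from the directory containing cluster.yaml. The subcommands below are the ones in the kube-aws release we used; check kube-aws --help if yours differs:

# Run from the directory containing cluster.yaml
kube-aws up        # creates the CloudFormation stack and boots the cluster
kube-aws status    # prints cluster details, including the controller's IP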

Before we can access our cluster we need to add that externalDNSName to our /etc/hosts (Linux) or /private/etc/hosts (OS X) file.

# Add the IP of the generated controller
52.8.X.X    kube-prod
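
If you don't want to hunt for that IP in the EC2 console, one way to find it is to filter instances by the CloudFormation stack tag (in our setup the stack is named after clusterName) and match the controller's private IP, 10.0.0.50:

# List private/public IPs for every instance in the stack;
# the controller is the one with private IP 10.0.0.50
aws ec2 describe-instances --region us-west-1 \
  --filters "Name=tag:aws:cloudformation:stack-name,Values=kubernetes-production" \
  --query 'Reservations[].Instances[].[PrivateIpAddress,PublicIpAddress]' \
  --output table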

Finally, to test it all out, run:

kubectl --kubeconfig=clusters/kubernetes-production/kubeconfig get nodes

And you should see something like:

NAME                                       LABELS                                                            STATUS    AGE
ip-10-0-0-195.us-west-1.compute.internal   kubernetes.io/hostname=ip-10-0-0-195.us-west-1.compute.internal   Ready     3h
ip-10-0-0-196.us-west-1.compute.internal   kubernetes.io/hostname=ip-10-0-0-196.us-west-1.compute.internal   Ready     3h
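
Typing --kubeconfig on every command gets old fast. You can point kubectl at the generated config once per shell session instead:

# Point kubectl at the kubeconfig kube-aws generated
export KUBECONFIG=$PWD/clusters/kubernetes-production/kubeconfig
kubectl get nodes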

What to do from here

Once you have your cluster up and running, it's time to get your code running there. To do that you must:

  • Dockerize your code
  • Push those containers to a container registry
  • Write your Kubernetes configurations
  • Deploy it all (see the sketch after this list)
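
To give a flavor of the last two steps, here's a minimal, hypothetical config for a single service -- the name, image, and port are placeholders, and the API version is the one current for clusters of this vintage (newer clusters use apps/v1):

# api-deployment.yaml -- hypothetical "api" service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: your-registry/api:latest   # the image pushed in step 2
        ports:
        - containerPort: 3000

Deploying it is one more kubectl command:

kubectl --kubeconfig=clusters/kubernetes-production/kubeconfig create -f api-deployment.yaml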

I'll be writing further posts on how to connect the new cluster to RDS as well as how to use AWS's Container Registry.