Simon Online

Checking in packages

If there is one thing that we developers are good at, it is holy wars. Vi vs. Emacs, tabs vs. spaces, Python vs. R - the list goes on. I’m usually smart enough not to get involved in such low-brow exchanges… haha, who am I kidding? (vi, spaces and R, BTW.) Recently I’ve been tilting at the windmill that is checking in package files. I don’t mean the manifest files that record which package versions to install but the actual library files.

Read More


DevOps and Microservices - Symbiotes

Two of the major ideas du jour in development circles these past few years have been DevOps and Microservices. That they rose to the forefront at the same time was not a coincidence. They are inextricably linked ideas.

Read More


Terraform for a statically hosted AWS site

Just the other day somebody was mentioning to me that they were having trouble setting up a statically hosted site on AWS. That was the kick in the nose I needed to get this article written as it’s been on my back-burner for a while. Terraform makes the whole process easy.

Read More


Downloading a file from S3 using Node/JavaScript

There seem to be very few examples of how to download a file from S3 using Node and JavaScript. Here’s how I ended up doing it using the aws-sdk:

const AWS = require('aws-sdk');
const fs = require('fs');
const s3 = new AWS.S3();

const params = {
    Bucket: 'some bucket',
    Key: 'somefilename.txt'
};

const s3Data = await s3.getObject(params).promise();
console.log('Content length is:', s3Data.ContentLength);
console.log('Content type is:', s3Data.ContentType);
console.log('Writing to file');
const file = fs.createWriteStream('c:/afile.txt');
file.write(s3Data.Body);
file.end();

Error creating an SQS queue in serverless

I ran into an error today creating an SQS queue in serverless which looked a lot like:

CloudFormation - CREATE_FAILED - AWS::SQS::Queue - SendEmailQueue
An error occurred: SendEmailQueue - API: sqs:CreateQueue Access to the resource is denied..

You would think this error is related to something in IAM, and indeed it may well be. In my case, however, it wasn’t: I was actually giving CloudFormation a malformed name for the queue. I would like to think that the developers of SQS would use HTTP code 400 (bad request) to report this error, but instead they use 403. So keep in mind that even though it looks like a permissions problem it might be a syntax error.
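For illustration, here is a well-formed queue definition in serverless.yml (the resource and queue names are hypothetical). SQS queue names may contain only alphanumeric characters, hyphens and underscores, up to 80 characters:

```yaml
resources:
  Resources:
    SendEmailQueue:
      Type: AWS::SQS::Queue
      Properties:
        # Valid: alphanumerics, hyphens and underscores only (max 80 chars).
        # A name containing dots or spaces fails with the misleading 403 above.
        QueueName: send-email-queue
```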


Dealing with implicit any in node_modules

I like to set noImplicitAny in my TypeScript projects in the hope it will catch just a few more bugs before they hit production. The problem we ran into recently was that not every library author agrees with me. In this case the aws-amplify project gave us this error.

(19,44): Parameter 'err' implicitly has an 'any' type.

Well that sucks; they should use noImplicitAny! From our perspective we can fix this by not type-checking library definition files at all. This can be done by adding

"skipLibCheck": true

to the compilerOptions in our tsconfig.json so that errors in the definition files are ignored.
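In context, the relevant portion of tsconfig.json might look like this (the other options shown are illustrative):

```json
{
  "compilerOptions": {
    "noImplicitAny": true,
    "skipLibCheck": true
  }
}
```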


Weird JavaScript - Destructuring

I’ve been at this programming game for a long time and I’ve written two books on JavaScript. Even so, today I ran into some code that had me scratching my head. It looked like

function AppliedRoute ({ component: C, props: cProps, }) {

I was converting some JavaScript to TypeScript and this line threw a linting error because of implicit any. That means that the type being passed in has no associated type information and has been assumed to be of type any. This is something we’d like to avoid. The problem was I had no idea what this thing was. It looked like an object but it was being built in the parameters?
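What’s happening is parameter destructuring with renaming: the caller passes a single object, and its fields are bound to new local names. A typed sketch (the interface and field types here are hypothetical, just to show the shape):

```typescript
// The incoming object's `component` field is bound to the local name C,
// and its `props` field to the local name cProps.
interface AppliedRouteProps {
  component: string;        // hypothetical types for illustration
  props: { path: string };
}

function AppliedRoute({ component: C, props: cProps }: AppliedRouteProps): string {
  return `${C} at ${cProps.path}`;
}

console.log(AppliedRoute({ component: "Home", props: { path: "/" } }));
```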

Read More


Application Insights Alerts

Application Insights is another entry in the vast array of log aggregators that have been springing up in the last few years. I think log aggregators are very important for any deployed production system. They give you insight into what is happening on the site and should be your first stop whenever something has gone wrong. Being able to search logs and correlate multiple log streams give you just that much more power. One feature I don’t see people using as much as they should is basing alerting off of log information. Let’s mash on that.

Read More


Application Insights Cloud Role Name

Logging is super important in any microservices environment or really any production environment. Being able to trace where your log messages are coming from is very helpful. Fortunately Application Insights has a field defined for just that.

Read More

2017-06-07

Running Kubernetes on Azure Container Services

This blog post will walk through how to set up a small Kubernetes cluster on Azure Container Services, manage it with Kubernetes and do a rolling deployment. You may think that sounds like kind of a lot. You’re not wrong. Let’s dig right in.

Note: If you’re a visual or auditory learner then check out the Channel 9 video version of this blog post.

We’re going to avoid using the point and click portal for this entire workflow, instead we’ll lean on the really fantastic Azure command line. This tool can be installed locally or you can use the version built into the portal.

Azure CLI in Portal

Using the command line is great for this sort of thing because there are quite a few moving parts and we can use variables to keep track of them. Let’s start with all the variables we’ll need; we’ll divide them up into two sets.


The first set of variables here are needed to stand up the resource group and Azure Container Service. The resource group is called kubernetes which is great for my experiments but not so good for your production system. You’ll likely want a better name, or, if you’re still governed by legacy IT practices you’ll want a name like IL004AB1 which encodes some obscure bits of data. Next up is the region in which everything should be created. I chose Australia South-East because it was the first region to have a bug fix I needed rolled out to it. Normally I’d save myself precious hundreds of milliseconds by using a closer region. Finally the DNS_PREFIX and CLUSTER_NAME are used to name items in the ACS deployment.
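As a sketch, these variables might be defined like so (the prefix and cluster names here are placeholders; substitute your own):

```shell
# Resource group and cluster variables (example values)
RESOURCE_GROUP=kubernetes
REGION=australiasoutheast
DNS_PREFIX=dockerprefix
CLUSTER_NAME=dockercluster
```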

The next variables are related to the Azure Container Registry, frequently called ACR.



Here we define the name of the registry, the URL, and some login credentials for it.
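A sketch of these definitions, with example values (ACR login servers take the form <name>.azurecr.io; the registry name, email and password here are placeholders):

```shell
# Azure Container Registry variables (example values)
REGISTRY=dockerregistry
DOCKER_REGISTRY_SERVER=$REGISTRY.azurecr.io
DOCKER_EMAIL=you@example.com
DOCKER_PASSWORD=SomeStrongPassword1
```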

With the variables all defined we can move on to actually doing things. First off let’s create a resource group to hold the twenty or so items which are generated by the default ACS ARM template.

az group create --name $RESOURCE_GROUP --location $REGION

This command takes only a few seconds. Next up we need to create the cluster. To create a Linux based cluster we’d run

az acs create --orchestrator-type=kubernetes --resource-group $RESOURCE_GROUP --name=$CLUSTER_NAME --dns-prefix=$DNS_PREFIX --generate-ssh-keys

Whereas a Windows cluster would vary only slightly and look like:

az acs create --orchestrator-type=kubernetes --resource-group $RESOURCE_GROUP --name=$CLUSTER_NAME --dns-prefix=$DNS_PREFIX --generate-ssh-keys --windows --admin-password $DOCKER_PASSWORD

For the purposes of this article we’ll focus on a Windows cluster. You can mix the two in a cluster but that’s a bit of an advanced topic. Running this command takes quite some time, typically on the order of 15-20 minutes. However, the command is doing quite a lot: provisioning servers, IP addresses and storage, and installing Kubernetes.

With the cluster up and running we can move on to building the registry (you could actually do them both at the same time; there is no dependency between them).

#create a registry
az acr create --name $REGISTRY --resource-group $RESOURCE_GROUP --location $REGION --sku Basic
#assign a service principal
az ad sp create-for-rbac --scopes /subscriptions/5c642474-9eb9-43d8-8bfa-89df25418f39/resourcegroups/$RESOURCE_GROUP/providers/Microsoft.ContainerRegistry/registries/$REGISTRY --role Owner --password $DOCKER_PASSWORD
az acr update -n $REGISTRY --admin-enabled true

The first line creates the registry and the second sets up some credentials for it. Finally we enable admin logins.

Of course we’d really like our Kubernetes cluster to be able to pull images from the registry so we need to give Kubernetes an idea of how to do that.

az acr credential show -n $REGISTRY

This command will dump out the credentials for the admin user. Notice that there are two passwords, either of them should work.

kubectl create secret docker-registry $REGISTRY --docker-server=https://$DOCKER_REGISTRY_SERVER --docker-username=$REGISTRY --docker-password="u+=+p==/x+E7/b=PG/D=RIVBMo=hQ/AJ" --docker-email=$DOCKER_EMAIL

The password is the one taken from the previous step; everything else comes from our variables at the start. This gives Kubernetes the credentials but we still need to instruct it to make use of them as the default. This can be done by editing one of the configuration YAML files in Kubernetes.

kubectl get serviceaccounts default -o yaml > ./sa.yaml

This retrieves the YAML for the default service account. In there two changes are required: first, remove the resource version by deleting resourceVersion: "243024". Next, specify the credentials by adding an imagePullSecrets section:

imagePullSecrets:
- name: prdc2017registry

This can now be sent back to Kubernetes

kubectl replace serviceaccounts default -f ./sa.yaml

This interaction can also be done in the Kubernetes UI which can be accessed by running

kubectl proxy

We’ve got everything set up on the cluster now and can start using it in earnest.

Deploying to the Cluster

First step is to build a container to use. I’m pretty big into ASP.NET Core so my first stop was to create a new project in Visual Studio and then drop to the command line for packaging. There is probably some way to push containers from Visual Studio using a right click but I’d rather learn it the command line way.

dotnet restore
dotnet publish -c Release -o out
docker build -t dockerproject .

If all goes well these commands, in conjunction with a simple Dockerfile, should build a functional container. I used this Dockerfile:

FROM microsoft/dotnet:1.1.2-runtime-nanoserver
WORKDIR /dockerProject
COPY out .
ENTRYPOINT ["dotnet", "dockerProject.dll"]

This container can now make its way to our registry like so

docker login $DOCKER_REGISTRY_SERVER -u $REGISTRY -p "u+=+p==/x+E7/b=PG/D=RIVBMo=hQ/AJ"
docker tag dockerproject $DOCKER_REGISTRY_SERVER/dockerproject
docker push $DOCKER_REGISTRY_SERVER/dockerproject

This should upload all the layers to the registry. I don’t know about you but I’m getting excited to see this thing in action.

kubectl run dockerproject --image=$DOCKER_REGISTRY_SERVER/dockerproject

A bit anti-climactically this is all that is needed to trigger Kubernetes to run the container. Logging into the UI should show the container deployed to a single pod. If we’d like to scale it all that is needed is to increase the replicas in the UI or run

kubectl scale --replicas=3 rc/dockerproject

This will bring up two additional replicas so the total is three. Our final step is to expose the service port externally so that we can hit it from a web browser. Exposing a port in Kubernetes works differently depending on what service is being used to host your cluster. On Azure it makes use of the Azure load balancer.

kubectl expose deployments dockerproject --port=5000 --type=LoadBalancer

This command does take about two minutes to run and you can check on the progress by running

kubectl get svc

With that we’ve created a cluster and deployed a container to it.

Bonus: Rolling Deployment

Not much point to having a cluster if you can’t do a zero downtime rolling deployment, right? Let’s do it!

You’ll need to push a new version of your container up to the registry. Let’s call it v2.

docker tag dockerproject $DOCKER_REGISTRY_SERVER/dockerproject:v2
docker push $DOCKER_REGISTRY_SERVER/dockerproject:v2

Now we can ask Kubernetes to please deploy it

kubectl set image deployments/dockerproject dockerproject=$DOCKER_REGISTRY_SERVER/dockerproject:v2

That’s it! Now you can just watch in the UI as new pods are stood up, traffic is rerouted and the old pods are decommissioned.


It is a credit to the work of many thousands of people that it is so simple to set up an entire cluster, push an image to it, and even do zero downtime deployments. A cluster like this is a bit expensive to run so you have to be serious about getting good use out of it. Deploy early and deploy often. I’m stoked about containers and orchestration - I hope you are too!

Oh, and don’t forget to tear down your cluster when you’re done playing.

az group delete --name $RESOURCE_GROUP