Integration test with Pulumi and Azure Kubernetes Service
Wednesday, January 13, 2021

Let's see how to use Infrastructure as Code for provisioning and testing in an integration environment.

We are carrying out update activities on some products for one of our clients, gradually migrating them to microservice architectures. One of these products followed the policy of “do not touch what works” for a long time and, as a result, became unmanageable over time. Some classes representing business logic have grown beyond measure, becoming very complex to modify, with insufficient test coverage and business logic shared between the various domains that coexist in the same codebase.

All these factors made the system difficult to maintain, so it was decided to gradually evolve the code into separate services. Unfortunately, business logic has remained shared between domains and, at the moment, this sharing is the most pressing problem.

To properly divide the logic between the various (micro)services and continue to deliver value without introducing regressions or unexpected malfunctions, we need to make sure that our modifications do not change the expected behavior of the software. As always, the best weapon is a codebase well covered by tests.

Unit tests are very useful for this purpose, but they are not enough to make us confident about the changes we make to the logic. We also need to be able to test the interactions between two or more of the resulting (micro)services, running them in a temporary environment as close as possible to the production one, called an Ephemeral Environment. In other words, we need an integration test suite.

The concept of Infrastructure as Code (IaC) is very useful for this purpose: it consists of representing the infrastructure (network, virtual machines, load balancers, and connections) descriptively, through code. Following the principle that the same code always generates the same artifact, the same IaC model generates the same environment every time it is applied. By describing our target environment in this way, we can share the same configuration between development and operations teams.

We used an IaC framework to recreate an ephemeral environment in which our services could run, simulating the production environment. This also allowed us to isolate all external dependencies outside the scope of our logic. Having recreated the environment, we built an integration test suite that could validate the logic and simplify the addition of new features to the product.

Among the many IaC frameworks on the market, we chose Pulumi, which provides a CLI, a runtime, libraries, and a hosting service that work together. It allows you to deliver, provision, update, and manage infrastructure in the cloud, supporting multiple languages, including TypeScript, JavaScript, Python, Go, and .NET, together with their native tools, libraries, and package managers. Pulumi works on almost all cloud providers, or alternatively on a Kubernetes (k8s) instance; to test locally, we can therefore use minikube or kind.

A Pulumi program has the following components:

  • Program: a set of files written in one of the languages of your choice;
  • Project: a directory containing the program with the associated metadata;
  • Stack: represents an instance of the project. Often each stack corresponds to a different cloud environment (see the sketch below).
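
As a minimal sketch (the stack names and the replica tweak are hypothetical), a Pulumi program can read the name of the stack it is running in and adapt its resources accordingly:

```
import * as pulumi from "@pulumi/pulumi";

// The current stack name selects which Pulumi.<stack>.yaml configuration is used.
const stack = pulumi.getStack();

// Hypothetical per-stack tweak: fewer frontend replicas outside production.
const frontendReplicas = stack === "production" ? 3 : 1;
```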

Let’s see how to deploy with Pulumi on Azure Kubernetes Service (AKS) and launch a test suite in the created environment. You can find the full code here. I slightly modified the classic guestbook example for Kubernetes, starting from what you see here.

We create the Pulumi project with the command:

`pulumi new kubernetes-typescript`

In this way, we create a template for a TypeScript project that can be deployed on a Kubernetes cluster. We now install the dependencies with the package manager of the selected language, in our case npm, and initialize a stack.

```
npm install

pulumi stack init azure
```

In the generated template, we find the Pulumi.yaml file, in which the general configuration is set. You can also have stack-specific configurations; to set a configuration value, just run:

```
pulumi config set password <password> --secret
```
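
On the program side, that value can then be read back through the Pulumi configuration API; a minimal sketch:

```
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();

// requireSecret keeps the value encrypted in the Pulumi state and masked in CLI output.
const password = config.requireSecret("password");
```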

Before seeing how to configure the deployment, let’s take a look at the application. It is a simple guestbook that uses a Redis cluster to store the messages left by guests.

To develop the guestbook, we could use a local Redis service, perhaps run with Docker, but to completely isolate the environment, I preferred to use a mock of Redis:

```
var Redis = require('ioredis-mock')

// "message" is an initial guestbook entry defined elsewhere in the application
var redis = new Redis({
  data: {
	'messages': [message]
  }
})
```
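
For context, here is a minimal sketch of the two endpoints that the integration tests further below will exercise, assuming an Express server; the handlers are illustrative and not the actual guestbook implementation:

```
import express from "express";
import Redis from "ioredis-mock"; // the same mock shown above; a real ioredis client would be used in production

const app = express();
app.use(express.json());

// Seed the mock with an illustrative initial entry.
const redis = new Redis({ data: { messages: ["welcome"] } });

// Return all stored messages as { "messages": [...] }.
app.get("/messages", async (_req, res) => {
  const messages = await redis.lrange("messages", 0, -1);
  res.json({ messages });
});

// Append any posted messages to the list.
app.post("/messages", async (req, res) => {
  const incoming: string[] = req.body.messages || [];
  if (incoming.length > 0) {
    await redis.rpush("messages", ...incoming);
  }
  res.sendStatus(204);
});

app.listen(3000);
```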

Once the guestbook has been developed, we create a Docker image and push it to an Azure Container Registry (ACR) connected to the same Resource Group that hosts the AKS instance.

Let’s take a look at the Dockerfile:

```
FROM node:slim

WORKDIR /app

# Copy package.json and package-lock.json before the other files for better build caching
COPY ["./package.json", "./package-lock.json", "/app/"]
RUN npm install

COPY [ ".", "/app/" ]
EXPOSE 3000
CMD ["npm", "start"]
```

Let’s build the image and push it to ACR.

```
docker build . --tag=guestbook:v1
docker tag guestbook:v1 <my-acr-name>.azurecr.io/guestbook
docker push <my-acr-name>.azurecr.io/guestbook
```

Now let’s see how Pulumi allows us to describe the architecture programmatically. In the project, inside the cloud folder, we find two files: index.ts and k8sjs.ts.

index.ts

```
import * as pulumi from "@pulumi/pulumi";
import * as k8sjs from "./k8sjs";

const config = new pulumi.Config();

const redisLeader = new k8sjs.ServiceDeployment("redis-leader", {
	image: "redis",
	ports: [6379],
});

const redisReplica = new k8sjs.ServiceDeployment("redis-replica", {
	image: "pulumi/guestbook-redis-replica",
	ports: [6379],
});

const frontend = new k8sjs.ServiceDeployment("frontend", {
	replicas: 3,
	image: "<my-acr-name>.azurecr.io/guestbook:latest",
	ports: [3000],
	allocateIpAddress: true,
});

export let frontendIp = frontend.ipAddress;
```

Notice that index.ts describes the high-level infrastructure by taking advantage of the ServiceDeployment class created in k8sjs.ts. Two Redis instances are defined in the code, one leader and one replica. In addition, a deployment of our application is specified with the replica count set to 3, so that there is redundancy in case of failures.

Let’s see the ServiceDeployment class:

```
import * as k8s from "@pulumi/kubernetes";
import * as k8stypes from "@pulumi/kubernetes/types/input";
import * as pulumi from "@pulumi/pulumi";

/**
 * ServiceDeployment is an example abstraction that uses a class to fold together the common pattern of a Kubernetes Deployment and its associated Service object.
 */
export class ServiceDeployment extends pulumi.ComponentResource {
	public readonly deployment: k8s.apps.v1.Deployment;
	public readonly service: k8s.core.v1.Service;
	public readonly ipAddress?: pulumi.Output<string>;

	constructor(name: string, args: ServiceDeploymentArgs, opts?: pulumi.ComponentResourceOptions) {
    	super("k8sjs:service:ServiceDeployment", name, {}, opts);

    	const labels = { app: name };
    	const container: k8stypes.core.v1.Container = {
        	name,
        	image: args.image,
        	resources: args.resources || { requests: { cpu: "100m", memory: "100Mi" } },
        	env: [{ name: "GET_HOSTS_FROM", value: "dns" }],
        	ports: args.ports && args.ports.map(p => ({ containerPort: p })),
    	};
    	this.deployment = new k8s.apps.v1.Deployment(name, {
        	spec: {
            	selector: { matchLabels: labels },
            	replicas: args.replicas || 1,
            	template: {
                	metadata: { labels: labels },
                	spec: { containers: [ container ] },
            	},
        	},
    	}, { parent: this });

    	this.service = new k8s.core.v1.Service(name, {
        	metadata: {
            	name: name,
            	labels: this.deployment.metadata.labels,
        	},
        	spec: {
            	ports: args.ports && args.ports.map(p => ({ port: p, targetPort: p })),
            	selector: this.deployment.spec.template.metadata.labels,
            	type: args.allocateIpAddress ? "LoadBalancer" : undefined,
        	},
    	}, { parent: this });

    	if (args.allocateIpAddress) {
        	this.ipAddress = this.service.status.loadBalancer.ingress[0].ip;
    	}
	}
}

export interface ServiceDeploymentArgs {
	image: string;
	resources?: k8stypes.core.v1.ResourceRequirements;
	replicas?: number;
	ports?: number[];
	allocateIpAddress?: boolean;
}
```

In this code, a class uses the Pulumi libraries to combine two k8s concepts: the Deployment and the Service.

In the Deployment, the number of replicas is used if passed; otherwise it defaults to 1. If the allocateIpAddress flag is set, the Service is configured as a “LoadBalancer” and the assigned IP address is exposed as a field of the class. Otherwise, with the type left undefined, the Service only exposes an IP address within the k8s cluster.
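
For example, in index.ts a hypothetical cluster-internal service would simply omit the flag:

```
// Hypothetical backend reachable only from inside the cluster:
// allocateIpAddress is omitted, so the Service type stays undefined (ClusterIP)
// and no external IP address is exported.
const backend = new k8sjs.ServiceDeployment("backend", {
	image: "<my-acr-name>.azurecr.io/backend:latest",
	ports: [8080],
});
```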

With Pulumi, you can therefore develop the application and define the infrastructure using the same language, in this case JavaScript/TypeScript.

Let’s now deploy our application on Azure, an operation that requires a single terminal command: `pulumi up`. You can also preview the changes before deploying with the `pulumi preview` command, which shows what will be modified without touching the current deployment.

Pulumi also offers integration testing support in Go: let’s see an example in which we create an Ephemeral Environment and run tests against it.

```
package examples

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "io/ioutil"
    "net/http"
    "os"
    "path"
    "testing"
    "time"

    "github.com/pulumi/pulumi/pkg/v2/testing/integration"
    "github.com/stretchr/testify/assert"
)

func TestGuestbook(t *testing.T) {
    cwd, err := os.Getwd()
    if err != nil {
   	 t.FailNow()
    }

    test := integration.ProgramTestOptions{
   	 Dir:     	path.Join(cwd, "cloud"),
   	 Quick:   	true,
   	 SkipRefresh: true,
   	 ExtraRuntimeValidation: func(t *testing.T, stack integration.RuntimeValidationStackInfo) {
   		 var frontend = stack.Outputs["frontendIp"].(string)
   		 checkHTTPResult(t, frontend)
   		 checkMessageEndpoint(t, frontend)
   	 },
    }
    integration.ProgramTest(t, &test)
}

// doGet and doPost are small HTTP helper functions defined elsewhere in this test package (omitted here for brevity).
func checkHTTPResult(t *testing.T, output interface{}) bool {
    hostname := "http://" + output.(string) + ":3000"
    body := doGet(t, hostname, 5*time.Minute)
    if !assert.Contains(t, body, "<html>") {
   	 return false
    }
    return true
}

// The field is exported and tagged so that json.Marshal actually serializes it.
type dataMessage struct {
    Messages []string `json:"messages"`
}

func checkMessageEndpoint(t *testing.T, output interface{}) bool {
    hostname := "http://" + output.(string) + ":3000/messages"

    message := dataMessage{
        Messages: []string{"a message"},
    }

    request, err := json.Marshal(message)
    if !assert.Nil(t, err) {
   	 return false
    }

    body := doPost(t, hostname, bytes.NewBuffer(request), 5*time.Minute)

    body = doGet(t, hostname, 5*time.Minute)
    if !assert.JSONEq(t, "{\"messages\": []}", body) {
   	 return false
    }
    return true
}

```

By simply running go test from the command line, you can bring up the properly configured environment, run the tests defined in the ExtraRuntimeValidation option, and finally tear the environment down.

We have seen how, using Pulumi, you can stand up an infrastructure simply by writing JavaScript/TypeScript code, and how to perform integration tests using the tools that the same framework makes available.

I hope I’ve intrigued you.

See you in the next article!

Antonio Tammaro