Let’s see together why, as developers, we should use containers
Who said Docker is only useful in production?
Wednesday, February 27, 2019

Over the last year, I got interested in Docker and the whole ecosystem around it, and I finally arrived at Kubernetes to orchestrate containers. Against all odds, I got into the topic and started, as I usually do, sharing my experiences at the events where I hold sessions.

I’d like to clarify the “against all odds”: I believed that Docker & company were tools that only supported the release stage of an application. So, to spare those who, like me, find preparing the production environment boring from making the same mistake, I want to share a happy discovery: containers are a wonderful asset for the development stage too, and they considerably simplify the stages that follow.

Let me first explain why, as a developer, I should use containers:

  1. To speed up the onboarding of a new team member;
  2. To eliminate conflicts between applications;
  3. To allow simpler and faster releases;
  4. Well… to make “microservices” management sustainable.

To understand where this list comes from, let’s quickly look at what Docker is. We are talking about an open-source platform for managing the life cycle of a thing called a “container”, whose aim is to simplify the creation, release and execution of applications.

We start from a read-only template called an “image”, which is essentially a layered file system used to share common files. If you are a developer, think of a class definition. A class defines a “template” from which objects are created. It’s the same here: I start from a Docker image and, by “instantiating” it, I create a container. To continue the analogy, a container is an instance of an image, with its own internal state and its own life cycle.
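The class/object analogy maps directly onto Docker commands. A minimal sketch (the container names here are just illustrative):

```shell
# Pull an image once: this is the read-only "class".
docker pull node:10-alpine

# Instantiate it twice: each container is an "object" with its own
# state and life cycle (tail -f /dev/null just keeps them alive).
docker run -d --name instance-one node:10-alpine tail -f /dev/null
docker run -d --name instance-two node:10-alpine tail -f /dev/null

# Both containers descend from the same image:
docker ps --filter ancestor=node:10-alpine
```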

To make sure we understand each other: you are surely familiar with the concept of a “virtual machine”, where, on top of an operating system installed on a physical machine, you can run another (or the same) operating system on a virtualized abstraction of the same hardware. And you probably also know why virtual machines are so useful: they are isolated environments that can run on the same hardware, easy to back up and replace, on which you can install anything you want.

Their usefulness is also their main limit: you need to install an operating system inside each of them, they use up disk space and memory, and their startup time is long. In a production environment they are a very convenient way to save money and keep control over every single application. In a development environment, though, working with virtual machines requires a lot of space and memory: even copying a VM can take a long time. A container solves exactly this problem: it gives you a fast, isolated environment that consumes less space and memory because it shares the operating system of its host.
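If you want to get a feel for this difference yourself, a quick experiment (assuming Docker is installed) is to time a cold start and look at the image size:

```shell
# No guest operating system has to boot, so an Alpine-based
# container typically starts in well under a second.
time docker run --rm alpine:3 echo "container is up"

# The whole image weighs a few megabytes, versus the gigabytes
# of a typical virtual machine disk.
docker images alpine:3
```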

Containers grew up in the Linux world and, not long ago, they required a Linux virtual machine to be used anywhere else. Since they became the trend of the moment (because they solve a problem, not out of fashion), Microsoft started providing native tools to run containers as well.

To speed up the onboarding of a new team member

Suppose you have an application that needs one or more databases, a cache, a backend and a frontend to work, that is, the bare minimum. A new person joins the team and, whatever their skills, they have three options on their first day of work:

  1. Install everything needed on their PC and configure it;
  2. Copy and configure a VM;
  3. Download the source code, run a command like “docker-compose up” and wait a while to see the application running (only the first time, while the images are downloaded).

You will surely have guessed that the third option is the scenario Docker makes possible.
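In practice, the newcomer’s first day with the third option boils down to a couple of commands (the repository URL below is hypothetical):

```shell
# Get the sources...
git clone https://example.com/team/app.git
cd app

# ...and bring up databases, cache, backend and frontend in one shot.
# Only the first run is slow, while the images are downloaded.
docker-compose up
```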

To eliminate conflicts between applications

If my code and the libraries I use live in an isolated environment that shares only the operating system, I have solved the problem of different applications (or different parts of the same application) depending on different versions of the same libraries. If you didn’t say “wow!”, you have probably never worked with JavaScript and NPM, but even with .NET the situation is not much better.
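To see the isolation at work, here is a sketch of two applications pinned to different Node.js versions running on the same machine without stepping on each other (the image tags are just examples):

```shell
# Application A is stuck on an old runtime...
docker run --rm node:8-alpine node --version

# ...while application B uses a newer one. No global installs,
# no version conflicts: each container carries its own runtime.
docker run --rm node:10-alpine node --version
```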

To allow simpler and faster releases

Recreating locally a production environment where a problem shows up is much simpler with containers, and they greatly simplify every release of the application we will make.

Well… to make “microservices” management sustainable

Assuming you have the right definition of microservices, and that you chose an architecture that solves problems rather than creating them, the only sustainable way to manage them in all environments is with containers.

Let’s look at an example

Let’s build a DEVELOPMENT environment together with Docker, focusing on the frontend.

Create an Angular application with the CLI (ng new frontend) in a folder of your choice. In the project root (that is, the frontend folder), create a new file named Dockerfile. A Dockerfile lets you describe the operations needed to create a Docker image. You start from a base image that contains the tools you need to work. In our case we need Node.js, so the first line of the script is:

FROM node:10-alpine

The FROM instruction indicates the image to start from. If it isn’t present locally, the image will be downloaded from a public registry called Docker Hub. Starting from this image, we create a folder called “app”. To run a command inside a container during the creation of an image, you use the RUN instruction:

RUN mkdir app

Since all the operations we need to do must happen in this folder, we set “app” as the current folder, using the WORKDIR instruction:

WORKDIR app

At this point, we copy our code into the folder, using the COPY instruction, from the current local folder to the current folder in the container:

COPY . .

In this way, the whole Angular folder will be copied into the container, but there are some folders we don’t need, such as node_modules: we want the libraries we download to be compatible with the operating system the container runs on, not with our local one. If you are familiar with the Node/JavaScript/NPM world, you surely know that this is no small detail. We therefore create a file called .dockerignore, which lets us list the folders we don’t want copied into our image. In this file we insert only node_modules:

node_modules

Going back to our Dockerfile, we need to download the libraries the project depends on:

RUN npm install

We are almost ready: at this point, our image contains all the files we need. As a last instruction, we indicate the command to run when the image is used to create a container, using the CMD instruction:

CMD $(npm bin)/ng serve --host 0.0.0.0

The expression $(npm bin) returns the path of the folder that contains the binaries of our dependencies and, as you probably know, the Angular CLI is a development dependency defined in package.json. The --host 0.0.0.0 option prevents the application from answering only on localhost, because that localhost would be the container’s, not ours. By specifying it, we can route the requests coming from our host machine (on the right port) to the HTTP development server that the CLI runs inside the container.

To summarize, this is our development Dockerfile:

FROM node:10-alpine
RUN mkdir app
WORKDIR app
COPY . .
RUN npm install
CMD $(npm bin)/ng serve --host 0.0.0.0

Now we create our image from this script with the following command:

docker build -t frontend:dev-v1 .

The docker build command builds a Docker image; the -t option lets us tag the image with a name of our choice, in the form name:version (in this case, we are stating that it is a development frontend image with version v1, but you can use whatever nomenclature you prefer). As you can see, we didn’t indicate the name of the Dockerfile, because we used the default name; if you decide to use a custom name, you only need to indicate it with the -f option. Lastly, there is a dot indicating the build context, the folder the commands in the script refer to. With the dot, you are stating that the context is the folder from which you run docker build; you can indicate a different folder if you need to.
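For completeness, this is what the same build looks like when the Dockerfile has a custom name or the context is a different folder (the file and folder names here are hypothetical):

```shell
# Default: a file named Dockerfile, current folder as context
docker build -t frontend:dev-v1 .

# Custom file name via -f, and a subfolder as build context
docker build -t frontend:dev-v1 -f docker/dev.Dockerfile ./frontend
```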

The result of the execution is as follows:

As you can see, for every instruction a temporary container is created to run the command, until the final image is produced. With the image created, you can start your container with the following command:

docker run frontend:dev-v1

However, if you run the container now, it would not give you the interactivity you need during development, especially since we want local code changes to propagate into the container. Moreover, the Angular application would not be reachable, because it runs on port 4200, which is not exposed by the container.

Exposing the port is very simple: you only need to add the option -p 4200:4200 (the local 4200 mapped to the container’s 4200; obviously, the local one can be any port). Sharing the code on your file system with the container requires creating an object that Docker calls a volume, which you can do in the same command with the -v option:

docker run -it -p 4200:4200 -v $(pwd)/src:/app/src frontend:dev-v1

The result is the following:

The -it option gives you an interactive TTY terminal, very convenient for interacting with the process in the container, for example to stop it with CTRL+C.
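A few everyday commands are handy for managing the running container (replace <container-id> with the id or name that docker ps reports):

```shell
# List running containers and their port mappings
docker ps

# Follow the development server's output without attaching to it
docker logs -f <container-id>

# Stop and remove the container when you are done
docker stop <container-id>
docker rm <container-id>
```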

Thanks to this configuration, you have the same development experience you would have locally, but with all the advantages of containers! In the next articles, we will see how to add the backend and create a docker-compose script that will do the dirty work for us!

Happy coding!