Docker: let’s containerize! 🐳

Kishan Kesari Gupta
Published in TechVerito · 9 min read · Feb 10, 2023


Containerization technology has advanced tremendously in recent years. It refers to packaging up all the components an application needs, such as configuration files, runtime dependencies, and settings, and then running the application in an isolated, pre-configured environment. This is similar to running separate applications in completely distinct virtual machines, but containers are typically small and light, since the application runs on the host operating system and reuses features the host already provides.

Containers are created from images, and images are either built from Dockerfiles or downloaded from a container registry. Because an image is shareable, multiple containers can be created from the same image. A container, then, is nothing but a running instance of an image.

Images

A Docker image serves as the template for the application environment you create. It includes all the configuration files, binaries, libraries, and so on needed to execute the application. Images are read-only and immutable, so you cannot edit them once built; instead, they are transformed into running instances known as containers.

Docker images come in two varieties. The first is base images, which are pre-built and can be downloaded from registries. The second is customized images, which build application-specific environments on top of base images.

Registries are repositories, analogous to GitHub, that host pre-built Docker images such as MongoDB, Ubuntu, Node, Ruby, etc. The docker pull command retrieves images from these repositories.
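For example, pulling the official Node image from Docker Hub (the default registry) looks like this; the tag 18 is just an illustrative version:

$ docker pull node:18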

Let us walk through creating a Docker image and launching a container to run a web application. Below are the layers of instructions for the Dockerfile that will wrap your application in an image.

  • Dockerfile and its layers

Dockerfiles are used to create Docker images. They provide step-by-step instructions: one for fetching a base image, another for executing installation commands, and so on. Every instruction builds on the preceding layer’s data to construct the next image layer. A typical Dockerfile looks like this:

FROM node                 # start from the official Node base image
WORKDIR /app              # run all subsequent instructions inside /app
COPY package.json .       # copy the dependency manifest first so this layer caches
RUN npm install           # install dependencies; re-runs only when package.json changes
COPY . .                  # copy the rest of the application source
EXPOSE 80                 # document the port the application listens on
CMD ["npm", "start"]      # default command executed when a container starts

This Dockerfile shows how multiple layers of instructions run one after another when the docker build command executes. Copying package.json and running npm install before copying the rest of the source lets Docker cache the dependency layer across rebuilds.

  • Execute the command to build an image

The build command builds an image from a Dockerfile. We pass it the build context, the path to the folder containing the Dockerfile. We can also give the image a name to identify it and a tag to version it.

The general shape of the docker build command is shown below.
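Standing in for the original graphic, the general form is as follows; angle brackets mark placeholders:

$ docker build -t <image-name>:<tag> <path-to-build-context>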

Using this syntax, create an image with the name “mission-service” and version 1, which makes it easy to recognize the image by service name.

$ docker build -t mission-service:1 .

The command above builds the Docker image, giving it the name mission-service and the tag 1. To see the built images, the following command displays them in tabular format along with a description of each image: repository, tag, image ID, size, and so on.

$ docker images

REPOSITORY        TAG   IMAGE ID       CREATED       SIZE
mission-service   1     dea271a56af4   10 days ago   10MB

Now that the image build has completed, let us transform the image into a running instance, known as a container.

Containers

Now that we have a basic understanding of images, let’s move on to containers. A container is a runtime instance of an image. You can start one or more containers from a single image, much like launching a virtual machine (VM) from a VM template. The main distinction between them is that containers are quicker and lighter than VMs: containers share the kernel of the host they run on, as opposed to running a full-blown OS like a VM.

Just as shipping containers allow goods to be transported by any medium regardless of their contents, software containers serve as a standard unit of software that can hold arbitrary code and dependencies. Thanks to containerization, developers and IT professionals can deploy software across environments with little or no modification.

A container typically represents a running instance of one of your application services. Let’s see how to create a container from an image and run our application. The general shape of the docker run command is shown below.
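Again standing in for the original graphic, the general form looks like this; angle brackets mark placeholders, and the flags are the ones used in this post:

$ docker run --name <container-name> --rm -d -p <host-port>:<container-port> <image-name>:<tag>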

With this syntax, we create a container named “mission-container” that runs in detached mode and is removed when stopped (because the --rm flag is passed). The last parameter is the image name with the version (tag) from which to create the container.

Additionally, we can publish a port for this container, enabling it to listen for all incoming requests on that port and respond appropriately.

$ docker run --name mission-container --rm -d -p 3000:80 mission-service:1

The above command creates a Docker container named mission-container, runs it in detached mode with container port 80 published on host port 3000, and removes it when stopped. To list the running containers in tabular format, with details such as container ID, ports, and status, use the command below; append the -a flag to view all containers (running and stopped).

$ docker ps

CONTAINER ID   IMAGE               COMMAND                  CREATED         STATUS         PORTS                  NAMES
1124e2d8e54d   mission-service:1   "docker-entrypoint.s…"   8 seconds ago   Up 7 seconds   0.0.0.0:3000->80/tcp   mission-container
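Because the container was started with the --rm flag, stopping it also removes it; for example:

$ docker stop mission-container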

The running mission container will generate data that must be stored in a database such as MySQL, Postgres, or MongoDB, so we need an additional running database container. Once data is involved, working with containers becomes a little more challenging. Containers (database or service) are isolated, and their writable layer is ephemeral: they can read and write data, but every piece of data they produce is lost when the container goes down. To avoid that, we can use volumes to persist data. The volumes section below connects a Docker volume to the running MySQL container so that the mission data survives.

With volumes, a folder inside the container is mirrored to a folder on the local file system, although we do not need to know its exact location. Rather than feeding data from the host into that folder, we simply want it to preserve any data produced by the container, so that a new container can pick the data up after the previous one is deleted.

Volumes

Volumes are directories on your host machine’s hard disk that Docker is aware of; they are mounted into, made accessible to, or mapped into containers.

Volumes, which live in a part of the host filesystem managed by Docker, are the recommended way to persist data for Docker containers and services. Docker loads the read-only image layers, adds a writable layer on top, and mounts volumes into the container’s filesystem.

By using Docker volumes, data persists outside the container, which also enables sharing or backing up that data.

Docker lets us manage volumes through the docker volume set of commands. We may explicitly name a volume (a named volume) or let Docker generate a random name (an anonymous volume).
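For reference, a few of the docker volume subcommands look like this; the volume name mysql below is simply the name used later in this post:

$ docker volume create mysql
$ docker volume ls
$ docker volume inspect mysql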

Below is the docker run command for launching a container with a named volume linked to MySQL’s data directory, which lets the mission data survive even if the container goes down. If the volume does not exist yet, Docker creates it.

This creates the volume (if needed) and links it to the MySQL container; if no MySQL container exists yet, it starts one with the volume attached.

$ docker run --name mysql -d -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:8

This command creates a MySQL container and attaches the mysql volume to it, so all data stored in the MySQL database persists even if the container goes down. Note that the official mysql image refuses to start without a root password setting, hence the MYSQL_ROOT_PASSWORD variable above (the value secret is only an example).
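To convince yourself that the data really survives, you can remove the container and start a fresh one against the same volume; a quick sketch using the names from the command above:

$ docker stop mysql
$ docker rm mysql
$ docker run --name mysql -d -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:8

Any databases created before the restart will still be there, because they live in the mysql volume rather than in the container’s writable layer.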

With volumes in place, we must turn to container-to-container communication, since the mission container must connect to the MySQL container to save and retrieve mission data. For this communication to work, the mission container and the MySQL container must be on the same network. The section below shows how to run them on one.

Networks

Although containers are isolated by default, they can still be linked so that they communicate with one another and with the outside world by sending requests. Sending requests to the outside world is easy out of the box.

Container-to-container communication is where things get more complicated. The first option is to look up a container’s IP address and use it directly. The drawback is that the address can change whenever the container is recreated, so this involves a great deal of pointless effort and is not practical.

The most practical and recommended solution is therefore to set up a Docker network and attach all containers that should communicate with one another to it. Docker then resolves the underlying containers’ IP addresses automatically: you simply provide the container name as the address.

Unlike volumes, Docker does not construct these networks automatically. We need to create a network manually and then attach running containers to it.

Let’s set up a mission network so that the mission container and the MySQL container can communicate, allowing the mission container to save and retrieve mission data.

The command for creating a network is given below.

Let’s construct a network and check its details.

$ docker network create mission-network

To check the network details in tabular format, the command below helps.

$ docker network ls

NETWORK ID     NAME              DRIVER   SCOPE
765gfdcvh789   mission-network   bridge   local
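To see which containers are attached to the network, along with their internal IP addresses, docker network inspect helps:

$ docker network inspect mission-network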

It is now time to launch MySQL and the mission container on the same network to enable container-to-container communication. The commands for launching both containers on the same network are as follows.

$ docker run --name mysql -d -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret --network mission-network mysql:8
$ docker run --name mission-container -d --network mission-network -p 3000:80 mission-service:1

After running these two commands, the containers can communicate because they are on the same network; the mission container can reach the database simply at the hostname mysql.
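As a quick sanity check (a sketch; it assumes the image provides a shell with getent, which Debian-based images such as the official node image do), you can confirm that the container name resolves from inside the mission container:

$ docker exec mission-container getent hosts mysql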

The next step is to figure out how to manage multiple commands and do away with the lengthy ones.

We can now build images and run containers, but the commands to do so get long, especially once several environment variables, volumes, or networks are involved. In multi-container setups in particular, the docker run commands become lengthy, and juggling many commands across several services is tedious. Docker Compose addresses this: we write a configuration file (docker-compose.yml) and manage the containers through orchestration.

When we run docker-compose up, it picks up the configuration file (docker-compose.yml) we’ve written for our container launch and initializes all the containers according to their specifications. docker-compose down stops and removes them again. This makes docker-compose a handy tool for any project that involves several containers.
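As a rough preview of what the next post covers, a minimal docker-compose.yml for the two containers in this post might look something like the sketch below. The service names, volume, and password value are assumptions carried over from the earlier examples; Compose also places the services on a shared network automatically, so they can reach each other by name.

version: "3"
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret   # example value, as in the docker run command above
    volumes:
      - mysql:/var/lib/mysql
  mission-service:
    build: .
    ports:
      - "3000:80"
    depends_on:
      - mysql
volumes:
  mysql: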

How to write a configuration file that gathers all these Docker commands is explained in the next blog, “Docker: let’s manage containers via orchestration”, which will also discuss managing container lifecycles, particularly in large, dynamic contexts.

Conclusion

This blog post on containerization covered what Docker images are, how to create running instances (containers) from them, how to attach volumes to those containers so their data is preserved even when they are stopped, and how container-to-container communication is accomplished. You should now have a conceptual understanding of Docker’s fundamental commands. In the following blog post, “Docker: let’s manage containers via orchestration,” we will go deeper into the configuration file (docker-compose) that describes how to build images, create containers, and manage container lifecycles.

Want to get in touch?

Please reach out to me on LinkedIn.
