Docker: let’s begin from the source! 🐳

Kishan Kesari Gupta
Published in TechVerito · 7 min read · Feb 10, 2023

In this piece, we'll examine what makes Docker stand out among virtualization tools, how it affects global companies, and why moving your application to containers can be one of the best decisions you make.

Imagine a scenario where developers from different regions are working in isolation on a single project, each on a different operating system (Linux, Windows, macOS). Each developer installs libraries and files in a way specific to their operating system, and these differences frequently lead to conflicts and issues throughout the software development life cycle. Containerization solutions like Docker can solve this problem.

In particular, Docker is a containerization platform that makes it simple to build, share, deploy, and run apps using containers. It lets developers wrap an application together with all its essential libraries and other dependencies into a container. Alongside containers, Docker includes several other crucial components, such as Dockerfiles, Docker images, and Docker registries. Thanks to Docker, programmers can write code and build applications without worrying about differences between operating system environments.

About Docker

Docker is a platform with lightweight virtualized environments that enables speedier app development, testing, and deployment. Docker packs your apps into standardized units called containers, which include runtime libraries, system tools, and code required for the software to work. With Docker, you can quickly develop and deploy apps into any environment while being confident that your code will work.
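To make those "standardized units" concrete, here is a minimal sketch of how an app and its dependencies get described for packaging. The base image, file names, and commands below are illustrative assumptions (a small Python app with a requirements.txt), not taken from the original post:

```shell
# Write a minimal Dockerfile describing an app and its dependencies.
# The base image and file names here are illustrative assumptions.
cat > Dockerfile <<'EOF'
# Base image: OS libraries plus the Python runtime
FROM python:3.11-slim
WORKDIR /app
# Install the dependency list first so it caches as its own layer
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code and declare the startup command
COPY . .
CMD ["python", "app.py"]
EOF
```

The same Dockerfile produces the same image whether it is built on Linux, Windows, or macOS, which is what lets the packaged app run identically in any environment.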

It serves primarily for creating distributed applications that operate effectively in various environments. By keeping their programs independent of operating systems, developers can avoid worrying about compatibility problems. Applications can be developed, deployed, maintained, and used more easily if packed into isolated containers.

Because Docker uses virtualization to create containers for running apps, the concept may appear similar to that of virtual machines. Both virtual machines (VMs) and containers are isolated virtual environments used for software development. The most significant difference is that containers are far lighter than VMs: they outperform them in startup speed, resource efficiency, and footprint.

Let’s see how virtual machines and Docker containers have evolved in the below phase.

Evolution of Virtualization

In the bad old days, each server could only support a single application. Several apps couldn’t be run safely and securely on the same server in the open-systems world of Windows and Linux.

When a new application was needed, the company purchased a new server. Engineers had to make assumptions when deciding on the model and size of the server because no one knew the new application's performance requirements. As a result, IT did the only thing it could: it bought large, fast servers, and a great deal of the industry's capital sat wasted in over-provisioned hardware.

Then VMware brought the virtual machine (VM) to market, and almost immediately the world began to improve dramatically. Finally, we had the technology necessary to run numerous enterprise applications on a single server securely and safely.

VMs are fantastic, but they are not faultless. Each virtual machine (VM) has its unique operating system (OS). Every operating system uses a CPU, RAM, and other resources. Every OS requires monitoring and patching. All of this results in time and money being squandered. There are still more issues with the VM strategy. VMs take a long time to start up, and shifting workloads across cloud platforms and hypervisors is highly challenging.

To address the shortcomings of the VM strategy, Google began to use container technologies. Containers and virtual machines are similar in certain ways, but there is a significant distinction: containers don't need their own complete operating system. In reality, all containers on a single host share the host's OS. This frees up system resources such as CPU, RAM, and storage, and it eases the burden of OS patching and other maintenance tasks. The end result is savings in time, resources, and money.

Modern containers are the outcome of a tremendous amount of work done by a wide range of people in the Linux community over many years. The Docker community then effectively democratized containers and made them accessible to the general public, enabling their enormous rise in recent years.

Virtual Machine Vs Docker Containers

Virtual machines and Docker containers both offer isolated working environments and can be used to bundle and distribute software, but they differ in how they are used. Let's look at how they vary from one another.

  • Virtual Machine
Working of Virtual Machines

When you need to run several applications on servers or have several operating systems to handle, virtual machines (VMs) are a better option for running apps that demand all of the operating system’s capabilities and functionality. VMs will continue to suit your use case well if you currently have a monolithic application that you don’t need to convert into microservices or don’t plan to.

A virtual machine has some advantages, such as fully separate environments, environment-specific configurations, and the ability to share environment configurations reliably. But it also has drawbacks, including redundant duplication that wastes disk space, slower performance, lengthy boot times, and replication on another machine or server that is possible but often difficult.

  • Docker Containers
Working of Docker Containers

When boosting the number of apps or services operating on a small number of servers is your top goal and you want the most mobility, containers are a better option. Containers are the best option if you’re building a new project and want to employ a microservices architecture for scalability and portability. Developing cloud-native applications based on a microservices architecture is where containers really shine.

There are several benefits to using Docker containers, including their ease of distribution, little impact on the host operating system, high speed, and minimal disk space usage. Containers capture applications and their environments rather than a whole machine.

Architecture of Docker

Working of Docker Architecture

The Docker architecture consists of a container registry, the Docker daemon running on a host, and the Docker client. In this client-server architecture, the client communicates with the Docker daemon through a REST API, over a UNIX socket or a network interface. We can build a Docker image anywhere, but whoever builds it must push it to a registry so it can be used to launch a container on any system that has the Docker daemon running. The docker pull command downloads an image from a registry such as Docker Hub without building it again. Finally, to run the image, execute the docker run command, which constructs the container.
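The build → push → pull → run flow through this architecture can be sketched as a short command sequence. The image and container names below are illustrative assumptions, and running them requires a Docker installation plus a registry account:

```shell
# End-to-end flow through the Docker architecture (requires Docker;
# the image name "myuser/my-app" is an illustrative assumption).

# 1. Build an image from the Dockerfile in the current directory
docker build -t myuser/my-app:1.0 .

# 2. Push the image to a registry (Docker Hub by default)
docker push myuser/my-app:1.0

# 3. On any other machine running the Docker daemon, pull the image
docker pull myuser/my-app:1.0

# 4. Construct and start a container from the image
docker run --name my-app myuser/my-app:1.0
```

Each command is issued by the client and carried out by the daemon, which is why the same sequence works on any host where the daemon is running.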

In the next blog “Docker: let’s containerize!”, we’ll go into more depth about images and how to build them, as well as containers and how to run them.

Installation Process

Docker can be set up in several ways: on your machine, on-premises, or in the cloud. Scripted and manual installs are available for Mac, Linux, and Windows.

To get Docker for Mac, Windows, or Linux, click this link “https://docs.docker.com/get-docker/” and follow the on-screen instructions.

After the installation completes, you might need to start Docker Desktop manually; it takes a moment to get started. You can then execute various Docker commands from a terminal or shell prompt, as follows.

$ docker version

Client: Docker Engine - Community
Cloud integration: v1.0.29
Version: 20.10.21
API version: 1.41
Go version: go1.18.7
Git commit: dfrde5j
Built: Tue Oct 25 18:01:10 2022
OS/Arch: darwin/amd64
Context: default
Experimental: true
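As a further sanity check (assuming the daemon is running), the official hello-world image exercises the full pull-and-run path:

```shell
# Pull and run the official hello-world image; if the daemon is up,
# this prints a greeting confirming the client, daemon, and registry
# are all working together.
docker run hello-world
```

If this prints the "Hello from Docker!" message, your installation is ready for the next steps.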

Now that installation is done, we'll go into more depth about images and how to build them, as well as containers and how to run them, in the blog "Docker: let's containerize!".

Conclusion

This blog covered Docker from the source: how Docker works, where it comes from, when we should use it, and the benefits it provides. Having become conceptually familiar with Docker, in the next post, "Docker: Let's Containerize!", we will learn more in-depth about images, how to create containers, and how to operate them.

Want to get in touch?

Please reach out to me on LinkedIn.
