If you have been involved in software development for a while, you might have heard of Docker. In this article, let’s get an overview of it.
Docker is a software platform that makes it easier to build, run, manage, and distribute applications.
In the past, if you wanted to deploy multiple applications in separate environments so that they could not affect each other in any way, i.e. you wanted to isolate them, you would have used Virtual Machines (VMs). VMs provide hardware-level virtualization: a fixed share of resources (CPU, memory, disk) is assigned to each VM, and an application runs inside it.
But VMs are bulky. Each one runs a whole operating system, with your application on top of it. VMs consume a lot of system resources that could be better utilized by your applications.
Here is where Docker shines. It allows you to use the system resources optimally without compromising on the isolation of your applications.
So, Docker is a software platform. You first install it on your machine. After that, you write a Dockerfile for your application.
A Dockerfile is the series of commands that you would otherwise have run on a server to build and run your app: copying the source code, downloading dependencies, compiling the code, setting up environment variables, and finally running it.
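As a concrete sketch, a Dockerfile for a hypothetical Node.js app might look like this (the base image version, file names, and port are assumptions for illustration, not a universal recipe):

```dockerfile
# Start from an official Node.js base image (assumed version)
FROM node:20-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests and install dependencies first,
# so this layer is cached when only the source code changes
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the source code
COPY . .

# Document the port the app listens on (assumed)
EXPOSE 3000

# Set an environment variable and define the start command
ENV NODE_ENV=production
CMD ["node", "server.js"]
```

Each instruction maps to one of the steps above: `COPY` copies source code, `RUN` downloads dependencies, `ENV` sets environment variables, and `CMD` runs the app.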
You use this Dockerfile to build a Docker image. A Docker image is a single package containing your source code, its dependencies, and the environment and commands you specified to run your app.
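Building the image is a single command (this requires Docker to be installed; the image name `myapp` is a placeholder):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it "myapp" with version "1.0"
docker build -t myapp:1.0 .

# List local images to confirm it was created
docker images
```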
Now, to run your application, you create an instance of your Docker image. In OS terms, that instance is a process; in Docker terms, it's a container.
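For example, starting a container from the image built above might look like this (the names and ports are the same illustrative assumptions; requires Docker installed):

```shell
# Start a container from the image, mapping host port 8080
# to the app's port 3000, in detached (background) mode
docker run -d -p 8080:3000 --name myapp-instance myapp:1.0

# A container is just a process: list the running containers
docker ps

# Stop and remove the container when done
docker stop myapp-instance
docker rm myapp-instance
```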
So, how is the isolation of these processes, i.e. containers, achieved? To be isolated, each container needs its own file system, IP address, process IDs, and other things.
All of this is done using the namespaces feature provided by the (Linux) operating system. For example, the PID namespace isolates process IDs: each container runs in its own PID namespace, so processes in one container cannot see processes in another. Likewise, each container gets its own network namespace, which manages network interfaces, IP addresses, and routing. The amount of resources a container may use is limited with another operating system feature, control groups (cgroups), which put limits on the hardware resources (CPU, memory, I/O) accessible to a container.
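On a Linux machine you can see these namespaces directly, without Docker, by inspecting the kernel's `/proc` interface (Linux only):

```shell
# Every process belongs to a set of namespaces,
# visible under /proc/<pid>/ns. For the current shell:
ls -l /proc/self/ns/
# Typical entries: pid, net, mnt, uts, ipc, user, cgroup

# Each namespace has an ID; two processes that print the same
# ID here are in the same PID namespace
readlink /proc/self/ns/pid
```

Containers started by Docker simply get fresh entries here instead of sharing the host's.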
So, using namespaces and control groups, Docker achieves OS-level virtualization. Each container is made to believe that it has the whole OS to itself, while in reality multiple containers share the same OS kernel.
So now you know how to build and run a Docker container on your local machine. But how do you run it on a server? You push your Docker image to a Docker repository. Just like your source code sits in a git repository, your Docker image sits in a Docker repository. Docker provides Docker Hub, a freemium Docker registry. Docker registries allow you to create public and private repositories; Docker Hub offers unlimited public repositories and one private repository for free accounts.
You can push, i.e. upload, your Docker image to Docker Hub, and then from your server you can pull, i.e. download, that image and simply run it. It will run exactly as it did on your machine. There is nothing else to install; you already did everything when you specified the commands in the Dockerfile.
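The push-then-pull workflow might look like this (requires Docker installed and a Docker Hub account; the username `yourname` and image names are placeholders):

```shell
# Log in to Docker Hub with your account credentials
docker login

# Tag the local image with your Docker Hub repository name, then push it
docker tag myapp:1.0 yourname/myapp:1.0
docker push yourname/myapp:1.0

# On the server: pull the image and run it, exactly as on your machine
docker pull yourname/myapp:1.0
docker run -d -p 8080:3000 yourname/myapp:1.0
```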
Packaging your applications as Docker images makes building, running, and distributing them much easier. Plus, the Dockerfile acts as a single source of truth and tells you exactly what your application is built from.
So, that was an overview of Docker. We covered how Docker works and why to use it. I hope you found it useful.