Lately, the DevOps world has placed a lot of emphasis on the cloud. And unless you have been living under a rock all these days, you have surely heard of Docker and Docker containers.
What is Docker?
As per Docker’s own definition: Docker is “an open source project to pack, ship, and run any application as a lightweight container.”
And what does that mean?
In the real world (and in layman's terms), Docker "allows you to package your application along with all of its dependencies and configurations, ensuring that the application can run on any infrastructure with almost no configuration changes on the customer's premises".
For example, Jenkins delivers its application as a Docker container, and we all know that Jenkins depends on Java. So, in the traditional model, if you have to install Jenkins on your server, you need to install Java first, then install Jenkins, and then make configuration changes before you can finally run the application.
In the case of the Jenkins Docker container, the Jenkins application is packaged along with a compatible version of Java and any other dependencies. This allows customers to simply download the Jenkins image and run it as a container on their own machines. No Java installation and no compatibility issues.
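As a sketch of how this looks in practice, the two commands below pull and run Jenkins (the image name and `lts` tag shown are the ones commonly published on Docker Hub and may change over time):

```shell
# Pull the Jenkins long-term-support image; Java and all other
# dependencies are already baked into the image.
docker pull jenkins/jenkins:lts

# Run Jenkins in the background, mapping its web UI port to the host.
docker run -d -p 8080:8080 --name my-jenkins jenkins/jenkins:lts
```

After this, Jenkins is reachable at http://localhost:8080 without a single piece of Java being installed on the host itself.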
Docker Containers vs. Virtual Machines
The next obvious question would be: "It is also possible to pack an application along with its dependencies as OVA files, so how is Docker any better?"
To answer that question, let's compare the architecture of a VM with that of a Docker container.
Image Courtesy: www.docker.com
- Footprint: VMs are inherently heavyweight. They need to run a complete OS to be able to run a packaged application. This is needed because the system calls made by the apps go to the underlying Guest OS. The Guest OS sends the system calls to the Host OS via the hypervisor, which then relays the return value of the call back to the app. In the case of containers, the Docker Engine does not need a Guest OS: all system calls are intercepted by the Docker Engine and relayed directly to the Host OS. Hence, Docker containers are extremely lightweight.
- Resources: VMs must be allocated a fixed amount of resources, which cannot be shared between multiple VMs. If you allocate X amount of RAM to a particular VM, that X amount of RAM is dedicated to that VM alone. Containers, on the other hand, utilize resources as needed. If a container is running a very lightweight application, it will use just the right amount of RAM.
- Automation: It is possible to create Docker containers on the fly by writing a couple of lines of configuration. Hence, Docker can be easily integrated with your CI/CD or deployment tools (such as Jenkins). The same cannot be said about VMs.
- Instantiation: Instantiating a VM is a time-consuming process, sometimes taking tens of minutes, but Docker containers can be started within seconds.
- Collaboration: It is very easy to share your Docker images (and containers) with other users. Docker provides you with registries that can store and share your images, publicly or privately. VMs are not that easy to share.
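The instantiation difference is easy to observe in practice. Assuming Docker is installed, the commands below (using the small public `alpine` image) time how long a fresh container takes to start:

```shell
# Download a tiny base image once (the 'alpine' image is only a few MB):
docker pull alpine

# Start a brand-new container, run a command inside it, and remove it.
# On most machines this completes in well under a second.
time docker run --rm alpine echo "container is up"
```

Compare that with booting a full VM, where you wait for an entire guest operating system to start before your application can even launch.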
At its core, Docker runs as the Docker Engine. The Docker Engine is a client-server application with the following major components:
- A server, which is a type of long-running program called a daemon process.
- A REST API which specifies interfaces that programs can use to talk to the Daemon and instruct it what to do.
- A command line interface (CLI) client.
Image Courtesy: www.docker.com
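You can see these components interact directly. The Docker CLI is just one client of the Engine's REST API; you can also query the daemon yourself over its UNIX socket (the socket path below is the default on Linux):

```shell
# Talk to the daemon's REST API directly over the default UNIX socket:
curl --unix-socket /var/run/docker.sock http://localhost/version

# The CLI client retrieves the same information via the same API:
docker version
```

Both commands end up at the same daemon; the CLI simply wraps the REST calls in a friendlier interface.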
What is Docker's architecture?
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and the daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
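Because client and daemon only need the REST API between them, they can live on different machines. A minimal sketch, assuming a remote daemon has been configured to listen on TCP (the hostname is illustrative, and port 2375 is the conventional unencrypted port; production setups should use TLS on 2376):

```shell
# Point the local client at a remote Docker daemon:
export DOCKER_HOST=tcp://remote-docker-host:2375

# This now lists containers running on the remote host, not the local one:
docker ps
```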
- Docker Images: A Docker image is a read-only template with instructions for creating a Docker container. For example, an image might contain an Ubuntu operating system with an Apache web server and your web application installed. You can build or update images from scratch, or download and use images created by others. An image may be based on, or may extend, one or more other images. A Docker image is described in a text file known as a Dockerfile, which has a simple, well-defined syntax. For more details about images, see the section: How does a Docker image work?
Docker images are the build component of Docker.
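A minimal Dockerfile for the Ubuntu-plus-Apache example above might look like this (the base-image tag and application path are illustrative):

```dockerfile
# Start from an Ubuntu base image (tag is illustrative)
FROM ubuntu:16.04

# Install the Apache web server
RUN apt-get update && apt-get install -y apache2

# Copy your web application into Apache's document root (path is hypothetical)
COPY ./my-web-app /var/www/html/

# Expose the HTTP port and run Apache in the foreground
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
```

Running `docker build -t my-web-app .` in the directory containing this Dockerfile produces a reusable image.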
- Docker Containers: A Docker container is a runnable instance of a Docker image. You can run, start, stop, move, or delete a container using the Docker API or CLI commands. When you run a container, you can provide configuration metadata such as networking information or environment variables. Each container is an isolated and secure application platform, but it can be given access to resources running on a different host or container, as well as to persistent storage or databases.
For more details on containers, see the section: How does a container work?
Docker containers are the run component of Docker.
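The container lifecycle described above maps onto a handful of CLI commands. A sketch using the public `nginx` image (the port mapping and environment variable are illustrative configuration supplied at run time):

```shell
# Create and run a container from an image, passing run-time configuration:
docker run -d --name web -p 8080:80 -e APP_ENV=production nginx

docker stop web     # stop the running container
docker start web    # start it again (configuration is retained)
docker rm -f web    # delete the container
```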
- Docker Registries: A Docker registry is a library of images. A registry can be public or private, and it can sit on the same server as the Docker daemon or Docker client, or on a totally separate server.
For more details about registries, see the section: How does a Docker registry work?
Docker registries are the distribution component of Docker.
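Distribution through a registry boils down to pulling, tagging, and pushing images. A sketch assuming a private registry at a hypothetical address:

```shell
# Pull a public image from Docker Hub, the default public registry:
docker pull alpine

# Re-tag the image for a private registry and push it there
# (the registry address and repository name are illustrative):
docker tag alpine registry.example.com/team/alpine:1.0
docker push registry.example.com/team/alpine:1.0

# Any machine with access to that registry can now pull the image:
docker pull registry.example.com/team/alpine:1.0
```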
Variants of Docker
There are multiple variants of Docker available:
- Docker Enterprise Edition (Docker EE): It is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production at scale. Docker EE is integrated, certified, and supported to provide enterprises with the most secure container platform in the industry to modernize all applications.
- Docker Community Edition (Docker CE): It is ideal for developers and small teams looking to get started with Docker and experiment with container-based apps. Docker CE is available on many platforms, from desktops to clouds to servers, including native experiences for macOS and Windows that help you focus on learning Docker. You can build and share containers and automate the development pipeline, all from a single environment.
- Docker Cloud: It is a platform run by Docker that allows you to deploy your application using multiple cloud providers such as Digital Ocean, Packet, SoftLayer, or AWS.
Docker Platform Support
As of 18-03-2017, Docker supports the following platforms:
| Platform | Docker EE | Docker CE |
|---|---|---|
| Red Hat Enterprise Linux | Yes | |
| SUSE Linux Enterprise Server | Yes | |
| Microsoft Windows Server 2016 | Yes | |
| Microsoft Windows 10 | | Yes |
| Amazon Web Services | Yes | Yes |
In the DevOps/Docker course, we will learn more about installing Docker, play with Docker images (including creating, modifying, and executing them), and work with containers. We will also build a DevOps pipeline in which we deploy a Java application onto a Tomcat Docker container.
Enroll for the DevOps Training powered by Acadgild and get Certified.