Monday, 15 June 2015

Docker and Containers

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

Why Docker?
VM hypervisors, such as Hyper-V, KVM, and Xen, are all based on emulating virtual hardware. That means they are fat in terms of system requirements. Containers, however, use a shared operating system, which makes them much more efficient than hypervisors in terms of system resources. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can leave behind the useless 99.9% of VM junk, leaving you with a small, neat capsule containing your application.
Docker is built on top of LXC. As with any container technology, as far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on. The key difference between containers and VMs is that while the hypervisor abstracts an entire device, containers abstract only the operating system kernel.
This, in turn, means that one thing hypervisors can do that containers cannot is use different operating systems or kernels. With Docker, all containers on a host share the same operating system kernel.


Development teams can build their application with all of its dependencies, run it in development and test environments, and then just ship the exact same bundle of application and dependencies to production. 

The original vision for Docker was a way to run your application anywhere without worrying about how it gets there. Zero-downtime deployment in this model is generally done in the blue-green style: you launch the new generation of an application alongside the old generation, and then slowly shift new work over to the new generation.
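As a rough sketch of that blue-green style using plain Docker commands (the `myapp` image and container names here are hypothetical placeholders, as is the health-check endpoint):

```shell
# The old ("blue") generation, myapp-blue, is already serving traffic.
# Launch the new ("green") generation alongside it on a different host port.
$ docker run -d --name myapp-green -p 8081:8080 myapp:2.0

# Verify the new generation before sending it real traffic.
$ curl -f http://localhost:8081/health

# Repoint the load balancer or proxy at port 8081, then retire the
# old generation once its in-flight requests have drained.
$ docker stop myapp-blue
$ docker rm myapp-blue
```

The load balancer switch itself happens outside Docker; the containers only provide the two generations to switch between.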
Deploying Stateless Applications

It is strongly recommended to start with stateless applications, which are normally designed to immediately answer a single self-contained request and have no need to track information between requests from one or more clients.


Containers vs. VMs


Think of containers not as virtual machines, but as very lightweight wrappers around a single Unix process. Containers are also ephemeral: they may come and go much more readily than a virtual machine. A particular container might exist for months, or it might be created, run a task for a minute, and then be destroyed. Virtual machines, by contrast, are by design a stand-in for real hardware that you might throw in a rack and leave there for a few years.
A new container is so small because it is just a reference to a layered filesystem image and some metadata about its configuration.

It’s best to design a solution where the state can be stored in a centralized location that could be accessed regardless of which host a container runs on.

Main Components:

Docker client
The docker command used to control most of the Docker workflow and talk to remote Docker servers.

Docker server
The docker command run in daemon mode. This turns a Linux system into a Docker server that can have containers deployed, launched, and torn down via a remote client.
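For example, the same binary plays both roles. In this sketch, the server address is a placeholder, and depending on the Docker version the daemon is started with `docker -d` or `docker daemon`:

```shell
# On the server: run Docker in daemon mode, listening on a TCP socket
# in addition to the default local Unix socket.
$ docker -d -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# On a client machine: point the docker command at that remote server.
$ docker -H tcp://server.example.com:2375 ps
```

Note that an unauthenticated TCP socket like this should only be exposed on a trusted network.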

Docker images
Docker images consist of one or more filesystem layers and some important metadata that represent all the files required to run a Dockerized application. A single Docker image can be copied to numerous hosts. An image will typically have both a name and a tag. The tag is generally used to identify a particular release of an image.

Every Docker container is based on an image, which provides the basis for everything that you will ever deploy and run with Docker. To launch a container, you must either download a public image or create your own.
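For instance, pulling a public image from Docker Hub and launching a container from it looks like this:

```shell
# Download a public image, identified by name:tag, from Docker Hub.
$ docker pull ubuntu:14.04

# List the images now available on this host.
$ docker images

# Launch a container based on that image, with an interactive shell.
$ docker run -it ubuntu:14.04 /bin/bash
```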


Docker container
A Docker container is a Linux container that has been instantiated from a Docker image. A specific container can only exist once; however, you can easily create multiple containers from the same image.
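As a sketch of that one-image-to-many-containers relationship (the container names here are arbitrary examples):

```shell
# Three independent containers instantiated from the same image.
$ docker run -d --name web1 nginx
$ docker run -d --name web2 nginx
$ docker run -d --name web3 nginx

# Each shows up as a separate container with its own filesystem and
# process, while all three share the same underlying image layers.
$ docker ps
```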


A container is a self-contained execution environment that shares the kernel of the host system and which is (optionally) isolated from other containers in the system.
The major advantage is efficiency of resources, because you do not need a whole operating system for each isolated function.


When a process is running inside a container, there is only a very thin shim inside the kernel, rather than potentially calling up into a whole second kernel while bouncing in and out of privileged mode on the processor.


Think of a container as a wrapper around a process that actually runs on the server.



Container Config

When you create a container, it is built from the underlying image, but various command-line arguments can affect the final settings. Settings specified in the Dockerfile are always used as defaults, but you can override many of them at creation time.
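As a sketch of defaults versus overrides (the image name, environment variable, and command are all hypothetical), a Dockerfile might bake in defaults like:

```
FROM ubuntu:14.04
# Default settings, used unless overridden at container creation time.
ENV APP_ENV production
CMD ["./server", "--port", "8080"]
```

which can then be overridden on the command line when the container is created:

```shell
# Override the environment variable and the default command for this
# one container; the image itself is unchanged.
$ docker run -e APP_ENV=staging myapp ./server --port 9090
```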


