“So you have this [insert new technology] thing running on your machine? Great! Could you pass it to me as well, so I can play with it?” ~ coworker
We all know situations like these, and we usually grab our trusty external HDD, copy the huge virtual machine image onto it, and hand it over. It is not uncommon for these virtual machines to be at least 30GB in size, which makes handling them unwieldy.
Surely, there must be a nicer way of handling this?!
Well, there actually is, and it eliminates the entire guest OS from the equation! It is called Docker, and it promises to make your life a whole lot easier.
What is Docker?
Docker itself says the following about their product:
Docker allows you to package an application with all of its dependencies into a standardized unit for software development.
How do they do it, you might ask?
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it runs in.
What they mean is that you are able to get everything needed to run your application in one convenient package, which also happens to be portable enough to be deployed on a variety of hardware and software as long as Docker runs on it.
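That "convenient package" is described by a Dockerfile. The one below is a made-up sketch for a hypothetical Python web app (the base image, file names and command are illustrative, not from this post):

```dockerfile
# Start from an official, minimal base image
FROM python:3-slim

# Copy the application and its dependency list into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# The command the container runs on start
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces a self-contained image: code, runtime and libraries travel together, so the result behaves the same on any host that runs Docker.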
Containers, containers everywhere!
The magic word here is containers, an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux control host, avoiding the overhead of starting and maintaining virtual machines.
The Linux kernel (since version 2.6.24) provides the cgroups functionality that allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need for starting any virtual machines, and namespace isolation functionality that allows complete isolation of an application's view of the operating environment, including process trees, networking, user IDs and mounted file systems.
Docker makes extensive use of containers and provides an additional layer of abstraction and automation to the creation and management of containers. This is also its power, and the reason why Docker is often referred to as disruptive technology.
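To get a feel for how lightweight this is, starting a container is a one-liner. The commands below assume a working Docker installation; the `alpine` image is just a convenient, tiny example:

```shell
# Pull (if needed) and run a tiny image; no guest OS boots,
# so the container starts in roughly a second
docker run --rm alpine echo "hello from a container"

# Each container gets its own process tree and filesystem view
# while sharing the host kernel
docker run --rm alpine ps aux
```

Compare that to waiting minutes for a full virtual machine to boot just to run a single command.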
Docker is available for a host of operating systems, including all major Linux distributions like Ubuntu, Debian, CentOS, Fedora, Red Hat Enterprise Linux, SUSE and Oracle Enterprise Linux. It is also available for Mac OS X and Windows through the tool Boot2Docker, or Kitematic (Mac only). Boot2Docker creates a Linux virtual machine on the host system (Windows or Mac OS X) to run Docker on a Linux operating system. However, it has some disadvantages compared to a full-fledged Docker installation on Linux, mainly around accessing files on the host from the Docker container and vice versa.
Enough with the techno-chatter, onwards with images.
How is Docker different from virtual machines?
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient.
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as isolated processes in userspace on the host operating system. They're also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.
By packaging up the application together with its configuration and dependencies and shipping it as a container, the application will always work as designed: locally, on another machine, in test or in production. No more worrying about recreating the same configuration in a different environment.
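This also solves the opening problem of handing your setup to a coworker: instead of copying a 30GB virtual machine over an external HDD, you share the image. A sketch, assuming Docker on both machines and a hypothetical image called `myapp` (the registry host is made up for the example):

```shell
# Export the image to a compressed tarball and hand that over...
docker save myapp | gzip > myapp.tar.gz

# ...or push it to a registry so colleagues can simply pull it
docker tag myapp myregistry.example.com/myapp
docker push myregistry.example.com/myapp

# On the coworker's machine:
docker load < myapp.tar.gz               # from the tarball, or
docker pull myregistry.example.com/myapp # from the registry
docker run myapp
```

Either way, the image is typically a fraction of the size of a full virtual machine, because it contains only the application and its dependencies, not an entire guest OS.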
This post is part of a series on virtualization with Docker.
- What is Docker?
- Setting up Docker
- Creating an Oracle SOA Suite 12c Docker image
- Resizing the disk on a VirtualBox OEL7 image
- Jenkins: Setting up a Shared Library for your pipelines
- Jenkins: Using Gradle to build your Shared Library
- Jenkins: Creating a custom pipeline step in your library
- Jenkins: Running a declarative pipeline from your Shared Library