 

How To Use Docker To Deploy Applications And Services


Docker is an increasingly popular platform for streamlining cloud deployments. Here is an example deployment discussed in detail.

Supporting production applications has always had its challenges, but the pace of DevOps has added tighter deployment cycles to the list. Docker, an alternative to hypervisor-based virtualization, may be a key to keeping up with the demands of DevOps without sacrificing control and quality of your deployment processes.

Intro to Lightweight Virtualization Through Docker Containers

Docker is a lightweight container virtualization technology that does not require a hypervisor. Instead, programs run in containers isolated from other processes by operating system controls. Docker works closely with the Docker Hub, a repository of Docker images.

Docker is particularly useful in DevOps environments, allowing developers to create and test applications based on multifaceted software stacks and libraries. Once the app configuration is complete and tested, developers can create a Docker image of the application and related dependencies. System administrators can then deploy the Docker images without having to wrestle with make files and potential compiler errors.

Another advantage of Docker is that it runs on a wide range of platforms, from laptops to servers. Although Docker was originally designed for Linux, the Windows Docker client allows Docker images to run on Windows 7 and 8 operating systems.

Docker logically isolates applications from computing infrastructure. Unlike other virtualization technologies, however, there is no need for a hypervisor. A Docker daemon runs on a server and performs core tasks, such as building and running Docker images. Users work with the Docker daemon through the client application, a command line tool called "docker."

The Docker platform includes clients, hosts running containers, and registries of image specifications.

Docker Puzzle Pieces

Three important elements of Docker are images, registries and containers.

Docker images are templates that are read to create containers. Images are collected in repositories called Docker registries. The primary public Docker repository is the Docker Hub. Finally, a Docker container is a collection of all components needed to run an application.
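A typical session touches all three pieces. As a sketch (the image name is illustrative, and these commands assume a running Docker daemon):

```shell
# Search a registry (Docker Hub by default) for an image
docker search ubuntu

# Pull an image template from the registry to the local host
docker pull ubuntu

# Create and start a container from the image, with an interactive shell
docker run -i -t ubuntu /bin/bash

# List running containers
docker ps
```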

The Docker Hub is a repository of shared Docker images that saves you time by providing access to existing images for a wide range of applications.

Docker uses the concept of layers to streamline management. All images are built on a base image, such as an image for a Linux distribution, usually retrieved from the Docker Hub. Docker commands are applied to the base image to add components, define configuration settings and add files. These commands are typically stored in a Dockerfile, but they can be executed manually as well.
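As a sketch of how layered builds look in practice, here is a minimal hypothetical Dockerfile; the base image tag, package names and file paths are all illustrative, not taken from the article:

```dockerfile
# Each instruction below adds a layer on top of the base image
FROM ubuntu:14.04

# Add components: install Python and a package manager
RUN apt-get update && apt-get install -y python python-pip

# Define configuration settings
ENV APP_HOME /opt/app
WORKDIR /opt/app

# Add files from the build context into the image
COPY . /opt/app

# Default command to run when a container starts
CMD ["python", "app.py"]
```

Building from a Dockerfile rather than running the commands manually makes the image reproducible: anyone with the file can rebuild the same stack of layers.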

Docker uses several operating system features to implement lightweight virtualization: namespaces, control groups, and union file systems.

A fundamental component of container isolation is the use of namespaces. When a new container is created, Docker creates namespaces for the container to prevent any interactions between the application running inside the new container and the host environment. The use of control groups is closely related to Docker namespaces.

Unique control groups are also made when a new container is created. Control groups, also known as cgroups, act as a resource control system for a particular container. They make sure resources, such as memory and disk I/O, are properly allocated to the container. This includes acting as a failsafe for containers that begin to soak too many resources. Control groups can also provide metrics on resource use via a pseudo-filesystem. Control groups are key to making sure containers can properly run on a multi-tenant system. 
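Resource limits backed by control groups can be set when a container is started. A hedged example (the flag values and image name are illustrative):

```shell
# Cap the container's memory at 512 MB and give it a reduced CPU share;
# Docker translates these flags into cgroup settings for this container
docker run -m 512m --cpu-shares 256 -i -t ubuntu /bin/bash
```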

Docker uses union file systems to create file layers. The Docker union file system is a conglomeration of all read-write and read-only layers. Docker supports multiple union file systems, including AUFS, vfs, btrfs and DeviceMapper.

Docker Example

Let's assume you are working with a group of data scientists who need to perform a number of ad hoc analysis projects. They like to use a core set of tools that includes Python and related data analysis packages, along with the statistics package R. Depending on the problem at hand, they like to install other tools as needed. Rather than try to keep a single Linux environment that works well for all cases, the data scientists have decided to build a base Docker image with core tools. They will deploy it and add tools as needed for each of their specific projects.

The system administrator builds the base image, starting with the docker search command to find an Ubuntu image in the Docker repository. After downloading the latest version of Ubuntu, the system administrator uses the docker run command to execute the Ubuntu image. The attach command is used to connect to the running instance and run commands.

From there, the system administrator downloads and installs the Python and R tools needed using the same commands someone would use on any Ubuntu instance. The last step in the process is issuing the Docker commit command and specifying the name of the image to save as a Docker image file. This will save a specification for the Ubuntu operating system plus the packages installed.
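The workflow described above might look like the following hypothetical session; the package list and the image name datasci/base are illustrative, and the commands assume a running Docker daemon:

```shell
# Find and download an Ubuntu base image from Docker Hub
docker search ubuntu
docker pull ubuntu:latest

# Start a container from the image with an interactive shell
docker run -i -t ubuntu:latest /bin/bash

# Inside the container: install the analysis tools exactly as on
# any Ubuntu instance
apt-get update
apt-get install -y python python-numpy python-scipy r-base

# Back on the host after exiting the shell: find the container ID,
# then save the modified container as a new image
docker ps -a
docker commit <container-id> datasci/base
```

The committed image now captures the Ubuntu operating system plus the installed packages, and each data scientist can start project containers from datasci/base.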

Docker in Practice

As for Docker best practices, there are a few things to keep in mind. First, try to keep container configuration simple; this prevents additional hassle when destroying a container to make a replacement. Keeping containers simple includes trimming any non-essential software packages. It is also important to keep file layers to a minimum. This all feeds into making sure each container is simple and easy to rebuild or replace.

In terms of usability, running a single process per container can increase your horizontal scalability; rather than bundling everything together, separate a multi-process application into multiple containers. Make sure to clearly tag each Docker image. This can be done by passing '-t' to 'docker build,' after which the resulting image will be clearly tagged.
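Tagging at build time might look like this (the repository and tag names are illustrative):

```shell
# Build an image from the Dockerfile in the current directory
# and tag it with a repository name and a version
docker build -t datasci/base:1.0 .

# The tag then appears in the local image list
docker images
```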

Finally, take advantage of the Docker Trusted Build tool. Trusted Build is a repository of public Dockerfiles, along with links to their GitHub projects. This allows you to see how certain Dockerfiles were created and used.

DevOps has introduced new demands on system administrators with faster and more frequent deployments. Developers are taking advantage of a wide range of software tools and platforms. Maintaining all the moving parts of a complex software stack adds yet another dimension of complexity to application development. Docker cannot make these tasks go away, but it can help mitigate the risks of misconfiguration, bad builds and some of the other issues too often faced by system administrators.
