What is a container?
Secret: there is no such object as a "container"! Unlike a Linux file system, directory, user, or process, a "container" is not an object but a technique: certain kernel attributes are applied to a standard Linux process (or set of processes) to achieve program isolation and resource management that cannot be realized using any of the standard Linux process management tools.
These attributes provide two important capabilities. First, direct and granular control over the resources, such as CPU and memory, available to the process (via control groups, or "cgroups"). Second, partitioned sets of process IDs, user IDs, disk mounts, and network interfaces (via namespaces). Think "virtual reality goggles" for processes.
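A minimal sketch of these two kernel features, assuming a Linux host with the util-linux `unshare` tool and cgroups v2 mounted at the usual path; both operations require root:

```shell
# Namespaces: start a shell in new PID and mount namespaces. Inside,
# the shell sees itself as PID 1 and gets a private view of /proc.
sudo unshare --pid --fork --mount-proc /bin/sh -c 'echo "my PID: $$"; ps ax'

# cgroups: create a control group and cap the memory available to any
# process placed in it, by writing to the cgroup filesystem.
sudo mkdir /sys/fs/cgroup/demo
echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max
```

This is exactly the kind of low-level, multi-step plumbing that container tooling packages up behind a single command.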
These attributes allow a developer to create a custom Linux environment for an application that is independent of the environment of the host on which it is running.
These attributes are similar to those of VMs, which is why containers are often compared to (and even confused with) VMs.
Similar capabilities have existed in UNIX-like operating systems for many years (BSD jails, Solaris Zones, and the basic UNIX chroot), and the underlying attributes have been in the Linux kernel for a long time, but they languished mostly unused because of their obscure nature and the lack of tools to create and manage them in a unified way.
Several toolsets are available to provide this unified container management system, but Docker has become the most popular.
Some people think Docker is containers, but this is a misunderstanding: Docker is a system for container creation and management. It bundles the multiple operations necessary for creating and managing containers, abstracting them behind a simpler, process-like metaphor.
- Disk images: Docker provides the capability to create a disk image that is used as the container's root file system. This is the primary feature that gives Docker containers their VM-like appearance. Images are created using a "Dockerfile", a makefile-like specification file.
- Container invocation and runtime management: Docker provides a command line tool to start, stop, and monitor containers. Text labels can be attached to containers as metadata.
- Network mapping: Docker allows the developer to create virtual network interfaces inside the container, and map those to interfaces on the host where the container is running.
- Image repository: Docker provides a public library of sorts where developers can store and publish their container images.
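An end-to-end sketch of the four capabilities above, assuming Docker is installed and running; the image name `example/hello-web` and the application file are made up for illustration:

```shell
# Disk image: a minimal Dockerfile describing the container's file system.
cat > Dockerfile <<'EOF'
# Start from a base image pulled from the public repository
FROM python:3-slim
# Add our application into the image
COPY app.py /app/app.py
# Command to run when the container starts
CMD ["python", "/app/app.py"]
EOF

# Build the image and tag it:
docker build -t example/hello-web:1.0 .

# Invocation: run detached, with a metadata label, and with network
# mapping of host port 8080 to container port 80:
docker run -d --label env=demo -p 8080:80 example/hello-web:1.0

# Runtime management: inspect and stop the container:
docker ps --filter label=env=demo
docker stop <container-id>

# Image repository: publish the image (requires a registry account):
docker push example/hello-web:1.0
```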
Docker's tooling for a single host has three main components:
- Docker Engine: a daemon that supervises containers on a host. Interaction is through its REST API or the "docker" command line tool.
- docker: a command line tool for performing container operations (e.g. start, stop, create, remove, show status).
- Dockerfile: a definition file used to create container images.
The three components above relate to a single host. Docker has further components for creating and managing a multi-host environment:
- Docker Machine: a tool to provision hosts to run Docker Engine; it understands how to provision directly on various cloud service providers.
- Docker Compose: a tool for defining and running applications that are made up of multiple containers.
- Docker Swarm: a clustering system to manage containers and sets of containers across multiple hosts.
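To make the Compose idea concrete, here is an illustrative `docker-compose.yml` for a hypothetical two-container application (the service and image names are examples, not from any real project):

```yaml
# docker-compose.yml (illustrative): a web service plus a database,
# each running in its own container on a shared virtual network.
version: "2"
services:
  web:
    image: example/hello-web:1.0
    ports:
      - "8080:80"   # map host port 8080 to the container's port 80
  db:
    image: postgres
```

With this file in place, `docker-compose up` starts both containers together, and `docker-compose down` tears them down.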
Docker's developing product line looks an awful lot like the VMware product line, further reinforcing the mistaken notion that a container is a "kind" of VM.