

What do you do when you want Docker containers managed across vast fleets of servers and infrastructure? You use Docker orchestration tools like Kubernetes. Developed by Google, Kubernetes is essentially a cluster manager for Docker. With it, you can schedule and deploy any number of container replicas onto a node cluster, and Kubernetes will take care of decisions like which containers go on which servers for you. The name Kubernetes originates from Greek, meaning "helmsman" or "pilot," and that's the role it will fill in your Docker workflow. Kubernetes is a solution for overseeing and managing multiple containers at scale, rather than just working with Docker on a manually-configured host.

What Kubernetes Is... and What it Isn't

Docker and its ecosystem are great for managing images and running containers on a specific host. But these interactions are local: Docker alone can't manage your containers across multiple nodes, or schedule and manage tasks to be completed across your cluster. Docker alone works best when you're able to manually manipulate and configure the host. But we all know that only takes us so far.

In order to manage a Docker workload on a distributed cluster, you need to bring some other players into the mix. Several other services are available to make working with Docker more efficient and streamlined. Fleet, Geard, and Marathon are designed to schedule and orchestrate jobs on the host; Fleet also functions as a cluster manager, along with Apache Mesos. Theoretically speaking, you could mix and match these different components and come up with a roll-your-own solution. While Kubernetes was designed to make working with containers on Google Compute Engine easier, the bits are available for anyone to use, and you need not be running GCE. Kubernetes offers a few distinct advantages, first and foremost being that it packages all the necessary tools - orchestration, service discovery, load balancing - together in one nice package for you. Kubernetes also boasts heavy involvement from the developer community. Written in Go, the Kubernetes project has close to 2,500 commits from over 100 different contributors.

Despite heavy development, Kubernetes is still in beta, so you may discover some bugs. While Kubernetes does work as a cohesive package, there are several components at play, each with a specific role. Kubernetes also has a specific collection of terms, some of which are overloaded in the container/cloud space:

- Master: the managing machine, which oversees one or more minions.
- Minion: a slave that runs tasks as delegated by the user and the Kubernetes master.
- Pod: an application (or part of an application) that runs on a minion. This is the basic unit of manipulation in Kubernetes.
- Replication Controller: ensures that the requested number of pods are running on minions at all times.
- Label: an arbitrary key/value pair that the Replication Controller uses for service discovery.
- Service: an endpoint that provides load balancing across a replicated group of pods.

To manage resources within Kubernetes, you will interact with the Kubernetes API. Pulling down the Kubernetes binaries gives you all the services necessary to get your Kubernetes configuration up and running. Like most other cluster management solutions, Kubernetes works by creating a master, which exposes the Kubernetes API and allows you to request certain tasks to be completed. The master then spawns containers to handle the workload you've asked for. Aside from running Docker, each node runs the Kubelet service - an agent that works with the container manifest - and a proxy service. The Kubernetes control plane comprises many components, but they all run on the single Kubernetes master node.

Let's assume you've pulled down and started the Kubernetes services, and you're ready to build your first pod. Interacting with Kubernetes is quite simple, even outside the context of Google Compute Engine. Kubernetes provides a RESTful API for manipulating the three main resources: pods, services, and replicationControllers.

Pods

A pod file is a JSON representation of the task you want to run. A simple pod configuration for a Redis master container might look like this:
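The snippet below is only a sketch, modeled on the Redis master pod in the project's guestbook example: the field layout follows the v1beta1 API of the current beta (id, desiredState.manifest, containers, labels), so the exact field names and the dockerfile/redis image are assumptions rather than a verbatim copy of the file in the repository.

```json
{
  "id": "redis-master",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "redis-master",
      "containers": [{
        "name": "master",
        "image": "dockerfile/redis",
        "ports": [{
          "containerPort": 6379,
          "hostPort": 6379
        }]
      }]
    }
  },
  "labels": {
    "name": "redis-master"
  }
}
```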
In this example - adapted from the Kubernetes example project - you can see information pertaining to the service's source image, but also some Kubernetes-specific information, like the labels used for service discovery.

Let's take this pod and actually run it on a single-node cluster using Kubernetes.
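As a rough sketch of what that looks like with the kubecfg tool bundled alongside the Kubernetes binaries (and the RESTful API it wraps): the file name redis-master.json is assumed from the sketch above, and the script path, flags, and API port reflect the v1beta1 beta releases, so they may differ slightly on your checkout.

```sh
# Create the pod described in redis-master.json (file name assumed from the sketch above)
./cluster/kubecfg.sh -c redis-master.json create pods

# Check that the pod was scheduled and its container is running
./cluster/kubecfg.sh list pods

# The same listing via the RESTful API, assuming the apiserver is on its default local port
curl http://localhost:8080/api/v1beta1/pods
```

On a single-node cluster the master and minion are the same machine, so once the pod reports as running, a plain docker ps on that host should also show the Redis container.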