Installing Kubernetes locally: a practical mini-guide
Installing and running Kubernetes locally is the best way to see and understand all of its components and how they fit together.
Kubernetes runs almost everywhere: on different Linux distributions and on various cloud services (see the article "Kubernetes Cloud Services"). This compatibility with operating systems used all over the world is one of the tool's main strengths.
If you take a look at the official Kubernetes website, you will find a page documenting the status of the various distribution combinations, as well as a series of getting-started guides that can help you perform a correct local installation of Kubernetes.
That said, Kubernetes can also be deployed standalone: all you need is a Linux machine, physical or virtual, with Docker installed.
Before getting into detail, let's review the architecture of Kubernetes, as it will be useful as we read on:
- The master is the Kubernetes "control plane" and hosts a set of cluster-level components, including an API server that tools such as kubectl use to interact with the cluster.
- Each node runs a kubelet service, which receives orders from the master, and a container runtime (such as Docker, but not limited to it) with which the kubelet interacts to manage container requests. Let's see this in action and describe the components further as we set them up.
Local installation of Kubernetes
As anticipated in the introductory guide, only three steps are needed to get a standalone cluster up and running. Let's go through them together.
Step one: run etcd
# image tag and flags reconstructed from the v0.20-era guide; adjust to your version
docker run -d --net=host \
    gcr.io/google_containers/etcd:2.0.9 \
    /usr/local/bin/etcd \
    --addr=127.0.0.1:4001 \
    --bind-addr=0.0.0.0:4001 \
    --data-dir=/var/etcd/data
Kubernetes reliably stores its main state and configuration in etcd. The various master components "watch" this data and act accordingly, for example by launching a new container to maintain the desired number of replicas.
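To see this data store in action, once the etcd container from step one is up you can query its HTTP API directly from the host. This is a hypothetical check: port 4001 and the /registry key prefix match the v0.20-era defaults, and the fallback message is purely illustrative.

```shell
# Peek at the keys Kubernetes keeps in etcd (v2 HTTP API, default port 4001).
# The /registry prefix is where the 0.20-era master stored its state.
ETCD_KEYS="http://127.0.0.1:4001/v2/keys/registry?recursive=true"
response=$(curl -sL "$ETCD_KEYS" || echo "etcd not reachable; run step one first")
echo "$response"
```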
Step two: execute the Master
# image tag and flags reconstructed from the v0.20-era guide; adjust to your version
docker run -d --net=host \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gcr.io/google_containers/hyperkube:v0.20.1 \
    /hyperkube kubelet \
    --api_servers=http://localhost:8080 \
    --address=0.0.0.0 --enable_server \
    --hostname_override=127.0.0.1 \
    --config=/etc/kubernetes/manifests
This step executes a kubelet container, which in this local deployment in turn executes the cluster-level components that form the main Kubernetes “control plane.”
- API server: exposes the Kubernetes RESTful API for managing the cluster configuration, backed by the etcd data store.
- Scheduler: places pods on nodes based on rules (e.g., labels). At this stage, the scheduler is simple but, like most components in Kubernetes, it is configurable.
- Controller Manager: manages all cluster-level functions, including endpoint creation/updating, node discovery, management and monitoring, and pod management.
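Once the master is up, you can smoke-test the API server from the host. The /healthz endpoint and the insecure port 8080 match this deployment's defaults; if your flags differ, adjust the URL accordingly, and note the fallback message here is just for illustration.

```shell
# Ask the API server for its health; it answers "ok" when the control plane is up.
health=$(curl -s http://127.0.0.1:8080/healthz || echo "API server not reachable yet")
echo "$health"
```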
Step three: run the service proxy
# reconstructed as above; --privileged lets the proxy manage the host's iptables rules
docker run -d --net=host --privileged \
    gcr.io/google_containers/hyperkube:v0.20.1 \
    /hyperkube proxy \
    --master=http://127.0.0.1:8080 --v=2
The proxy (kube-proxy) runs on each node and provides simple network proxying and load balancing. It allows services to be exposed on a stable network address and name.
At this point we should have a locally running Kubernetes cluster. To make sure everything is okay, you can use docker ps to check the running container instances; the output should look like the image below (several columns have been omitted for brevity):
[Image: Kubernetes installation, docker ps output]
To interact with the cluster we need the kubectl CLI tool. It can be downloaded pre-built directly from Google (0.20.1 was the latest version at the time of writing).
# download URL follows the kubernetes-release bucket convention (assumed for v0.20.1)
wget https://storage.googleapis.com/kubernetes-release/release/v0.20.1/bin/linux/amd64/kubectl
chmod u+x kubectl
sudo mv kubectl /usr/local/bin/
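A quick way to confirm the binary is installed correctly (the server portion of the version output additionally requires the cluster from the previous steps to be reachable; the fallback message is illustrative):

```shell
# Print the client version, or a note if kubectl is not yet on the PATH.
kver=$(kubectl version 2>/dev/null || echo "kubectl not found on PATH")
echo "$kver"
```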
The fastest way to start a container in the cluster is the kubectl run command: to start an Nginx container, for example, you just pass a name and the container image.
kubectl run web --image=nginx
Although this is convenient and fine for experimentation, resources such as pods are generally created from configuration artifacts, which have the advantage that they can be versioned with Git, making rollbacks easier, for example.
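As a sketch of that workflow (the repository location, file contents, and commit message are all illustrative), you would track the manifest in Git and roll back a bad change with an ordinary revert:

```shell
# Create a throwaway repository and commit a manifest to it.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # local identity, just for the example
git config user.name  "Demo"
printf 'kind: Pod\nmetadata:\n  name: web1\n' > nginx-pod.yml
git add nginx-pod.yml
git commit -qm "Add nginx pod manifest"
log=$(git log --oneline)
echo "$log"
```

From here, `git revert` on a later bad commit restores the previous known-good manifest, which can then be re-applied to the cluster.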
If YAML (or JSON) is used, a pod and its desired state are declared; it is then the job of Kubernetes to ensure that this pod always exists according to this specification until it is deleted.
Here is a simple configuration example for that nginx pod (file nginx-pod.yml):
apiVersion: v1beta3   # current API version in the 0.20 era; newer clusters use v1
kind: Pod
metadata:
  name: web1
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx
This runs the same container image (nginx) as above, but here the resource configuration also specifies some additional metadata, e.g. a label (key/value pair) that gives this pod the label "app" with value "web".
kubectl create -f nginx-pod.yml
Behind the scenes, Kubernetes will instruct a node to pull the container image and then start the corresponding container.
A few moments later, use kubectl again and you should see the nginx pod running.
$ kubectl get pods
POD    IP           CONTAINER(S)   IMAGE(S)   HOST                  LABELS    STATUS    CREATED
web1   172.17.0.1   nginx          nginx      127.0.0.1/127.0.0.1   app=web   Running   About a minute
As requested, an Nginx container is now running in a pod (web1 in this case), and it has been assigned an IP address. In Kubernetes, every pod gets an IP address on an internal network, which means pods can communicate with each other. In this case, however, we deployed Kubernetes with Docker using the host network (--net=host), so these IP addresses are also reachable locally.
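Pod IPs are not stable: if a pod is rescheduled it gets a new one. The usual next step is therefore to put a service in front of pods selected by their label. A minimal sketch, with the service name and port chosen for illustration and apiVersion v1beta3 assumed to match the 0.20-era API used here, might look like:

kind: Service
apiVersion: v1beta3   # assumed for the 0.20-era API; newer clusters use v1
metadata:
  name: web
spec:
  selector:
    app: web          # matches the label on the nginx pod above
  ports:
    - port: 80

Created with kubectl create -f, such a service gets a stable address that kube-proxy makes reachable from every node, regardless of where the matching pods land.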