Deploy Kubernetes on Digital Ocean: mini practical guide

Deploy Kubernetes on Digital Ocean: the helpful mini guide

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service (link to Kubernetes Cloud Services article) that enables deployment of Kubernetes clusters without the complexities of control plane and infrastructure management. The clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancer and block storage volumes.

Deploy Kubernetes on Digital Ocean step by step

You can create a DigitalOcean Kubernetes cluster at any time from the DigitalOcean control panel by opening the Create menu in the upper right corner.

[Image: the Create menu in the DigitalOcean control panel]

On the Create menu, click Kubernetes to go to the Create a cluster page. On this page you choose a Kubernetes version, a data center region, and the capacity for your cluster.

Select a version of Kubernetes

The latest patch release of each of the three most recent minor versions of Kubernetes is available for creating new clusters. The latest stable version is selected by default.

Choose a region of the datacenter

Choose the region for your cluster. The control plane and worker nodes of your cluster will be in the same region.

If you plan to use block storage volumes for persistent data, choose a region that supports them. If you add a Load Balancer to your deployment, it is automatically placed in the same region as the cluster.

VPC Network

In the VPC Network section, choose a VPC network for the cluster. You can choose one you have created or use the default network for that data center region.

The VPC adds a network interface that can only be reached by other resources in the same VPC network, which prevents this traffic from being routed outside the data center over the public internet.

Choose cluster capacity

To create a cluster, you must add a node pool with at least one worker node. Specify the following fields for the node pool:

  • Name of node pool. Choose a name for the node pool. Worker nodes created in the pool inherit this naming scheme. If you rename the node pool later, nodes adopt the new scheme only when they are recreated (when you recycle the nodes or resize the node pool).
  • Machine type (Droplet). You can choose from several plans: Basic (Standard CPU, Intel Premium CPU, or AMD Premium CPU), General Purpose, or CPU-Optimized Droplets.
  • Node plan. Choose the specific plan you want for your worker nodes.
  • To take advantage of different resource capacities, you can create additional node pools with the “Add additional node pools” button and assign pods to them with the appropriate scheduling constraints (see the sketch after this list).
  • Number of nodes. Choose how many nodes to include in the node pool. Three worker nodes are selected by default, as this is the minimum number recommended to guarantee high availability.
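As a hedged illustration of such a scheduling constraint, the sketch below pins a pod to a specific node pool with a nodeSelector. It assumes the label key doks.digitalocean.com/node-pool, which DOKS applies to worker nodes; the pod name and the pool name pool-highmem are hypothetical:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod            # hypothetical name
spec:
  nodeSelector:
    doks.digitalocean.com/node-pool: pool-highmem   # hypothetical pool name
  containers:
    - name: app
      image: nginx            # any image works; nginx is just an example
EOF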

At the bottom of this section, you will see the monthly charge for your cluster based on the resources you have chosen. When you create the cluster, billing begins for each resource type (e.g., worker nodes, block storage, load balancers).

Add tags

Clusters automatically have three tags:

  • k8s
  • The specific cluster ID, such as k8s:EXAMPLEc-3515-4a0c-91a3-2452eEXAMPLE
  • The resource type (e.g., k8s:worker)

You can also add custom tags to a cluster and its node pools on the Overview and Nodes pages. Any custom tags added directly to worker nodes in a node pool (e.g., from the Droplet page) are removed to keep the node pool and its worker nodes consistent.

Choose a name

By default, cluster names begin with k8s, followed by the Kubernetes version, data center region, and cluster ID. You can customize the cluster name, which will also be used in the tag.

Select the project

The new cluster belongs to your default project. You can assign the cluster to a different project.

You can also change the project after creating the cluster. Go to the Kubernetes page in the control panel. From the cluster’s More menu, select Move to and choose the project to which you want to move the cluster.

Associated resources, such as load balancers and storage volumes, also move when you move the cluster to a different project.

Create the cluster

After entering the other settings, click the Create Cluster button. Cluster creation can take several minutes to complete.

Once the cluster has been provisioned, you can manage it with kubectl, the official Kubernetes command-line client.
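For example, here is a minimal sketch of pointing kubectl at the new cluster, assuming the doctl CLI is installed and authenticated and that <CLUSTER-NAME> is the name you chose above:

doctl kubernetes cluster kubeconfig save <CLUSTER-NAME>   # download and merge the cluster's kubeconfig
kubectl get nodes                                         # the worker nodes should report Ready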

In conclusion

Setting up Kubernetes on Digital Ocean is very simple and is done entirely through the graphical interface of the Digital Ocean control panel. We encourage you to read the official Kubernetes and Digital Ocean documentation for further details.

Kubernetes Azure: how to run Kubernetes on Azure

It is possible to create a Kubernetes cluster either through the Azure Web interface or via command line.

Kubernetes Azure: everything you need to know

Let’s see how to install Kubernetes on MS Azure via the command line.

Preparing the shell is the first step!

Two options are available: one is to use Azure’s interactive shell, the other is to install Azure’s command-line tools locally. Instructions for each are given below.

1 – Azure Interactive Shell

The Azure portal contains an interactive shell that you can use to communicate with your Kubernetes cluster. To access it, go to portal.azure.com, log in, and click the Cloud Shell button.

[Image: launching the Azure Cloud Shell]

2 – Command line tools locally

You can access the Azure command line interface through a package that you can install locally.

To do this, first follow the installation instructions in the Azure documentation. Then run the following command to connect your local CLI with your account:

az login

A browser window will open; follow the instructions in your terminal to sign in.

Activate subscription

Azure uses the concept of subscriptions to manage costs. You can list the subscriptions your account has access to by running:

az account list --refresh --output table

Choose the subscription you want to use to create the cluster and set it as the default. If you have only one subscription, you can ignore this step.

az account set --subscription <SUBSCRIPTION-NAME>

Create a resource group

Azure uses resource groups to group related resources. We need to create a resource group in a given data center location; we will create our compute resources within this group.

az group create \
   --name=<RESOURCE-GROUP-NAME> \
   --location=centralus \
   --output table

Set a name for the cluster

In the following steps we will run commands that ask you to enter a cluster name. We recommend that you use something descriptive and short. We will refer to this as <CLUSTER-NAME> for the rest of this section.

The next step will create some files on your filesystem, so first create a folder for them. We recommend giving it the same name as your cluster:

mkdir <CLUSTER-NAME>
cd <CLUSTER-NAME>

Create an SSH key

ssh-keygen -f ssh-key-<CLUSTER-NAME>

You will be asked for a passphrase, which you can leave blank if you wish. This creates a public key named ssh-key-<CLUSTER-NAME>.pub and a private key named ssh-key-<CLUSTER-NAME>. Make sure both end up in the folder we created earlier, and keep them both safe!

Create a virtual network and sub-network

By default, Kubernetes has no controller that enforces network resources and policies. NetworkPolicy resources are important because they define how Kubernetes pods may communicate securely with each other and with the outside world, e.g., the internet.
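As a hedged illustration (to be applied once the cluster exists), the minimal NetworkPolicy below denies all ingress traffic to the pods of a namespace; the policy name is hypothetical:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # hypothetical name
spec:
  podSelector: {}              # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules are listed, so all ingress is denied
EOF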

To enable this in Azure, we must first create a virtual network with Azure’s network policies enabled.

az network vnet create \
   --resource-group <RESOURCE-GROUP-NAME> \
   --name <VNET-NAME> \
   --address-prefixes 10.0.0.0/8 \
   --subnet-name <SUBNET-NAME> \
   --subnet-prefix 10.240.0.0/16

Now retrieve the IDs of the virtual network and subnet we just created and save them in bash variables.

VNET_ID=$(az network vnet show \
   --resource-group <RESOURCE-GROUP-NAME> \
   --name <VNET-NAME> \
   --query id \
   --output tsv)

SUBNET_ID=$(az network vnet subnet show \
   --resource-group <RESOURCE-GROUP-NAME> \
   --vnet-name <VNET-NAME> \
   --name <SUBNET-NAME> \
   --query id \
   --output tsv)

Next, create an Azure Active Directory (Azure AD) service principal to use with the cluster, and assign it the Contributor role scoped to the virtual network. Make sure <SERVICE-PRINCIPAL-NAME> is something recognizable, for example binderhub-sp.

SP_PASSWD=$(az ad sp create-for-rbac \
   --name <SERVICE-PRINCIPAL-NAME> \
   --role Contributor \
   --scopes $VNET_ID \
   --query password \
   --output tsv)

SP_ID=$(az ad sp show \
   --id http://<SERVICE-PRINCIPAL-NAME> \
   --query appId \
   --output tsv)

Create the Kubernetes cluster

At this point you are ready to provision the Kubernetes cluster. The following command creates a Kubernetes cluster (link to the What is Kubernetes article) within the resource group we created earlier.

az aks create \
   --name <CLUSTER-NAME> \
   --resource-group <RESOURCE-GROUP-NAME> \
   --ssh-key-value ssh-key-<CLUSTER-NAME>.pub \
   --node-count 3 \
   --node-vm-size Standard_D2s_v3 \
   --service-principal $SP_ID \
   --client-secret $SP_PASSWD \
   --dns-service-ip 10.0.0.10 \
   --docker-bridge-address 172.17.0.1/16 \
   --network-plugin azure \
   --network-policy azure \
   --service-cidr 10.0.0.0/16 \
   --vnet-subnet-id $SUBNET_ID \
   --output table

This should take a few minutes and provide you with a working Kubernetes cluster!

Install kubectl

If you are using the Azure CLI locally, install kubectl, a tool for accessing the Kubernetes API from the command line:

az aks install-cli

Note: kubectl is already installed in Azure Cloud Shell.

Get credentials for kubectl

az aks get-credentials \
   --name <CLUSTER-NAME> \
   --resource-group <RESOURCE-GROUP-NAME> \
   --output table

This command will automatically update the Kubernetes configuration file.
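To double-check which cluster kubectl now points at, you can inspect the active context (a quick hedged check; the context name will vary):

kubectl config current-context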

Verify that the cluster is working

kubectl get node

The response should list three running nodes and their Kubernetes versions. Each node should have the status Ready. Keep in mind that this may take a few moments.
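The output should look roughly like this (node names, ages, and versions are illustrative):

NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-12345678-0   Ready    agent   3m    v1.x.y
aks-nodepool1-12345678-1   Ready    agent   3m    v1.x.y
aks-nodepool1-12345678-2   Ready    agent   3m    v1.x.y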

In conclusion

We have seen how to create a Kubernetes cluster in a few simple steps. We always advise you to refer to the official Azure documentation for further details.

How to do Kubernetes setup

Let’s see how to do Kubernetes setup on a server cluster. The example is applied on a machine cluster running CentOS7 but can be replicated on any Linux-based machine.

Kubernetes Set Up

Let’s look together at all the steps of the Kubernetes setup (link to the What is Kubernetes article) in as much detail as possible.

Prerequisites

We need:

  • Multiple servers running CentOS7 (1 master node and 1+ worker nodes)
  • One user account on each machine with administrative privileges
  • Docker (link to the What is Docker article) installed on each machine

Configuring the Kubernetes repository

First, keep in mind that Kubernetes packages are not available in the official CentOS 7 repositories, so the repository must be added manually. This step must be executed on the master node and on each worker node.

Run the following command to add the repository from which the Kubernetes packages will then be installed; the file body is the standard upstream Kubernetes yum repository definition:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Installation of kubelet, kubeadm, and kubectl

These three basic packages are required to use Kubernetes. Install them on each node as shown in the example:

sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet
sudo systemctl start kubelet
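To verify the installation, you can print the versions (a quick hedged check; the exact versions depend on the packages installed):

kubeadm version -o short
kubectl version --client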

Before deploying a cluster, be sure to set the hostnames and configure the firewall and kernel settings, as described below.

Set the hostname of a node

To give a unique host name to each of your nodes, use this command:

sudo hostnamectl set-hostname master-node

or

sudo hostnamectl set-hostname worker-node1

In this example, the master node is now named master-node, while a worker node is named worker-node1.

Next, create DNS records (or hosts-file entries) so that every node can resolve the hostnames of the others:

192.168.1.10 master.example.com master-node
192.168.1.20 node1.example.com worker-node1
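For a small cluster without a DNS server, here is a hedged sketch that appends these entries to /etc/hosts on every node (adjust the IP addresses and names to your own):

cat <<EOF | sudo tee -a /etc/hosts
192.168.1.10 master.example.com master-node
192.168.1.20 node1.example.com worker-node1
EOF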

Configure the firewall

Nodes, containers, and pods must be able to communicate across the cluster to perform their functions. Firewalld is installed by default on CentOS. Open the following ports by running the commands listed below.

On the master node, run:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload

At this point, run these commands on each worker node:

sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload

Update Iptables Settings

Set the value of net.bridge.bridge-nf-call-iptables to 1 in the sysctl configuration. This ensures that bridged packets are properly processed by iptables during filtering and port forwarding.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

Disable SELinux

Containers must access the host filesystem. Set SELinux to permissive mode, which stops it from enforcing its policies while still logging violations.

Use the following commands to put SELinux in permissive mode:

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
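You can confirm the change took effect with getenforce, which should now print Permissive:

getenforce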

Disable SWAP

Finally, we must disable SWAP to allow kubelet to function properly:

sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a

Create a cluster with kubeadm

Initialize the cluster with the following command:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The process may take several minutes to complete depending on network speed. When it finishes, the command prints a kubeadm join command. Make a note of it; you will use it to join the worker nodes to the cluster later.

Manage the cluster as a standard user

To use the cluster you must be able to log in as a standard user. Run the following set of commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Set Pod Network

The pod network allows nodes within the cluster to communicate with each other. Several Kubernetes network options are available. Use the following command to install the flannel pod network add-on:

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If you decide to use flannel, adjust the firewall rules to allow traffic on flannel’s default port 8285.
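Here is a hedged sketch of the corresponding firewalld rules, to be run on every node (flannel’s default udp backend uses port 8285/udp; its vxlan backend uses 8472/udp):

sudo firewall-cmd --permanent --add-port=8285/udp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload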

Verify cluster status

Verify the status of the nodes by running the following command on the master server:

sudo kubectl get nodes

Once a pod network is installed, you can confirm that it is working by checking that the CoreDNS pod is running by typing:

sudo kubectl get pods --all-namespaces

Add a worker node to the cluster

You can use the kubeadm join command on each worker node to connect it to the cluster.

sudo kubeadm join --discovery-token cfgrty.1234567890jyrfgd --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443

Replace the token, certificate hash, and API server address with the values from the kubeadm join command printed on your master node. Repeat this on each worker node in your cluster.

In conclusion 

You have successfully installed Kubernetes on CentOS and can now manage clusters on multiple servers.

This Kubernetes tutorial provides a good starting point for exploring the many options this platform has to offer. Use Kubernetes to autoscale your containers so you can spend less time micro-managing each one!
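As a hedged pointer in that direction, the built-in horizontal pod autoscaler can be attached to a deployment with a single command; the deployment name web and the thresholds are illustrative, and a metrics source such as metrics-server must be running:

kubectl autoscale deployment web --min=1 --max=5 --cpu-percent=80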


Installing Kubernetes locally: a practical mini-guide

Installation of Kubernetes: mini-guide dedicated to the local installation of Kubernetes

Installing and launching Kubernetes locally is the best way to see and understand all the components of Kubernetes and how they fit together.

Kubernetes runs almost everywhere: on different versions of Linux and on different cloud services (link to the Kubernetes Cloud Services article). This compatibility with operating systems used worldwide is one of its main strengths.

If you take a look at the official Kubernetes website, you will find a page documenting the status of the various distribution combinations, as well as a series of introductory guides that can help you perform a correct local installation of Kubernetes.

That said, Kubernetes can be deployed standalone; the only requirement is Linux with Docker installed, on your own machine or on a virtual machine.

Before getting into detail, let’s review the architecture of Kubernetes because it will be useful during the reading: 

  • The master is the Kubernetes “control plane” and runs a set of cluster-level components, including an API server that tools such as kubectl use to interact with the cluster.
  • Each node runs a kubelet service, which receives orders from the master, and a container runtime (such as Docker, though not limited to it) that it drives to manage container requests. Let’s see this in action and describe the components further as we set them up.

Local installation of Kubernetes 

As previously described in the introductory guide, there are just three steps to obtain a standalone cluster that is up and running. Let’s go through them together.

Step one: run etcd 

docker run -d \
    --net=host \
    gcr.io/google_containers/etcd:2.0.9 \
    /usr/local/bin/etcd \
        --addr=127.0.0.1:4001 \
        --bind-addr=0.0.0.0:4001 \
        --data-dir=/var/etcd/data

Kubernetes reliably stores its master state and configuration in etcd. The various master components “watch” this data and act on it, for example by launching a new container to maintain the desired number of replicas.

Step two: run the master

docker run -d \
    --net=host \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jetstack/hyperkube:v0.20.1 \
    /hyperkube kubelet \
        --api_servers=http://localhost:8080 \
        --v=2 \
        --address=0.0.0.0 \
        --enable_server \
        --hostname_override=127.0.0.1 \
        --config=/etc/kubernetes/manifests

This step executes a kubelet container, which in this local deployment in turn executes the cluster-level components that form the main Kubernetes “control plane.”

  • API server: provides the Kubernetes RESTful API to manage cluster configuration, backed by the etcd data store.
  • Scheduler: places pods on nodes based on rules (e.g., labels). At this stage, the scheduler is simple but, like most components in Kubernetes, it is configurable.
  • Controller Manager: manages all cluster-level functions, including endpoint creation/updating, node discovery, management and monitoring, and pod management.

Step three: run the service proxy

docker run -d \
   --net=host \
   --privileged \
   jetstack/hyperkube:v0.20.1 \
   /hyperkube proxy \
        --master=http://127.0.0.1:8080 \
        --v=2

The proxy (kube-proxy) runs on each node and provides simple network proxying and load balancing. It allows services to be exposed with a stable network address and name.

At this point we should have a locally running Kubernetes cluster. To make sure everything is okay, use docker ps to check the running container instances; the output should look like the table below (several columns have been omitted for brevity):

IMAGE                                  COMMAND
jetstack/hyperkube:v0.20.1             /hyperkube proxy
jetstack/hyperkube:v0.20.1             /hyperkube scheduler
jetstack/hyperkube:v0.20.1             /hyperkube apiserver
jetstack/hyperkube:v0.20.1             /hyperkube controller
gcr.io/google_containers/pause:0.8.0   /pause
jetstack/hyperkube:v0.20.1             /hyperkube kubelet
gcr.io/google_containers/etcd:2.0.9    /usr/local/bin/etcd


To interact with the cluster we need to use the kubectl CLI tool. It can be downloaded pre-built directly from Google (note 0.20.1 is the latest version at the time of writing).

wget https://storage.googleapis.com/kubernetes-release/release/v0.20.1/bin/linux/amd64/kubectl
chmod u+x kubectl
sudo mv kubectl /usr/local/bin/

The fastest way to start a container in the cluster is the kubectl run command, referring to the container image by name. For example, to start an Nginx container:

kubectl run web --image=nginx

Although this is convenient and suitable for experimentation, resources such as pods are generally created using configuration artifacts, which have the advantage of being versionable with Git, facilitating rollbacks, for example.

If YAML (or JSON) is used, a pod and its desired state are declared; it is then the job of Kubernetes to ensure that this pod always exists according to this specification until it is deleted.

Here is a simple example of configuring a pod for that nginx pod (file nginx-pod.yml).

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx

This runs the same container image (nginx) as above, but here the resource configuration specifies some additional metadata, e.g. a label (key/value pair) that gives this pod the label app=web.

kubectl create -f nginx-pod.yml

In the background, Kubernetes will instruct a node to pull the container image and start the container.

If you run kubectl again a few moments later, you should see the nginx pod running.

$ kubectl get pods

POD    IP           CONTAINER(S)   IMAGE(S)   HOST                  LABELS    STATUS    CREATED
web1   172.17.0.1   nginx          nginx      127.0.0.1/127.0.0.1   app=web   Running   About a minute

As requested, an Nginx container is now running in a pod (web1 in this case) and has been assigned an IP address. In Kubernetes, each pod receives an IP address from an internal network, which means pods can communicate with one another. In this case, however, we deployed Kubernetes with Docker using the host network (--net=host), so these IP addresses are accessible locally.
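To reach a pod through a stable address, you could additionally expose it as a service. A hedged sketch, assuming a reasonably recent kubectl and the nginx pod from the manifest above (the service name web-svc is hypothetical):

kubectl expose pod nginx --port=80 --name=web-svc   # create a service in front of the pod
kubectl get services                                # the new service should be listed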

Kubernetes Cloud: Cloud services for Kubernetes, practical mini guide

Kubernetes Cloud: let’s see up close Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS)

The most popular cloud services for Kubernetes are without doubt Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS).

These services are called “managed”. What do we mean by “managed”? Essentially, you upload a configuration file and the provider provisions and operates the entire server-side Kubernetes architecture for you.

Let’s look at them up close and see how useful they can be for working with Kubernetes at the highest level.

Amazon Elastic Kubernetes Service (EKS)

Amazon Elastic Kubernetes Service is a relatively new service; released in 2018, it has grown steadily and substantially ever since.

With EKS you can launch, run, and scale Kubernetes applications in the AWS cloud or on-premises. EKS is based on upstream Kubernetes and is available in multiple AWS Regions, that is, data centers scattered all over the world.

EKS also includes integrated security and encryption options, as well as automation of key Kubernetes management activities such as patching, node provisioning, and updates. These include automatic upgrades, integration with CloudWatch for logging, CloudTrail for auditing, and IAM for access authorization. Moreover, to maximize functionality for its customers, AWS contributes to the open-source K8s codebase.

More recently, AWS introduced a new open-source K8s distribution called EKS Distro, a distribution option for Amazon EKS that lets you create and manage Kubernetes clusters on your own infrastructure, including virtual machines.

Google Kubernetes Engine (GKE)

Google Kubernetes Engine was the first managed Kubernetes cloud service to establish itself in the market. It is a managed environment for deploying and managing containerized applications on secure, flexible Google infrastructure.

GKE is today considered one of the most advanced Kubernetes platforms on the market. Designed first and foremost for Google Cloud, it can also be deployed in hybrid or on-premises environments. GKE greatly simplifies cluster creation and offers advanced cluster management features, among them load balancing, autoscaling, automatic upgrades, node auto-repair, logging, and monitoring.

Azure Kubernetes Service (AKS) 

Microsoft AKS is gradually becoming one of the most widely used managed Kubernetes services. Like GKE and EKS, AKS offers its users an upstream K8s environment with automatic updates and cluster monitoring, which simplifies the deployment and management of Kubernetes operations. AKS offers several ways to provision clusters: web console, command line, Azure Resource Manager, Terraform, and more.

Note that the Azure Kubernetes service (previously known as ACS, Azure Container Service) was born as an orchestrator-agnostic platform that also supported Kubernetes. In late 2017, Microsoft decided to offer managed Kubernetes services and consequently deprecated ACS, continuing its Kubernetes support through AKS.

How Kubernetes works: operation and structure

How Kubernetes works: a mini guide to one of today’s most important tools for developers.

How does Kubernetes work? As we point out in this mini guide, Kubernetes is an important tool for developers who want to expand their skill set and who take their profession seriously. 

Let’s start by defining exactly what Kubernetes is (What is Kubernetes), that is, an open-source container-orchestration platform, designed for running large-scale distributed applications and services.


Docker vs Kubernetes: let’s see how they differ

Docker vs Kubernetes: let’s see how they differ and why it sometimes gets a little confusing

We often hear people asking whether it’s better to use Docker or Kubernetes, as if you had to choose one or the other. But that’s like comparing apples and oranges. If you’ve asked yourself this too, don’t worry: it’s a very common misunderstanding.
