HA Kubernetes with Kubespray

When it comes to creating a cluster on a cloud provider, there are quite a few choices. But if you want a consistent way to create a cluster on-premises, for example on vSphere, as well as on AWS, then Kubespray is your best bet. Kubespray builds Kubernetes clusters using Ansible playbooks, which means that no matter where your target instances are, you can use Kubespray to provision a Kubernetes cluster. Kubespray also lets you compose the components of a Kubernetes cluster, so you can pick and choose the solutions that make sense for a particular deployment.

Introduction

Many tools exist, apart from the managed offerings from the major cloud providers, that enable creating a Kubernetes cluster from scratch. Let’s look at some of these tools and the areas in which they shine. This will also help you decide whether Kubespray is the right choice for your use case, or which tool fits a specific use case better.

  • Kops was originally intended for provisioning Kubernetes clusters on AWS. It now supports additional cloud providers such as GCE and DigitalOcean with varying degrees of customizability. It remains one of the best ways to provision a cluster on AWS and provides end-to-end management of provisioning, cluster creation, and upgrades.
  • Kubeadm is one of the simplest ways to bootstrap a cluster on a set of machines that are already provisioned. Kubeadm assumes a set of best practices and handles the creation of the cluster; you will have to use a different tool to provision the infrastructure beforehand. To get a sense of Kubeadm, I highly recommend the Kubeadm Workshop, which creates a Kubernetes cluster on a set of Raspberry Pi boards.

Why Kubespray?

  • Consistency – no matter where you are setting up a Kubernetes cluster, the same method works, whether the target is AWS or vSphere.
  • Kubespray supports all major cloud providers and, more importantly, vSphere, OpenStack, and bare metal. Moreover, it comes with the vSphere cloud provider built in, which means you can natively integrate with vSphere storage and policy management.
  • As of this writing, Kubespray lets you configure one of six network plugins: Calico, Canal, Flannel, Weave, Contiv, and Cilium.
  • Kubespray supports creating an HA cluster on a variety of OS distributions.
  • Flexibility: Kubespray provisions infrastructure using Terraform and then configures the Kubernetes cluster components with Ansible. This gives you a lot of flexibility, and you can add custom steps or configurations as part of infrastructure provisioning or cluster creation.

Pre-requisites

We will use AWS as the example in this tutorial to set up a Kubernetes cluster. The machine from which we will run the installation process needs a few things in place:

  • Ansible v2.4 (or newer) and python-netaddr installed on the machine that will run the Ansible commands
  • Jinja 2.9 (or newer), required to run the Ansible playbooks
  • The target servers configured to allow IPv4 forwarding
  • Firewalls are not managed by Kubespray; you will need to implement your own rules as you normally would. To avoid issues during deployment, you should temporarily disable the firewall.
  • If Kubespray is run from a non-root user account, a suitable privilege-escalation method must be configured on the target servers, and the ansible_become flag (or --become / -b) should be specified.
  • Terraform 0.8.7 or newer for infrastructure provisioning
  • Python and pip installed on the machine
  • An AWS EC2 SSH key
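Before running anything, it helps to confirm your tools meet the minimum versions above. A minimal shell sketch of a version check, assuming GNU coreutils' sort -V is available; the version strings below are illustrative, not read from your system:

```shell
# version_ge A B succeeds when dotted version A >= B (relies on sort -V).
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Illustrative checks -- substitute the real version strings reported by
# `ansible --version`, `terraform version`, etc. on your machine.
version_ge "2.5.0" "2.4"    && echo "Ansible OK"
version_ge "0.11.7" "0.8.7" && echo "Terraform OK"
version_ge "2.10" "2.9"     && echo "Jinja OK"
```

Note that plain lexicographic comparison would get cases like 0.11.7 vs. 0.8.7 wrong, which is why sort -V is used here.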

As part of the cluster creation process, we will build an HA cluster. The following resources will be provisioned in AWS:

  • 3 EC2 instances for the Kubernetes masters (yes, we are going to create an HA cluster)
  • 3 EC2 instances for a highly available etcd
  • 2 EC2 instances for bastion hosts, since we don’t want our Kubernetes nodes to be accessible from outside
  • 3 EC2 instances that will act as worker nodes
  • An AWS load balancer for the API server
  • A VPC with public and private subnets

The following is a diagram of the infrastructure architecture that we will create.

[Diagram: aws_kubespray – the AWS infrastructure created for the cluster]

Setup the cluster

Provision the infrastructure

Clone the Kubespray GitHub repo and install the dependencies it needs. All commands hereafter are run from within the cloned Kubespray directory.

$ git clone git@github.com:kubernetes-incubator/kubespray.git
$ cd kubespray
$ sudo pip install -r requirements.txt 

Next, export your AWS credentials as environment variables, or edit Terraform’s credentials file at contrib/terraform/aws/credentials.tfvars:

export AWS_ACCESS_KEY_ID="www"
export AWS_SECRET_ACCESS_KEY="xxx"
export AWS_SSH_KEY_NAME="yyy"
export AWS_DEFAULT_REGION="zzz"

Let’s configure the Terraform variables. We can use the example file and change it to suit our needs: rename contrib/terraform/aws/terraform.tfvars.example to terraform.tfvars and update the details. An example file used to provision an HA cluster with 3 etcd nodes, 3 masters, and 3 worker nodes is shown below:

#Global Vars
aws_cluster_name = "kubespray"

#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public = ["10.250.224.0/20","10.250.240.0/20"]

#Bastion Host
aws_bastion_size = "t2.medium"

#Kubernetes Cluster

aws_kube_master_num = 3
aws_kube_master_size = "t2.medium"

aws_etcd_num = 3
aws_etcd_size = "t2.medium"

aws_kube_worker_num = 3
aws_kube_worker_size = "t2.medium"

#Settings AWS ELB

aws_elb_api_port = 6443
k8s_secure_api_port = 6443
kube_insecure_apiserver_address = "0.0.0.0"
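As a quick sanity check on the subnet sizing in this file, the four /20 subnets (two private, two public) exactly tile the /18 VPC block. A shell back-of-the-envelope:

```shell
# A /18 VPC contains exactly 2^(20-18) = 4 /20 subnets, which is what the
# tfvars above carve out (2 private + 2 public).
vpc_prefix=18
subnet_prefix=20
count=$(( 1 << (subnet_prefix - vpc_prefix) ))
echo "a /${vpc_prefix} block holds ${count} /${subnet_prefix} subnets"
```

If you grow the subnet count, widen the VPC CIDR (or shrink the subnets) accordingly, or terraform plan will fail.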

Let’s run terraform init to initialize the modules, followed by terraform plan. The modules that get initialized are:

  • module.aws-vpc
  • module.aws-elb
  • module.aws-iam

$ terraform init
$ terraform plan -out=aws_kubespray_plan

The plan command generates a file, aws_kubespray_plan, containing an execution plan for the infrastructure that will be created on AWS; if you are familiar with Terraform, this is its standard workflow. Next, let’s apply the plan so that the instances, VPC, and load balancers are provisioned:

$ terraform apply "aws_kubespray_plan"

Terraform automatically writes an Ansible inventory file called hosts for the created infrastructure into the inventory directory. Ansible will also generate an SSH config file, ssh-bastion.conf, for your bastion hosts after you run ansible-playbook for the first time; use it to connect to hosts over SSH through the bastion.
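Before moving on, it can be worth eyeballing which hosts landed in which inventory group. A hedged sketch of pulling one group out of an INI-style inventory; the group names and sample hosts below are illustrative assumptions, not output from this run:

```shell
# Illustrative only: the generated inventory groups hosts by role
# (e.g. kube-master, etcd, kube-node). The sample data below is made up;
# the real file is the hosts file Terraform wrote into inventory/.
cat > /tmp/hosts.sample <<'EOF'
[kube-master]
ip-10-250-192-10
ip-10-250-192-11
ip-10-250-192-12

[kube-node]
ip-10-250-200-20
EOF

# Print the hosts listed under [kube-master]: grab lines after the group
# header, stop at the next [section], skip blanks.
awk '/^\[kube-master\]$/{grab=1; next} /^\[/{grab=0} grab && NF' /tmp/hosts.sample
```

Point the same awk one-liner at the real inventory file to confirm that three masters were provisioned before running the playbook.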

Kubernetes Cluster with Flannel as CNI

The next step is to create the Kubernetes cluster on this infrastructure with the customizations you need. You can edit the group_vars files in inventory/mycluster/group_vars or pass flags on the CLI to customize specific settings. In this case, we will change the networking plugin from the default Calico to Flannel via the kube_network_plugin flag. Provide the appropriate path to the hosts file generated in the previous step and then execute the following command:

$ ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=flannel -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=~/path/to/ec2-key-file.pem
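Alternatively, the same setting can live in group_vars instead of being passed with -e. A minimal sketch; the exact file path is an assumption and may differ across Kubespray versions:

```yaml
# inventory/mycluster/group_vars/k8s-cluster.yml (path assumed)
# Use Flannel instead of the default Calico:
kube_network_plugin: flannel
```

Keeping such overrides in group_vars makes the cluster configuration reproducible, rather than depending on flags remembered at the command line.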

Conclusion

Kubespray provides a highly customizable yet consistent way to create Kubernetes clusters across infrastructure platforms. It also lets people with existing knowledge of Terraform and Ansible leverage familiar tools to create clusters. Kubespray is especially useful for easily provisioning a cluster on vSphere and other on-premises setups.
