DEPLOYING MULTI-NODE KUBERNETES CLUSTER ON AWS USING ANSIBLE AUTOMATION

Sarvjeet Jain · Apr 28, 2021

Welcome, everyone! Here I will cover “HOW TO DEPLOY A MULTI-NODE KUBERNETES CLUSTER ON AWS CLOUD USING ANSIBLE AUTOMATION”. I have tried my best to explain each and every step of this practical very clearly. This task looks simple, but trust me, it is not that simple. So, without wasting time, let's start implementing this project.

TASK-DESCRIPTION:-

🔅 Create an Ansible Playbook to launch 3 AWS EC2 instances.
🔅 Create an Ansible Playbook to configure Docker on those instances.
🔅 Create Playbooks to configure the K8s Master and K8s Worker Nodes on the above-created EC2 instances using kubeadm.
🔅 Convert the Playbooks into Roles and upload those Roles.
🔅 Also upload all the YAML code to your GitHub repository.

PRE-REQUISITE:

Before jumping into the coding part, here are some basic pre-requisites to understand this task:

  • To understand the complete steps of this task, you should have some basic knowledge of AWS Cloud EC2 instances, Docker, Kubernetes multi-node clusters, and Ansible Roles.
  • For this task, I have used the RHEL 8 Linux operating system as my Ansible Controller Node. To make things easier, I have used the PuTTY program to connect to my operating system.
  • I have installed the Ansible, boto, and boto3 tools inside the Controller Node of Ansible. Commands:-

pip3 install ansible

pip3 install boto

pip3 install boto3

  • Make sure you have an account on AWS Cloud.

PRACTICAL:

Let’s start running the commands & writing the codes…

I have uploaded my Ansible Roles to GitHub; you will find the link at the end of this blog. I will explain each and every bit of this task in this blog, so stay tuned.

STEP-1:- Create one workspace or folder inside your Controller Node using this command:-

mkdir /task19

We will create all the files and Roles inside this workspace.

STEP-2:- Inside the workspace, create one folder named “roles”, and inside this folder run the following commands. These commands will create the Ansible Roles inside this directory:-

cd /task19

mkdir roles

cd roles

ansible-galaxy init ec2-instance

ansible-galaxy init k8s_master

ansible-galaxy init k8s_slave

This creates three Ansible Roles: one for launching the AWS EC2 instances, a second for configuring the Kubernetes Master Node, and a third for configuring the Kubernetes Slave Node.

STEP-3:- We are going to create one local configuration file inside the “task19” folder, and whatever Ansible commands we want to run in the future, we will run from this folder. Only then will Ansible be able to read this local configuration file and work accordingly.

Create an “ansible.cfg” file inside the “task19” directory and put this content in it:-

vim ansible.cfg
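A minimal sketch of what this file can look like (the privilege-escalation section is my assumption since the Roles install packages; the paths match the workspace we created above):

[defaults]
roles_path = /task19/roles
host_key_checking = False
ask_pass = False
remote_user = ec2-user
private_key_file = /task19/task19.pem

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False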

  • Here you can see some common keywords like “host_key_checking”, “roles_path”, “ask_pass”, etc. You should be familiar with all these common keywords, as I already mentioned in the pre-requisites.
  • The “private_key_file” keyword is used for the AWS key pair. When Ansible logs in to the AWS instances via SSH to set up K8s, it needs this private key file. Also, the default remote user of an EC2 instance is “ec2-user”.

STEP-4:- Log in to your AWS account and search for the IAM service; this service is used to create a new user for our account. Click on Add User, give it some limited powers, and create it. It will provide the Access Key and Secret Key of the user, which we will use to log in to our account using Ansible.

STEP-5:- Go to EC2 -> Key Pairs and create one new key pair with whatever name you want to give, say “task19”. Download it (“task19.pem”) and put it in our workspace. Then run this command:-

chmod 400 task19.pem

STEP-6:- Create one Ansible Vault file, where we will put all our user credentials like “access_key” and “secret_key”, which Ansible uses while logging in. Command:-

ansible-vault create cred.yml

It will ask you to create a password; create it and put in the credentials like this:-

access_key: XXXXXXXXXX

secret_key: XXXXXXXXXXX

Now we are ready to work on Roles.

STEP-7:- Go to “/roles/ec2-instance/tasks” and start editing the “main.yml” file. Here we put all our code for launching the EC2 instances. Code:-
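A sketch of this tasks file is below; the task names and variable names (like “instance_tag” and “sg_id”) are my placeholders, reconstructed from the explanation that follows:

- name: Launch the EC2 instances for the cluster
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image_id }}"
    wait: yes
    count: 1
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    assign_public_ip: yes
    region: "{{ region }}"
    state: present
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      Name: "{{ item }}"
  register: instance
  loop: "{{ instance_tag }}"

- name: Add the Master Node IP to a dynamic host group
  add_host:
    hostname: "{{ instance.results[0].instances[0].public_ip }}"
    groupname: ec2_master

- name: Add the Slave Node IPs to a dynamic host group
  add_host:
    hostname: "{{ item.instances[0].public_ip }}"
    groupname: ec2_slave
  loop: "{{ instance.results[1:] }}"

- name: Wait for SSH to come up on the last Slave Node
  wait_for:
    host: "{{ instance.results[-1].instances[0].public_dns_name }}"
    port: 22
    state: started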

I have used many variables in this code; the values of these variables I have put inside “/ec2-instance/vars/main.yml”. You will find this file at the end of this step.

  • Here I have used the “ec2” module of Ansible to launch the EC2 instances. You might be familiar with all the properties I used inside the ec2 module if you have worked with AWS Cloud before, so I am skipping the explanation of these properties.
  • I used the “register” property to store the output of the “ec2” module above in a variable named “instance”, and a “loop” because we have to launch more than one EC2 instance.
  • I also used the “add_host” module of Ansible. This module dynamically creates a host group, while the playbook is running, with the name we provide in “groupname”. I used JSON parsing to fetch the IPs of the launched instances for the “hostname” property.

I used two add_host tasks because we have to set up two kinds of nodes (master and slave), so don't get confused by it.

  • At last, I used the “wait_for” module; it pauses the play until the Public DNS name of the Slave Instance responds. As we know, launching an instance takes some time; that's why I used this module. It waits until SSH comes up.

“/ec2-instance/vars/main.yml”:-
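Something like the following (the AMI ID, subnet, and security group are placeholders; fill in values valid for your region):

instance_tag:
  - master
  - slave1
  - slave2
region: ap-south-1                # assumed region; use your own
instance_type: t2.micro
image_id: ami-xxxxxxxxxxxx        # an Amazon Linux 2 AMI for your region
key_name: task19
subnet_id: subnet-xxxxxxxx
sg_id: sg-xxxxxxxx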

STEP-8:- Go to “/roles/k8s_master/tasks” and start editing the “main.yml” file. Here we put all the code for configuring the Master Node of Kubernetes. Code:-
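The first half of this file looks roughly like this (the repo URLs are the standard K8s community ones from the time of writing; task names are my own):

- name: Configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Install Docker, kubeadm and iproute-tc
  package:
    name: "{{ item }}"
    state: present
  loop: "{{ package_names }}"

- name: Start and enable the Docker and kubelet services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: "{{ service_names }}"

- name: Pull the images required by the Master Node
  command: kubeadm config images pull

- name: Change the Docker cgroup driver to systemd
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: Restart the Docker service
  service:
    name: docker
    state: restarted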

  • Here we need to install the kubeadm program on our Master Node to set up the K8s cluster, so I am adding the yum repository provided by the K8s community. As I am using Amazon Linux 2 for all the instances, we don't need to configure a repository for the Docker CLI.
  • Next, I used the “package” module to install the DOCKER, KUBEADM, and IPROUTE-TC software for configuring the Master Node.
  • Next, the “service” module starts the services of the DOCKER and KUBELET software. Here again I used a loop over the list called “service_names” to run the same module twice.
  • Next, I used the “command” module, because Ansible doesn't provide any module for running the kubeadm program. This command downloads all the images that the Master Node requires for configuration.
  • Next, the “copy” module changes the cgroup driver of Docker to “systemd”, because the kubelet can't work with Docker's default cgroup driver. Then, using the “service” module, we restarted the Docker service. I put the file “daemon.json” inside “/k8s_master/files”.

daemon.json:-
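This is the standard snippet for switching the cgroup driver:

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}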

  • Now, using the “command” module, we initialize the Kubernetes cluster, and then, using the “shell” module, we set up the “kubectl” command on our Master Node.
  • Next, using the “command” module, I deployed Flannel on the Kubernetes cluster so that it creates the overlay network setup.
  • Using “register”, I stored the output of one of the “command” tasks in a variable called “token”. This token variable contains the command that we need to run on the Slave Node so that it joins the Master Node.
  • The last “command” is for deleting the temporary file that gets created in RAM while configuring. (A sketch of these tasks follows this list.)
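Here is a sketch of these remaining Master Node tasks (the preflight flags and the temporary file path are my assumptions; the pod CIDR matches Flannel's default):

- name: Initialize the Kubernetes cluster
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

- name: Set up kubectl on the Master Node
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config

- name: Deploy Flannel to create the overlay network
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the Slave Nodes
  command: kubeadm token create --print-join-command
  register: token

- name: Delete the temporary file created during configuration
  command: rm -f /tmp/join-command.txt   # hypothetical path; adjust to what your play creates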

“/k8s_master/vars/main.yml”-
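For reference, the package and service lists used above:

package_names:
  - docker
  - kubeadm
  - iproute-tc
service_names:
  - docker
  - kubelet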

STEP-9:- Go to “/roles/k8s_slave/tasks” and start editing the “main.yml” file. Here we put all the code for configuring the Slave Nodes of Kubernetes. Code:-

  • Up to the restarting of the Docker service, all the steps are the same as for the Master Node.
  • On the Slave Node we don't need to initialize the cluster, and we also don't need to set up kubectl. The rest we do need, because on the Slave Node we also need the “kubeadm” command and Docker as the container engine.
  • Next, I used the “copy” module to create one configuration file called “/etc/sysctl.d/k8s.conf”, which enables certain networking rules on the slave. Then, to load the rules, we need to reload “sysctl”, and for that I used the “command” module. (See the sketch after this list.)
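These two tasks look roughly like this (the file content is the standard bridge-netfilter setting from the kubeadm docs; task names are my own):

- name: Allow bridged traffic to pass through iptables
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1

- name: Reload sysctl so the new rules take effect
  command: sysctl --system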

NOW THE MAIN PART OF THE CODE HAS COME: HERE WE HAVE TO GIVE THE TOKEN OF THE MASTER NODE TO THE SLAVE NODE FOR JOINING THE CLUSTER. BUT THE TOKEN IS STORED IN THE “TOKEN” VARIABLE OF AN ENTIRELY DIFFERENT ROLE (K8S_MASTER). SO HOW CAN WE SOLVE IT?

  • To solve it, I took help from the host group that we created while launching the EC2 instances. In the “hostvars” variable Ansible stores the facts of every host, and in the “groups” variable all the host groups are present. I gave my master group name, and since a group can contain many nodes, I took the first one with index 0, i.e. “hostvars[groups['ec2_master'][0]]”. This reaches into the Master Node's namespace, and by using JSON parsing, “['token']['stdout']”, we retrieve the value of the token for our Slave Node. (See the sketch below.)
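So the join task in “/k8s_slave/tasks/main.yml” becomes a one-liner like this (the group and variable names follow the ones used above):

- name: Join the Slave Node to the cluster using the Master's token
  command: "{{ hostvars[groups['ec2_master'][0]]['token']['stdout'] }}"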

The variable file for both the Master Node and the Slave Node is the same, so update “/k8s_slave/vars/main.yml” with the same variables. Also put “daemon.json” inside “k8s_slave/files”.

STEP-10:- Now it's time to create the final playbook that will run all our Roles. Create one playbook inside the workspace “task19”.

execute.yml:-
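A plausible version of this playbook (the host group names match the ones created by “add_host” earlier):

- hosts: localhost
  vars_files:
    - cred.yml
  roles:
    - ec2-instance

- hosts: ec2_master
  roles:
    - k8s_master

- hosts: ec2_slave
  roles:
    - k8s_slave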

  • Using the dynamic host group names, we run each Role on the right nodes.

STEP-11:- Command to run the playbook:-

ansible-playbook execute.yml --ask-vault-pass

If your Roles don't have any errors and you configured all the steps properly, then the playbook will run successfully.

Finally, our playbook has run successfully; now we can go to the AWS Cloud console and see whether the instances have launched or not.

STEP-12:- Connect to the Master Node and run this command to see whether the nodes have joined or not:-

kubectl get nodes

HURRAY! FINALLY, WE HAVE COMPLETED OUR TASK SUCCESSFULLY.

GITHUB REPO LINK:-

TBH, it took me 4 days to complete this task; it might look simple, but trust me, it is not. The JSON parsing and all took too much time.

THANK YOU SO MUCH FOR READING THIS ARTICLE. AND FOR MORE SUCH TYPE OF ARTICLE STAY CONNECTED.

Don’t forget to go through my LinkedIn profile:-
