
This article builds on my previous one. Here we will learn how to configure a load balancer on AWS using Ansible.

First, let’s understand what a load balancer is.

A load balancer does the work of routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance.

A load balancer performs the following functions:

  • Distributes client requests or network load efficiently across multiple servers
  • Ensures high availability and reliability by sending requests only to servers that are…
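The article goes on to set this up with Ansible. As a rough illustration of the idea, a play along these lines could install and configure HAProxy as the load balancer — the host-group names (`lb`, `webservers`) and template path are my assumptions, not the article’s actual playbook:

```yaml
# Hypothetical sketch: install HAProxy on a load-balancer host and point it
# at the web servers. Group names and the template path are placeholders.
- hosts: lb
  become: yes
  tasks:
    - name: Install HAProxy
      package:
        name: haproxy
        state: present

    - name: Deploy HAProxy configuration listing the web-server backends
      template:
        src: haproxy.cfg.j2      # the template would loop over groups['webservers']
        dest: /etc/haproxy/haproxy.cfg
      notify: restart haproxy

  handlers:
    - name: restart haproxy
      service:
        name: haproxy
        state: restarted
```

The template-plus-handler pattern means the load balancer only restarts when the backend list actually changes.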



This article covers the integration of Ansible with AWS. Here I have used an AWS EC2 instance as my base OS to launch my web page, and this setup is created and managed by Ansible modules, playbooks, and roles.

Task Description:

♦️ Provision EC2 instance through Ansible.

♦️ Retrieve the IP Address of instance using dynamic inventory concept.

♦️ Configure the web-server through Ansible and deploy the web page to the root directory.

To begin the practical, log in to your RedHat VM and install the boto library using the command:

pip3 install boto

Now, to provision the EC2 instance, we need to create an IAM user (here I have given it administrative powers) and generate its access keys. …
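With boto installed and the IAM access keys in hand, the provisioning step could be sketched with Ansible’s `ec2` module roughly as follows — the AMI ID, key name, security group, and region below are placeholders, not values from this article:

```yaml
# Hypothetical sketch: launching an EC2 instance from the control node.
# image, key_name, group, and region are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Launch an EC2 instance
      ec2:
        key_name: mykey
        instance_type: t2.micro
        image: ami-0123456789abcdef0
        region: ap-south-1
        group: default
        count: 1
        wait: yes
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
```

Passing the keys as variables (e.g. from an Ansible vault) keeps the credentials out of the playbook itself.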


Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service.

To create a Kubernetes cluster and launch WordPress and MySQL on top of it, the following tools are required:

  • aws cli
  • kubectl
  • eksctl

Creating a Kubernetes cluster:

First, an Amazon IAM user is created with full administrator permissions. Download the credentials file and then use it to configure the AWS CLI.


To launch a cluster we need to write a YAML file named cluster.yml as follows:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-cluster
  region: ap-south-1
nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 2
    ssh:
      publicKeyName: mykey1122
  - name: ng-2
    instanceType: t2.small
    desiredCapacity: 1
    ssh:
      publicKeyName: mykey1122
  - name: ng-mixed
    minSize: 2
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.small", "t3.medium"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName…



A Jenkins master can operate by itself both managing the build environment and executing the builds with its own executors and resources. If you stick with this “standalone” configuration you will most likely run out of resources when the number or the load of your projects increases.

The “jenkins” user that Jenkins uses to run jobs has full permissions on all Jenkins resources on the master; this introduces a security issue when jobs are executed on the master’s executors.

For these reasons, Jenkins has a master/slave mode where we can configure other Jenkins machines to be slave machines to take the load off the master Jenkins server. …



In this article, I have created a Docker container on a managed node using Ansible and deployed a web page from that container.

Let’s see how it’s done…

First, we add the IP of the managed node to the inventory on the control node.
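From there, a play against that inventory group could start Docker and launch the web-serving container. This is only a sketch under assumed names — the group name `managed`, the `httpd` image, the port mapping, and the `/webpages` directory are my choices, not the article’s:

```yaml
# Hypothetical sketch: start Docker on the managed node and run an httpd
# container that serves a web page from a mounted directory.
- hosts: managed
  become: yes
  tasks:
    - name: Start the Docker service
      service:
        name: docker
        state: started

    - name: Run a web-server container exposing port 80
      docker_container:
        name: webserver
        image: httpd
        state: started
        ports:
          - "8080:80"
        volumes:
          - /webpages:/usr/local/apache2/htdocs
```

The volume mount means the web page lives on the managed node’s filesystem, so it survives the container being recreated.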


In this article we create Jenkins jobs to configure CI/CD with Kubernetes. Jenkins is started in a Docker container and uses SSH to reach Kubernetes on the base OS.

We will first build a docker image using Dockerfile to run Jenkins. The Dockerfile can be made as follows:

FROM centos:7
RUN yum install wget -y
RUN yum install sudo -y
RUN yum install git -y
RUN wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
RUN rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
RUN yum install java-11-openjdk.x86_64 …


In this blog I have deployed two monitoring tools, Prometheus and Grafana, on top of Kubernetes. The main issue addressed is that when a pod gets deleted, its data is also lost; to resolve this, I used the PVC feature of Kubernetes to make the data persistent.


Let us learn a little about PVs, PVCs, Prometheus, and Grafana:

PV: A Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.

PVCs: A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. …
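As a minimal illustration of the pattern, a claim like the following could be mounted by the Prometheus pod so its data survives pod restarts — the claim name, size, and access mode here are my own choices, not the article’s manifest:

```yaml
# Illustrative sketch: a claim for 5Gi of storage to hold Prometheus data.
# Name, storage size, and access mode are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Once bound to a PV, the claim is referenced from the pod spec as a volume, and the data directory is mounted from it rather than from the container’s ephemeral filesystem.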



Kubernetes manages running pods by itself, so we do not need to monitor the pods or handle the load balancing ourselves (orchestration). It also provides a persistent-volume feature that helps us make our data persistent.

Here, in this blog, I have shared how to create a continuous integration and deployment pipeline using Kubernetes, GitHub, and Jenkins. As soon as the developer uploads code to GitHub, this pipeline automatically starts a container from the image with the respective language interpreter installed to deploy the code on top of Kubernetes (e.g. …


Creating a machine learning model requires setting up the environment, making changes to the model, compiling it, and training it again and again. Because of this huge amount of manual work, many machine learning projects are never implemented.

So, I made an effort to find a solution to this issue.

PROBLEM STATEMENT: 1. Create a container image that has Python3 and Keras or NumPy installed, using a Dockerfile.

2. When we launch this image, it should automatically start training the model in the container.

3. Create a job chain of job1, job2, job3, job4, and job5 using the Build Pipeline plugin in…
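For step 1, a Dockerfile along these lines would do — the training-script name `train.py` is a placeholder, and the exact package set is my assumption:

```dockerfile
# Hypothetical sketch of the image from step 1: Python3 with Keras/NumPy,
# which starts training automatically when the container launches.
FROM centos:7
RUN yum install -y python3
RUN pip3 install numpy keras
COPY train.py /root/train.py
CMD ["python3", "/root/train.py"]
```

Putting the training command in `CMD` is what makes step 2 work: launching the container immediately kicks off training with no manual step.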


Training a facial recognition model from scratch takes a lot of time to learn the weights. So I used the concept of transfer learning, training my model from a pre-trained MobileNet model to save time. The low-level features extracted by this model are generic, so retraining the model takes much less time.

For my model I froze the layers already present in the model and added a customized fully connected layer (FCL) at the end. This reduced the amount of data my model needed.

I used a dataset containing faces of five celebrities, separated into training and testing (validation) image folders. …

Isha Jain
