A Jenkins master can operate by itself, both managing the build environment and executing builds with its own executors and resources. If you stick with this “standalone” configuration, you will most likely run out of resources as the number or load of your projects increases.
The “jenkins” user that Jenkins uses to run jobs has full permissions on all Jenkins resources on the master, which introduces a security issue when jobs execute on the master’s executors.
For these reasons, Jenkins has a master/slave mode where we can configure other Jenkins machines to be slave machines…
Hello everyone! I am back with another demonstration of integrating different tools, in a simple yet thorough manner.
Terraform and Ansible are two independent tools, each with its own purpose, but the fact that they can be integrated to solve typical use cases, and the way they complement each other, makes them even more popular.
Terraform is an infrastructure-as-code (IaC) tool used for building, changing, and versioning infrastructure. It works with 500+ providers whose resources can be used to provision infrastructure.
This article is one step ahead of my previous article. Here we will learn how to configure a load balancer on AWS using Ansible.
First, let’s understand what a load balancer is.
A load balancer routes client requests across all servers capable of fulfilling those requests, in a way that maximizes speed and capacity utilization and ensures that no single server is overworked, which could degrade performance.
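The routing behaviour described above can be sketched with a minimal round-robin nginx configuration. This is only an illustration of the concept, not part of the AWS setup covered later; the backend addresses and port are placeholders:

```nginx
# Hypothetical nginx config: round-robin load balancing across three backends.
upstream webapp {
    # Placeholder backend addresses; requests rotate through them in turn.
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;
    location / {
        # Each incoming request is forwarded to the next backend in the pool.
        proxy_pass http://webapp;
    }
}
```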
A load balancer performs the following functions:
This article covers the integration of Ansible with AWS. Here I have used an AWS EC2 instance as my base OS to launch my web page; this setup is created and managed by Ansible modules, playbooks, and roles.
♦️ Provision an EC2 instance through Ansible.
♦️ Retrieve the IP address of the instance using the dynamic inventory concept.
♦️ Configure the web server through Ansible and deploy the web page to the root directory.
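The second step above can be done in more than one way; as a rough sketch, the amazon.aws.aws_ec2 inventory plugin (one common alternative to the older ec2.py script) might be configured like this — the region and tag values are placeholders:

```yaml
# Hypothetical aws_ec2.yml — dynamic inventory via the amazon.aws.aws_ec2 plugin.
plugin: amazon.aws.aws_ec2
regions:
  - ap-south-1          # placeholder region
filters:
  tag:Name: webserver   # only pick up instances carrying this (assumed) tag
hostnames:
  - ip-address          # use the public IP as the inventory hostname
```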
To begin with the practical, log in to your RedHat VM and install the boto library using the command
pip3 install boto
Now to provision the EC2 instance we…
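As a rough sketch of such a provisioning playbook (the AMI ID, key pair, region, and security group are placeholders, and AWS credentials are assumed to be supplied via environment variables):

```yaml
# Hypothetical provision.yml — launch one EC2 instance with the ec2 module,
# which is backed by the boto library installed above.
- hosts: localhost
  tasks:
    - name: Provision an EC2 instance
      ec2:
        key_name: mykey                # placeholder key pair name
        instance_type: t2.micro
        image: ami-0123456789abcdef0   # placeholder AMI ID
        region: ap-south-1             # placeholder region
        group: webserver-sg            # placeholder security group
        wait: yes
        count: 1
        instance_tags:
          Name: webserver              # assumed tag, reused by the inventory
```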
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service.
The tools required for creating a Kubernetes cluster and launching WordPress and MySQL on top of it are as follows:
Creating a Kubernetes cluster:
First, an Amazon IAM user is created with full administrator permissions. Download the credentials file and then use it to configure the CLI.
For launching a cluster, we need to write a YAML file named cluster.yml as follows:
- name: ng-1
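The excerpt above shows only one line of the file. A minimal cluster.yml along those lines, assuming the eksctl tool is used to create the cluster, might look like this (the cluster name, region, instance type, and node count are assumptions):

```yaml
# Hypothetical cluster.yml for `eksctl create cluster -f cluster.yml`.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mycluster        # placeholder cluster name
  region: ap-south-1     # placeholder region

nodeGroups:
  - name: ng-1           # the node group named in the excerpt
    instanceType: t2.micro
    desiredCapacity: 2   # assumed number of worker nodes
```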
In this setup we create Jenkins jobs to configure CI/CD with Kubernetes; Jenkins runs inside a Docker container and reaches Kubernetes on the base OS over SSH.
We will first build a Docker image from a Dockerfile to run Jenkins. The Dockerfile can be written as follows:
RUN yum install wget -y
RUN yum install sudo -y
RUN yum install git -y
RUN wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
RUN rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
RUN yum install java-11-openjdk.x86_64 …
In this blog I have deployed two monitoring tools, Prometheus and Grafana, on top of Kubernetes. The main issue solved is that when a pod gets deleted, its data is also lost; to resolve this, I used the PVC feature of Kubernetes to make the data persistent.
Let us first learn something about PVs, PVCs, Prometheus, and Grafana:
PV: A Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
PVCs: A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a…
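To make the claim idea concrete, a minimal PVC manifest might look like this (the name and requested size are placeholders):

```yaml
# Hypothetical pvc.yml — a user's request for 5Gi of storage,
# which Kubernetes binds to a matching PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc    # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce       # volume mountable read-write by a single node
  resources:
    requests:
      storage: 5Gi        # assumed size
```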
Kubernetes manages the running pods by itself, so we do not need to monitor the pods or handle load balancing ourselves (orchestration). It also provides a persistent-volume feature that helps us make our data persistent.
Here, in this blog, I have shared how to create a continuous integration and deployment pipeline using Kubernetes, GitHub, and Jenkins. As the developer uploads the code to GitHub, this pipeline automatically starts a container from the image with the respective language interpreter installed and deploys the code on top of Kubernetes (e.g. …
Creating a machine learning model requires setting up the environment, making changes to the model, compiling it, and training it again and again. Hence, many machine learning projects are never implemented because of this huge amount of manual work.
So, I made an effort to find a solution to this issue.
PROBLEM STATEMENT: 1. Create a container image that has Python3 and Keras or NumPy installed, using a Dockerfile.
2. When we launch this image, it should automatically start training the model in the container.
3. Create a job chain of job1, job2, job3, job4 and job5 using build…
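For step 1 above, a Dockerfile along these lines could work; the base image, package list, and script name are assumptions, not the author's actual file:

```dockerfile
# Hypothetical Dockerfile — Python3 with Keras and NumPy preinstalled.
# Base image is an assumption.
FROM centos:7
RUN yum install -y python3
# Libraries named in the problem statement.
RUN pip3 install numpy keras
# Assumed training script copied into the image.
COPY model.py /root/model.py
# Step 2: start training automatically when the container launches.
CMD ["python3", "/root/model.py"]
```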
Cloud DevOps enthusiast who loves to integrate different tools to solve challenges.