Continuous Integration (CI)

Continuous Integration using Jenkins, Nexus, SonarQube, and Slack

Scenario:

The current SDLC follows the Agile methodology.

  • Developers make regular code changes.
  • These commits need to be built and tested.
  • Usually, the Build and Release team does this job, or it is the developers' responsibility to merge and integrate the code.

Problem:

Issues with the current situation:

  • In an Agile SDLC, there are frequent code changes.
  • Code is tested far less frequently, so any bug is discovered much later.
  • Bugs and errors therefore accumulate in the code.
  • Developers need to rework the code to fix these bugs and errors.
  • The Build and Release process is manual.
  • There are inter-team dependencies.

Solution:

  • Build and test every commit.
  • Automate the Build and Release process.
  • Notify the developers automatically whenever a build completes.
  • That way, if any error is found, the developers stop further development and focus on the bugs and errors surfaced by the current build and test.

This process is called Continuous Integration.

The input to this process is a code commit; the output is a well-tested artifact.

These well-tested artifacts are then deployed to servers for further software testing (performance, load, etc.). If everything looks good, the build can be promoted to production. Achieving this requires connecting quite a few tools.
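At a high level, the pipeline simply chains these steps and stops at the first failure. Below is a minimal sketch of that chain as plain shell commands; the repository URL, SonarQube host, and Slack webhook URL are placeholders, not values from this setup, and the Sonar step assumes the SonarQube Scanner for Maven is configured.

#!/bin/bash
set -e                                                           # stop at the first failing step (the "break" at every level)

git clone https://github.com/example/myapp.git && cd myapp       # fetch the latest commit (placeholder repo)
mvn -B clean verify                                              # build and run the unit tests
mvn -B checkstyle:checkstyle                                     # static code analysis with Checkstyle
mvn -B sonar:sonar -Dsonar.host.url=http://sonarqube-host:9000   # publish the analysis to the SonarQube server
mvn -B deploy                                                    # upload the versioned artifact to Nexus (via distributionManagement in pom.xml)
curl -X POST -H 'Content-type: application/json' \
     --data '{"text":"Build and tests passed"}' \
     https://hooks.slack.com/services/XXX/YYY/ZZZ                # notify the team via a Slack incoming webhook

In Jenkins, these steps become individual jobs (or stages) chained into one pipeline, which is exactly what the flow of execution further below sets up.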

Benefits of CI Pipeline:

  • Short MTTR (mean time to repair).
  • No human intervention; works well with Agile teams.
  • Fault detection is quick.

Tools to create sample CI Pipeline:

Jenkins: Continuous Integration server and the main hero!

Git/GitHub: Version Control

Maven: Build tool (e.g., to build Java programs)

Checkstyle (outdated): Code analysis tool

Slack: Notification

Sonatype Nexus: A software repository, used both to store our artifacts and to serve the dependencies Maven downloads (a sketch of the upload side follows this tool list).

SonarQube server: Code analysis server. We scan our code with the SonarQube scanner and Checkstyle, then publish the results to the SonarQube server dashboard.

AWS EC2: We will use AWS EC2 instances to set up the Jenkins, Nexus, and SonarQube servers.
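To make the Nexus item above concrete, here is a rough sketch of the artifact-upload side (the dependency-download side is just Maven pointing its repository settings at a Nexus proxy repository). The host name, repository name, server id, and coordinates are placeholders for this setup:

# Upload a versioned artifact to a hosted Maven repository in Nexus.
# "nexus-host", "maven-releases", and the group/artifact/version values are placeholders;
# "nexus" must match a <server> entry with credentials in Maven's settings.xml.
mvn deploy:deploy-file \
  -Durl=http://nexus-host:8081/repository/maven-releases/ \
  -DrepositoryId=nexus \
  -Dfile=target/myapp-1.0.0.war \
  -DgroupId=com.example \
  -DartifactId=myapp \
  -Dversion=1.0.0 \
  -Dpackaging=war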

The architecture of the Continuous Integration pipeline:




There are gates ("breaks") at every level, so if anything fails, the process stops automatically and does not continue to the next phase.

The flow of Execution:

  1. Log in to the AWS account.
  2. Create a login key pair.
  3. Create security groups for the Jenkins, Nexus, and SonarQube VMs.
  4. Create the 3 EC2 instances for Jenkins, Nexus, and SonarQube with user data so that they are provisioned automatically (see the sketch after this list).
  5. Jenkins post-installation: install the required plugins.
  6. Nexus repository setup: create 3 repositories for Maven. Nexus stores the dependencies, and Maven downloads them automatically from the Maven repository hosted in Sonatype Nexus. We also use a repository there to store the software once the artifact is built. So we use Sonatype Nexus for two reasons: to download the Maven dependencies and to upload our artifacts. The stored artifacts are versioned, giving us various versions of the software to deploy.
  7. SonarQube post-installation.
  8. Jenkins steps: build job, Slack notification integration, Checkstyle code analysis job, Sonar integration setup, Sonar code analysis job, and artifact upload job.
  9. Finally, connect all the jobs with a build pipeline.
  10. Set an automatic build trigger (e.g., on any code change, Jenkins detects it and the entire process runs).
  11. Test this from our IDE (IntelliJ): make a code change and commit it to see the entire process run.
  12. Clean up the AWS resources (otherwise they may incur costs).
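Item 4 mentions user data; here is a rough sketch of what such a provisioning script could look like for the Jenkins instance on an Ubuntu AMI. The package repository URL and key location follow the Jenkins documentation of the time and may have changed since; the Nexus and SonarQube instances get analogous scripts.

#!/bin/bash
# EC2 user data: runs once at first boot and provisions Jenkins.
apt-get update -y
apt-get install -y openjdk-8-jdk                      # Jenkins needs a JDK
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -
echo "deb https://pkg.jenkins.io/debian-stable binary/" > /etc/apt/sources.list.d/jenkins.list
apt-get update -y
apt-get install -y jenkins
systemctl enable --now jenkins                        # the Jenkins UI listens on port 8080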





What is an Image

App binaries and dependencies.


Metadata about the image data and how to run the image.


Official Definition: "An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime."


It is not a complete OS: there's no kernel and no kernel modules inside.


It's really just the binaries that your application needs, because the host provides the kernel. That's one of the distinct characteristics of containers that makes them different from a virtual machine: a container is not booting up a full operating system.

It's really just starting an application, and that image can be really small.

It can be as small as a single file.

Or you can make it as big as a CentOS or Ubuntu distro with apt, Apache, PHP, and more installed.
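A quick way to see the "no kernel inside" point for yourself: a container reports the host's kernel, and image sizes range from a few megabytes to a few hundred.

docker container run --rm alpine uname -r   # prints the host's kernel version; the image ships no kernel
docker image ls alpine                      # ~5 MB image
docker image ls centos                      # ~200 MB image with a full distro userland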


The Mighty Hub: Using Docker Hub Registry Images:

Basics of Docker Hub (hub.docker.com)
Find official and other good public images
Download images and basics of image tags

Register and log in to hub.docker.com and search for images (e.g., nginx).
There you can see:

Supported tags and respective Dockerfile links

Note: all the tags listed on the same line target the same image.


You can pull the image using any of its tags.

E.g.: docker pull nginx, docker pull nginx:1.17.5
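To convince yourself that tags listed on the same line are the same image, pull two of them and compare the IMAGE IDs (this assumes, as at the time of writing, that 1.17 and 1.17.5 sit on the same line of the nginx tag list):

docker pull nginx:1.17.5
docker pull nginx:1.17     # nothing new is downloaded if it points at the same image
docker image ls nginx      # both tags show the same IMAGE ID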


"Explorer" tab shows you the official images,

Images and the Layers:

Image layers
Union file system
history and inspect commands
copy on write


Image layers are a fundamental concept of how Docker works. Docker uses something called the union file system to present a series of file system changes as a single file system.

We'll use the history and inspect commands to understand how an image is put together.

PS C:\Users\MOHI> docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              775349758637        10 days ago         64.2MB
httpd               latest              d3017f59d5e2        11 days ago         165MB
nginx               alpine              b6753551581f        2 weeks ago         21.4MB
nginx               latest              540a289bab6c        2 weeks ago         126MB
alpine              latest              965ea09ff2eb        2 weeks ago         5.55MB
mysql               latest              c8ee894bd2bd        3 weeks ago         456MB
centos              latest              0f3e07c0138f        5 weeks ago         220MB
centos              7                   67fa590cfc1c        2 months ago        202MB
ubuntu              14.04               2c5e00d77a67        5 months ago        188MB
hello-world         latest              fce289e99eb9        10 months ago       1.84kB
elasticsearch       2                   5e9d896dc62c        14 months ago       479MB


PS C:\Users\MOHI> docker history nginx:latest
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
540a289bab6c        2 weeks ago         /bin/sh -c #(nop)  CMD ["nginx" "-g" "daemon…   0B
<missing>           2 weeks ago         /bin/sh -c #(nop)  STOPSIGNAL SIGTERM           0B
<missing>           2 weeks ago         /bin/sh -c #(nop)  EXPOSE 80                    0B
<missing>           2 weeks ago         /bin/sh -c ln -sf /dev/stdout /var/log/nginx…   22B
<missing>           2 weeks ago         /bin/sh -c set -x     && addgroup --system -…   57MB    // Some data changes so size differs
<missing>           2 weeks ago         /bin/sh -c #(nop)  ENV PKG_RELEASE=1~buster     0B
<missing>           2 weeks ago         /bin/sh -c #(nop)  ENV NJS_VERSION=0.3.6        0B
<missing>           2 weeks ago         /bin/sh -c #(nop)  ENV NGINX_VERSION=1.17.5     0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  LABEL maintainer=NGINX Do…   0B      //Only Meta data changes since 0KB
<missing>           3 weeks ago         /bin/sh -c #(nop)  CMD ["bash"]                 0B
<missing>           3 weeks ago         /bin/sh -c #(nop) ADD file:74b2987cacab5a6b0…   69.2MB
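history shows the layers; inspect returns the image's metadata as JSON, i.e. how the image is meant to be run. For example:

docker image inspect nginx:latest                                        # full metadata as JSON
docker image inspect --format '{{.Config.ExposedPorts}}' nginx:latest    # just the exposed ports
docker image inspect --format '{{.Config.Cmd}}' nginx:latest             # the default command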



What do I mean by image layers?
It's actually completely transparent to you when you're using Docker, but when you start digging into certain commands, like history, inspect, and commit, you start to get a sense of how an image is built up.

Every image starts from the very beginning with a blank layer known as scratch.

Then every set of changes that happens after that on the file system, in the image, is another layer.

You might have one layer, or you might have dozens of layers, and some layers may add no change in terms of file size.

You'll notice in the history above that some of the changes are simply metadata changes, which is why they show 0B.

We start with one layer, and every layer gets its own unique SHA that helps the system identify it.

What happens if I have another image that's also using the same version of nginx?
Well, that image can have its own changes on top of the same layer that I already have in my cache. This is where the fundamental concept of the image-layer cache saves us a whole bunch of time and space, because we don't need to download layers we already have. Remember, each layer has a unique SHA, so it's guaranteed to be the exact layer we need, and Docker knows how to match layers between Docker Hub and our local cache. As we make changes to our images, they create more layers.

If we decide to use the same image as the base image for more layers, Docker only ever stores one copy of each layer.

In this system, really, one of the biggest benefits is that we're never storing the same image data more than once on our file system.

It also means that when we're uploading and downloading images, we don't need to transfer layers that already exist on the other side.
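You can watch this in action when pulling a related tag: layers that are already in the local cache are reported as "Already exists" instead of being pulled again (assuming the two tags share layers, which tags from the same release line typically do):

docker pull nginx:latest
docker pull nginx:1.17.5   # shared layers print "Already exists" rather than "Pull complete"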


DNS round robin test (Load balancing):

Ever since Docker Engine 1.11, we can have multiple containers on a user-created network respond to the same DNS name (load balancing).

Using image: elasticsearch:2
Option: --net-alias search
For example, start two elasticsearch containers with --net-alias search; from within the network, using 'search' as the DNS name for connecting, you'd get one of the two servers at random.
Run alpine nslookup search with --net to see the two containers listed for the same DNS name.
Run centos curl -s search:9200 with --net multiple times until you see both "name" fields show up.

// Creating a network "mohi_nw":
PS C:\Users\MOHI> docker network create mohi_nw
bdc0a10f5d4acdc26b70953ab8233c5fe858b4cc265453f278bbc670f4f52b6d

// Creating two containers of "elasticsearch":
PS C:\Users\MOHI> docker container run -d --net mohi_nw --net-alias search elasticsearch:2
Unable to find image 'elasticsearch:2' locally
2: Pulling from library/elasticsearch
05d1a5232b46: Pull complete
5cee356eda6b: Pull complete
89d3385f0fd3: Pull complete
65dd87f6620b: Pull complete
78a183a01190: Pull complete
1a4499c85f97: Pull complete
2c9d39b4bfc1: Pull complete
1b1cec2222c9: Pull complete
59ff4ce9df68: Pull complete
1976bc3ee432: Pull complete
a27899b7a5b5: Pull complete
b0fc7d2c927a: Pull complete
6d94b96bbcd0: Pull complete
6f5bf40725fd: Pull complete
2bf2a528ae9a: Pull complete
Digest: sha256:41ed3a1a16b63de740767944d5405843db00e55058626c22838f23b413aa4a39
Status: Downloaded newer image for elasticsearch:2
4fd6c4c81d858e61414d7c66af81d9ba29443d41ed62ab18586ec9cf9ca8c093

PS C:\Users\MOHI> docker container run -d --net mohi_nw --net-alias search elasticsearch:2
9d19a976898957aa57d0abcdfcee982782bd762e4cc8c6ae4d0a31fea6f71dcc

PS C:\Users\MOHI> docker container ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
9d19a9768989        elasticsearch:2     "/docker-entrypoint.…"   3 minutes ago       Up 3 minutes        9200/tcp, 9300/tcp   nostalgic_visvesvaraya
4fd6c4c81d85        elasticsearch:2     "/docker-entrypoint.…"   6 minutes ago       Up 5 minutes        9200/tcp, 9300/tcp   fervent_antonelli

Now we need to run a test to make sure we can reach both of these with the same DNS name, so we create a few containers to test DNS (nslookup and curl).

PS C:\Users\MOHI> docker container run --rm --net mohi_nw alpine nslookup search
nslookup: can't resolve '(null)': Name does not resolve

Name:      search
Address 1: 172.19.0.2 search.mohi_nw
Address 2: 172.19.0.3 search.mohi_nw

// Checking the load balancing by using a centos container with curl:

PS C:\Users\MOHI> docker container run --rm --net mohi_nw centos curl -s search:9200
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
729ec3a6ada3: Pull complete
Digest: sha256:f94c1d992c193b3dc09e297ffd54d8a4f1dc946c37cbeceb26d35ce1647f88d9
Status: Downloaded newer image for centos:latest
{
  "name" : "Arena",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "4SKj4oiTQhK4nJXPZWhPHg",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}
PS C:\Users\MOHI> docker container run --rm --net mohi_nw centos curl -s search:9200
{
  "name" : "Polaris",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "iLhlDVkJQP6ZuhBjdVyHAg",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}

PS C:\Users\MOHI> docker container run --rm --net mohi_nw centos curl -s search:9200
{
  "name" : "Polaris",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "iLhlDVkJQP6ZuhBjdVyHAg",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}
PS C:\Users\MOHI> docker container run --rm --net mohi_nw centos curl -s search:9200
{
  "name" : "Arena",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "4SKj4oiTQhK4nJXPZWhPHg",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}



// The same DNS round robin test with a different alias value, "test":

PS C:\Users\MOHI> docker container run -d --net mohi_nw --net-alias test elasticsearch:2
852b8032318f224f060a4faf3d8f3272040bc14a5582c3a0d7f0a0ad37347ec7
PS C:\Users\MOHI> docker container run -d --net mohi_nw --net-alias test elasticsearch:2
9417e1493b28999385a072db6f65e46e83fcf1253ed221ec5fb61eb0aa0438cb
PS C:\Users\MOHI> docker container run --rm --net mohi_nw alpine nslookup search
nslookup: can't resolve '(null)': Name does not resolve

Name:      search
Address 1: 172.19.0.2 search.mohi_nw
Address 2: 172.19.0.3 search.mohi_nw
PS C:\Users\MOHI> docker container run --rm --net mohi_nw alpine nslookup test
nslookup: can't resolve '(null)': Name does not resolve

Name:      test
Address 1: 172.19.0.5 test.mohi_nw
Address 2: 172.19.0.4 test.mohi_nw
PS C:\Users\MOHI> docker container run --rm --net mohi_nw centos curl -s test:9200
{
  "name" : "Nikki",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "tkoLDOgKQHO7qyCguKTA9g",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}
PS C:\Users\MOHI> docker container run --rm --net mohi_nw centos curl -s test:9200
{
  "name" : "Brothers Grimm",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qg_eVQhISLC7UxyKTicXIQ",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}