<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[SREngineered - Shubham Rasal]]></title><description><![CDATA[Discover SREngineered by Shubham Rasal: a pragmatic tech blog for DevOps, SRE, Golang enthusiasts. Explore insightful tutorials, best practices, and cutting-edge solutions for engineering excellence]]></description><link>https://blog.shubhcodes.tech</link><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 13:12:21 GMT</lastBuildDate><atom:link href="https://blog.shubhcodes.tech/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Create Simple Chat App Using UDP Protocol In Python]]></title><description><![CDATA[In this blog, we are going to create a chat server based on UDP protocol in Python.
Here’s the problem statement:
🔅 Create your own Chat Servers, and establish a network to transfer data using Socket Programing by creating both Server and Client mac...]]></description><link>https://blog.shubhcodes.tech/create-simple-chat-app-using-udp-protocol-in-python-4539cdbb1ae1</link><guid isPermaLink="true">https://blog.shubhcodes.tech/create-simple-chat-app-using-udp-protocol-in-python-4539cdbb1ae1</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Sun, 18 Apr 2021 02:31:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728599483/10d87181-2133-4396-8cc7-52f6aae0d4fc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog, we are going to create a chat server based on UDP protocol in Python.</p>
<p>Here’s the problem statement:</p>
<p>🔅 Create your own Chat Servers and establish a network to transfer data using Socket Programming, by creating both Server and Client machines as both Sender and Receiver. Do this program using the UDP data transfer protocol.</p>
<p>🔅 Use the multi-threading concept to send and receive data in parallel on both the Server sides. Observe the challenges you face achieving this with UDP.</p>
<p>Before jumping into the code, let’s understand:</p>
<h3 id="heading-what-is-udp">What is UDP?</h3>
<p>In computer networking, the User Datagram Protocol is one of the core members of the Internet protocol suite. With UDP, computer applications can send messages, in this case, referred to as datagrams, to other hosts on an Internet Protocol network.</p>
<p>It is a connectionless protocol and is not reliable, which is why chat apps like Facebook Messenger and WhatsApp do not use it. Its communication speed is comparatively faster than the TCP protocol, so it is widely used in online video streaming and gaming.</p>
<h3 id="heading-creating-chat-serverone-sided-connection">Creating a chat server [one-sided connection]</h3>
<p>In this setup, the server only receives messages and the client can only send them. We can use the socket library in Python.</p>
<p>On the server, we write the code below, which stays in receiving mode using a while True loop.</p>
<pre><code class="lang-plaintext">import socket

# AF_INET is used for IPv4
# SOCK_DGRAM is used for the UDP protocol
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# bind the IP address and port
s.bind(("192.168.225.242", 2222))
print("Server started ...192.168.225.242:2222")
print("Waiting for client response...")

# receive data from the client
while True:
    print(s.recvfrom(1024))
</code></pre>
<p>In client program:</p>
<pre><code class="lang-plaintext">import socket

# client program
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    ip, port = input("Enter server ip address and port number :\n").split()
    m = input("Enter data to send server: ")
    # send to the address the user entered
    res = s.sendto(m.encode(), (ip, int(port)))
    if res:
        print("\nsuccessfully sent")
</code></pre>
<p>We can also see it working in the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728593946/8772797d-7d57-4cbb-a764-0ec7d83868b1.png" alt /></p>
<p>You can see that on the client end we don’t really need to establish a connection with the server; we can directly send data to it.</p>
<p>But the problem here is that the server cannot respond back, and the client cannot receive any message at the same time.</p>
<p>Here we can take the help of multi-threading, with which we can receive as well as send messages simultaneously.</p>
<h3 id="heading-both-sides-communicate-using-multithreading">Both sides communicate using multithreading</h3>
<p>In Python, we have the threading module, with whose help we can run many threads in our program. We can also pass different functions to different threads to run.</p>
<p>Let’s understand with an analogy: consider the human body as a process and its organs as its threads. The liver, heart, lungs, and brain are all involved in the process, and each performs its task simultaneously.</p>
<p><em># chatapp.py</em></p>
<pre><code class="lang-plaintext">import socket
import threading
import os

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("192.168.225.34", 2222))
print("\t\t\t====&gt;  UDP CHAT APP  &lt;=====")
print("==============================================")
nm = input("ENTER YOUR NAME : ")
print("\nType 'quit' to exit.")

ip, port = input("Enter IP address and Port number: ").split()

# sender thread: read from stdin and send to the peer
def send():
    while True:
        ms = input("&gt;&gt; ")
        if ms == "quit":
            os._exit(1)
        sm = "{}  : {}".format(nm, ms)
        s.sendto(sm.encode(), (ip, int(port)))

# receiver thread: print whatever the peer sends
def rec():
    while True:
        msg = s.recvfrom(1024)
        print("\t\t\t\t &gt;&gt; " + msg[0].decode())
        print("&gt;&gt; ")

x1 = threading.Thread(target=send)
x2 = threading.Thread(target=rec)

x1.start()
x2.start()
</code></pre>
<p>Here we are leveraging the power of threads to simultaneously send and receive the messages.</p>
<p>In the above code, I have two functions, send() and rec(), and we start them at the same time. The threading module gives us a Thread class, which we use to create both threads and start them together.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728596808/03977f8f-0ac1-4021-85a2-98d001fdc18a.png" alt /></p>
<p>Typing “quit” exits the program.</p>
<p>UDP is not a reliable protocol: it does not check whether the other system is running, because it is connectionless and has no acknowledgements.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>We have successfully created a simple chat application using the UDP protocol in Python.</p>
<p>I would love to hear your thoughts and ideas about these topics. Don’t hesitate to share in the comment section below.</p>
<p>Also, you can always message me over <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal">LinkedIn</a> as well.</p>
<p>More Ideas:</p>
<p>You can enhance this setup to send OS commands and get the output back, instead of logging in to systems.</p>
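<p>As a sketch of that idea (a hypothetical illustration, not part of the original code — the run_command and command_server names are my own), the receiving side could pass each datagram to the OS shell and send the output back:</p>
<pre><code class="lang-plaintext">import socket
import subprocess

def run_command(cmd):
    # Run a shell command and return its combined stdout/stderr as bytes
    result = subprocess.run(cmd, shell=True, capture_output=True)
    return result.stdout + result.stderr

def command_server(host="0.0.0.0", port=2222):
    # Receive commands as UDP datagrams and send each command's output back
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((host, port))
    while True:
        data, addr = s.recvfrom(1024)
        output = run_command(data.decode()) or b"(no output)"
        # A single UDP datagram has a practical size limit, so truncate long output
        s.sendto(output[:1024], addr)
</code></pre>
<p>The existing client could then send, say, “ls” and print whatever comes back. Remember that UDP gives no delivery guarantee, so a lost reply simply never arrives.</p>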
<h4 id="heading-thank-you">Thank you…</h4>
]]></content:encoded></item><item><title><![CDATA[How to load variable dynamically according to os in ansible?]]></title><description><![CDATA[Ansible 10
Problem Statement:
Create an Ansible Playbook which will dynamically load the variable file named the same as OS_name and just by using the variable names we can Configure our target node.(Note: No need to use when keyword here.)
Let’s und...]]></description><link>https://blog.shubhcodes.tech/how-to-load-variable-dynamically-according-to-os-in-ansible-1d6a01425d3b</link><guid isPermaLink="true">https://blog.shubhcodes.tech/how-to-load-variable-dynamically-according-to-os-in-ansible-1d6a01425d3b</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Fri, 26 Mar 2021 20:54:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729118831/953c8a3f-d842-4a5a-8035-04b29df92637.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-10">Ansible 10</h4>
<h3 id="heading-problem-statement">Problem Statement:</h3>
<p>Create an Ansible Playbook which will dynamically load the variable file named the same as OS_name and just by using the variable names we can Configure our target node.<br />(Note: No need to use when keyword here.)</p>
<p>Let’s understand the use case behind this problem. There are many cases where package names differ across Linux distributions. There are also cases where only the package formats differ, such as deb or rpm.</p>
<p>To solve this problem, Ansible offers us a module named “include_vars”, which loads variables from files dynamically within a task. It can load YAML or JSON variables from a file or directory, recursively, during task runtime.</p>
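<p>For illustration, a minimal task using this module might look like the snippet below (a sketch that assumes the file-naming scheme we build later in this article):</p>
<pre><code class="lang-plaintext">- name: "Load OS-specific variables at task runtime"
  include_vars: "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
</code></pre>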
<p>To detect which operating system we are running tasks on, we can use the “setup” module. It is called automatically by playbooks to gather useful variables about the remote hosts, which can then be used in the playbook. This module is also available for Windows targets.</p>
<p>Let’s take the example of installing the Apache web server on CentOS and Ubuntu. We do not want a separate hard-coded playbook for each distribution or OS.</p>
<p>First, find out which variables the setup module offers us, by which we can detect the remote host’s distribution and accordingly load the variable file on the fly.</p>
<p>First, check that we can connect to both systems using the ping module.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729104689/d7097df0-02e3-46d0-9299-a7e0e88c11dd.png" alt /></p>
<p>Now that we have connectivity to both servers, we need the variable that identifies the remote host type. The setup module gives many variables under the “ansible_distribution” section, so let's see what it returns.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729107570/bf99a674-a217-4284-b1a5-3295578848ad.png" alt /></p>
<p>Now that we know the distribution name and major version, we will create variable files named after that distribution name and version, e.g. Ubuntu-18.yml, CentOS-8.yml.</p>
<p>Alternatively, you can use the os_family variable, which gives the OS family of the remote host.</p>
<p>Create an Ubuntu-18.yml file and add the variables below:</p>
<pre><code class="lang-plaintext"># Ubuntu-18.yml  
package_name: apache2  
service_name: apache2  
document_root: /var/www/html
</code></pre>
<p>Create a CentOS-8.yml file and add the variables below:</p>
<pre><code class="lang-plaintext"># CentOS-8.yml  
package_name: httpd  
service_name: httpd  
document_root: /var/www/html/
</code></pre>
<p>Now we have to write a playbook that will install a webserver and read variables dynamically according to distribution type.</p>
<p>Let's first make sure we are selecting the proper files using Ansible facts. For that, we will use the debug module to print the final filename string.</p>
<pre><code class="lang-plaintext">---  
- name: "Install webserver"  
  hosts: webserver  
  tasks:  
  - name: "Test variables"  
    debug:  
      msg: "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
</code></pre>
<p>Run this playbook, <code>$ ansible-playbook main.yml</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729109479/be0744df-e949-4762-b743-2335d5a0a634.png" alt /></p>
<p>Now let’s write a playbook that will read the variables according to the OS type of remote host and install the webserver.</p>
<pre><code class="lang-plaintext">---
- name: "Install webserver"
  hosts: all
  vars_files:
    - "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"

  tasks:
    - name: "Install the web server"
      package:
        name: "{{ package_name }}"
        state: present

    - name: "Create document root directory"
      file:
        path: "{{ document_root }}"
        state: directory
        recurse: yes

    - name: "Create index.html page in document root"
      copy:
        content: "&lt;h1&gt; Welcome to {{ ansible_distribution }} server !! &lt;/h1&gt;"
        dest: "{{ document_root }}/index.html"

    - name: "Start the service"
      service:
        name: "{{ service_name }}"
        state: started
</code></pre>
<pre><code class="lang-plaintext">- name: "Test the servers"
  hosts: localhost
  tasks:
    - name: "HealthCheck the servers"
      uri:
        url: "http://{{ item }}"
        return_content: yes
      with_items: "{{ groups['webserver'] }}"
      register: output
      failed_when: '"Welcome" not in output.content'
</code></pre>
<p>The above playbook will install the web server and test that it is working properly.</p>
<p><code>$ ansible-playbook main.yml</code></p>
<p>We can also verify that the installation was successful using the curl command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729112200/bf2111da-c846-40c3-a6a9-004ee167dbaa.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729114881/60c69613-bbe1-4f90-b72e-3d9bbcd8a29e.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729116120/fd97b3c2-d33c-47e2-a77d-6d15ba8440e4.png" alt /></p>
<h3 id="heading-conclusion">Conclusion:</h3>
<p>We have successfully made our playbook load variables dynamically according to the distribution and major version number, without using the when keyword.</p>
<h3 id="heading-thank-you">Thank you…</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology, challenges, continuous learning, and reinventing himself. He loves to share his knowledge and solve daily problems using automation.</em></p>
<p><em>Visit him:</em> <a target="_blank" href="https://blog.shubhcodes.tech"><em>blog.shubhcodes.tech</em></a> <em>to know more.</em></p>
]]></content:encoded></item><item><title><![CDATA[Launch and Configure docker container using ansible-playbook]]></title><description><![CDATA[ANSIBLE-9
Launch and Deploy python flask app on docker container using ansible
Even this is a very rare use case where we need to configure the container using ansible. Enabling ssh inside the container is not a good practice, but in some cases, we m...]]></description><link>https://blog.shubhcodes.tech/launch-and-configure-docker-container-using-ansible-playbook-95607550623f</link><guid isPermaLink="true">https://blog.shubhcodes.tech/launch-and-configure-docker-container-using-ansible-playbook-95607550623f</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Thu, 21 Jan 2021 17:33:24 GMT</pubDate><enclosure url="https://cdn-images-1.medium.com/max/800/1*AGY6NFFojf6PZsdWHN4Mvw.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-9">ANSIBLE-9</h4>
<h4 id="heading-launch-and-deploy-python-flask-app-on-docker-container-using-ansible">Launch and Deploy python flask app on docker container using ansible</h4>
<p>Admittedly, this is a rare use case where we need to configure a container using Ansible. Enabling ssh inside a container is not good practice, but in some cases we might need to do it.</p>
<ul>
<li>My use case was to set up an Ansible practice lab where I can instantly use multiple hosts running different Linux distributions without wasting resources. This gives me the power to launch a fresh container each time and test my playbooks on multiple OS distributions with different versions.</li>
<li>Another use case: you have one development server and are deploying many microservices with multiple teams. You can deploy the microservices in containers and give teams ssh access directly to the containers for troubleshooting, instead of giving them direct access to the server.</li>
</ul>
<h3 id="heading-problem-statement">Problem statement</h3>
<p>Write an ansible playbook to</p>
<ol>
<li>Install docker-engine on the host node.</li>
<li>Launch Container and expose it</li>
<li>Update the inventory file with container IP dynamically</li>
<li>Configure deploy python app on the container</li>
</ol>
<p>This article is a step-by-step guide to solving our problem statement.</p>
<h3 id="heading-write-ansible-playbook-to-install-docker-engine">Write Ansible Playbook to install docker-engine.</h3>
<p>Here, I am writing the playbook for RedHat or CentOS.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729126958/31f3d5c5-25f4-47b8-9c46-94587c2e0474.png" alt /></p>
<p>The above playbook adds the Docker yum repo and installs the community edition of docker-engine. To manage Docker containers from Ansible, the Docker SDK for Python is required; to install it, we first install pip and then use pip to install the SDK. Lastly, we start the docker service.</p>
<p>$ ansible-playbook docker-configure.yml</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729129655/6f15dcd5-0ed2-4c49-9588-903e5f33de5b.jpeg" alt /></p>
<p>Create a Dockerfile</p>
<p>We want to build the container in such a way that we can connect to it over ssh using public key authentication, and also connect to it with Ansible to configure it.</p>
<p>Generate an ssh key:</p>
<p>$ ssh-keygen -f ./mycontainerkey</p>
<p>The command above creates two files: a private key and a public key. We want to add the public key to the container.</p>
<p>Now let’s write the Dockerfile.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729132251/e8a0f2f6-805f-48a5-95fe-c71aab28f53e.png" alt /></p>
<p>The above Dockerfile takes ubuntu:latest as the base image. We then create a user with the username ‘docker’, add the generated ssh public key to its authorized keys, and give the docker user sudo power inside the container.</p>
<p>As you can notice, we also use an entrypoint.sh file; let’s create that as well. It will start the ssh service and create the log files.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729134526/626b1057-7024-499e-8ac3-cf674f726ecc.png" alt /></p>
<p>Now that the Dockerfile is ready, it’s time to launch the docker container using Ansible and update the inventory dynamically with the container IP.</p>
<p>We want to deploy a Python Flask app in the container, and we will expose ports accordingly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729137934/28ca5d01-fef1-4d8a-9e04-ec2676414a23.png" alt /></p>
<p>As you can see I have defined a few variables,</p>
<p>dockerfile_folder: the folder where we have stored the Dockerfile, mycontainerkey.pub, and entrypoint.sh. We copy all these files to the host to build the image.</p>
<p>docker_image_name: the name of the image we build from the Dockerfile.</p>
<p>docker_container_name: the name Docker assigns to the launched container.</p>
<p>patting_ssh_port: the port number of the server that will be exposed for ssh, so the team can log in to the container.</p>
<p>patting_http_port: the port for the Python app; I want clients to connect to port 80 of the Docker host and be forwarded to the container.</p>
<p>In this file, we copy the Dockerfile, build the image, and then launch it using the Ansible docker module.</p>
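<p>For reference, the image-build and launch steps might be sketched like this (a hedged sketch, not the exact playbook from the screenshot; it assumes the variables listed above and a container serving on port 8080):</p>
<pre><code class="lang-plaintext">- name: "Build the image from the copied Dockerfile"
  docker_image:
    name: "{{ docker_image_name }}"
    build:
      path: "{{ dockerfile_folder }}"
    source: build

- name: "Launch the container and expose the ssh and http ports"
  docker_container:
    name: "{{ docker_container_name }}"
    image: "{{ docker_image_name }}"
    state: started
    ports:
      - "{{ patting_ssh_port }}:22"
      - "{{ patting_http_port }}:8080"
</code></pre>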
<p>After that, we update the inventory file using lineinfile, adding the docker container’s IP address. The lineinfile module searches for the [containers] pattern and adds a new line after it. (Create an empty [containers] group inside your inventory file first.)</p>
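<p>That inventory-update step could be sketched as follows (hypothetical: the inventory path and the registered variable holding the container IP are assumptions, not the exact names from the screenshot):</p>
<pre><code class="lang-plaintext">- name: "Register the container IP in the inventory"
  lineinfile:
    path: /etc/ansible/inventory
    insertafter: '^\[containers\]'
    line: "{{ container_info.container.NetworkSettings.IPAddress }}"
</code></pre>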
<p>$ ansible-playbook docker-container.yml</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729139972/27e6e0fa-0f5f-4921-8ae8-b2f99e980337.jpeg" alt /></p>
<p>we can cross-check using the <em>$ docker ps</em> command</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729141612/f3320223-7831-4cf0-92cf-332417c12de8.jpeg" alt /></p>
<p>Let’s write a simple hello world python flask app for the demo.</p>
<p>//app.py</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729144217/a339b900-d12c-4a0f-8045-5cad3072b0e3.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729145573/76958e19-f7bd-4147-8cf9-ea2d08d1dabc.png" alt /></p>
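<p>For reference, a hello-world Flask app along these lines might look like the sketch below (the exact app.py is in the screenshots above; the greeting text here is illustrative):</p>
<pre><code class="lang-plaintext">from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # A simple greeting so we can verify the deployment end to end
    return "Hello World from Flask!"

# To serve it the way the playbook does, one would run:
# app.run(host="0.0.0.0", port=8080)
</code></pre>
<p>With the port mapping above, app.run listening on 8080 inside the container is what clients reach via port 80 on the host.</p>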
<p>We want to deploy this application on the container we launched in the steps above, so we will write one more playbook that deploys it over the docker container.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729148123/260c242a-153d-4bd3-bee1-2f78fe8c4a70.png" alt /></p>
<p>The above playbook copies the source code to the /srv/ folder and then installs pip3. After that, it installs the required libraries using pip and finally runs our Flask app.</p>
<p>$ ansible-playbook deploymyapp.yml</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729149804/964e33ae-a2f8-41fc-afdc-0434fd745b85.jpeg" alt /></p>
<p>Now you can open your browser and test whether the application is working. We have exposed the container’s port 8080 on the host machine’s port 80, so copy the host machine’s IP and paste it into the browser.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729151579/19f3dc42-c311-48ba-b783-cd19a857cce9.jpeg" alt /></p>
<p>If you have successfully made it this far… you deserve a pat on the back.</p>
<p>Let me now show you the complete directory structure, so you get the full picture.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/0*SGmaXQXlkzIP3xN-" alt /></p>
<p>You can find all the files in the repository at <a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Playbooks/Docker_container">Playbook/Docker_container</a> location. Don’t forget to star it, that keeps me motivated to solve challenges and to write about them.</p>
<p><a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Playbooks/Docker_container"><strong>ShubhamRasal/ansible-playbooks</strong></a></p>
<p>If you have any doubts, or see something in this blog that needs improvement, please feel free to reach out to me on LinkedIn.</p>
<p>I hope you learned something new and find Ansible more interesting. Let me know your thoughts about Ansible and how you plan to use it.</p>
<h3 id="heading-thank-you">Thank you…</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology, challenges, is open to learning and reinventing himself. He loves to share his knowledge. He is passionate about constant improvements.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, Infrastructure as code</em>.</strong><br /><a target="_blank" href="https://developer-shubham-rasal.medium.com/"><em>Visit his Medium home page to read more insights from him.</em></a></p>
]]></content:encoded></item><item><title><![CDATA[Create a role for setting up a load balancer and web server dynamically]]></title><description><![CDATA[ANSIBLE-8
Ansible role for setting up load balancer using HAProxy and webserver using Apache software.
Introduction 🤓
In this article, we will write a simple role for configuring HAPorxy software for load balancing which will dynamically add backend...]]></description><link>https://blog.shubhcodes.tech/create-a-role-for-setting-up-a-load-balancer-and-web-server-dynamically-8f4e717eee30</link><guid isPermaLink="true">https://blog.shubhcodes.tech/create-a-role-for-setting-up-a-load-balancer-and-web-server-dynamically-8f4e717eee30</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Mon, 11 Jan 2021 03:47:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729200755/1595b46d-ca20-4f10-9296-d2780dfbce83.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-8">ANSIBLE-8</h4>
<h4 id="heading-ansible-role-for-setting-up-load-balancer-using-haproxy-and-webserver-using-apache-software">Ansible role for setting up load balancer using HAProxy and webserver using Apache software.</h4>
<h3 id="heading-introduction">Introduction 🤓</h3>
<p>In this article, we will write a simple role for configuring HAProxy software for load balancing, which will dynamically add backend servers.</p>
<p>I have also covered what a load balancer is and how to write a playbook for configuring a load balancer on AWS.<br />You can refer to that blog to learn more about load balancers and their configuration.</p>
<p><a target="_blank" href="https://medium.com/faun/how-to-configure-load-balancer-and-webserver-on-aws-using-ansible-playbook-60c22c0355ed"><strong>How to configure Load Balancer and webserver on AWS using Ansible Playbook?</strong></a></p>
<p>In this blog, we will focus on writing and using an Ansible role. This is a <strong>continuation</strong> of the blog below, which configures the web servers. Now we want to add a load balancer that dynamically registers the webservers as backend servers.</p>
<p><a target="_blank" href="https://medium.com/faun/ansible-write-ansible-role-to-configure-apache-webserver-9c08aaf66528"><strong>Ansible: Write Ansible role to configure apache webserver</strong></a></p>
<h4 id="heading-action-plan">Action plan 🔥</h4>
<ol>
<li>Install Haproxy</li>
<li>Configure load balancer</li>
<li>Register backend servers that are mentioned in the inventory file.</li>
<li>Use the HAProxy and Apache roles to configure the web server and register it with HAProxy dynamically.</li>
<li>Override default variables.</li>
</ol>
<h3 id="heading-lets-start-the-action">Let’s start the action ☄️</h3>
<p>Initialize ansible role. Read more about <a target="_blank" href="https://docs.ansible.com/ansible/latest/cli/ansible-galaxy.html"><strong>“ansible-galaxy”</strong></a></p>
<pre><code class="lang-plaintext">#create a directory and change into it
$ mkdir roles
$ cd roles
#create the ansible role
$ ansible-galaxy init haproxy
</code></pre>
<p>This command will create the folder structure for a role.<br />First, let’s decide on the variables we are going to use.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729167318/ec143156-cb9c-45d5-abfc-531068eb2315.png" alt /></p>
<p><strong>lb_port_number</strong>: This variable is used for configuring haproxy. Clients will connect to this port number of haproxy.</p>
<p><strong>lb_inventory_group</strong>: This variable is the name of the backend server group name in ansible inventory.</p>
<p><strong>lb_backend_port_number</strong>: This variable is the value of the backend server port number.</p>
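<p>Putting those defaults together, defaults/main.yml might look like this sketch (the values here are illustrative; the real ones are in the screenshot above):</p>
<pre><code class="lang-plaintext"># defaults/main.yml
lb_port_number: 8080
lb_inventory_group: backend
lb_backend_port_number: 80
</code></pre>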
<p>Now that we have declared the variables, we will add a configuration Jinja template that reads them from the defaults/main.yml file.<br />It also registers all the servers mentioned in the backend group of the inventory.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*YkBMfVP7U-7D6n0JZjxk-w.png" alt /></p>
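<p>For orientation, the backend-registration part of such a template might be sketched like this (a hedged sketch assuming the variables above, not the exact haproxy.cfg.j2 from the image):</p>
<pre><code class="lang-plaintext">frontend main
    bind *:{{ lb_port_number }}
    default_backend app

backend app
    balance roundrobin
{% for host in groups[lb_inventory_group] %}
    server app{{ loop.index }} {{ host }}:{{ lb_backend_port_number }} check
{% endfor %}
</code></pre>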
<p>Now that the configuration file is ready, it’s time to write tasks to install haproxy and transfer this template to the managed node.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*UZKuP0NAfNHs7lKEMpf7JA.png" alt /></p>
<p>We have to restart haproxy only when there is a change or we add a new node; for that, you can notice we have notified the “Restart haproxy” handler in the template task. Handlers go in the handlers/main.yml file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729174522/92bd16a1-35d5-43c4-bfcd-436fe83c3cc3.png" alt /></p>
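<p>The handler itself is short; in handlers/main.yml it might read as follows (a sketch matching the handler name notified above):</p>
<pre><code class="lang-plaintext"># handlers/main.yml
- name: "Restart haproxy"
  service:
    name: haproxy
    state: restarted
</code></pre>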
<p>That’s it… We have written the simple ansible role for configuring haproxy.</p>
<p>Now we will use this haproxy role together with the apache role: first we configure the webservers, then we register them in the haproxy load balancer as backends. If a new webserver is added, it is simply configured and registered as a backend dynamically.</p>
<h4 id="heading-use-the-roles">Use the roles</h4>
<p>Create an inventory file that will have two groups for load balancer and backend servers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729177426/c152aaf7-0c05-48a4-9361-f831fc7a8a56.jpeg" alt /></p>
<p>I have three managed nodes. We want to configure two of them as a webserver and one as the load balancer.</p>
<p>Create one file and name it as ‘inventory’. Test the connection using the ping module.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729179806/6deb7500-5f8b-4819-b8d0-40e953bd2951.png" alt /></p>
<p>Write a playbook that will use both the roles.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729181707/c6f59864-c5a0-4b0e-8a35-33bca840db88.png" alt /></p>
<p>Now we want to override default variables used in roles. There are multiple ways to do that but we will use the <strong><em>group_vars</em></strong> directory.</p>
<p>Define webservers variable in <strong>group_vars/backend/main.yml</strong> file.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*D4AGPfCJrsjw7ReV3bmm7g.png" alt /></p>
<p>Define webservers variable in <strong>group_vars/load_balancer/main.yml</strong> file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729187090/94c92d66-dc7d-4c43-9052-18be69a0abf3.png" alt /></p>
<p>Write a demo file to test whether our load balancer is working.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729188617/1e7719c6-e2ed-4790-b227-3ee79a67aa9f.png" alt /></p>
<p>The above file prints the IP address of the managed server, so that we can verify the load is being balanced.</p>
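One way to sketch such a demo page is a small Jinja2 template that renders a per-host fact (the filename and markup here are illustrative, not the article's exact file):

```html
<!-- index.html.j2: each backend renders its own address, so refreshing
     the load balancer URL shows which server answered -->
<h1>Served from {{ ansible_default_ipv4.address }}</h1>
```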
<p>Again, let’s see what the directory structure will be.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729190798/9def02ad-b804-419f-afb2-b79699e23673.png" alt /></p>
<p>Now it's time to run and test our playbook.</p>
<p><strong>$ ansible-playbook webserver_lb.yml -i inventory</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729195182/bb04cee7-b337-4aef-af01-de7a8272617c.png" alt /></p>
<h3 id="heading-output"><strong>Output</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729198098/1f84544e-4773-4a24-b4ad-7c39e117a74f.png" alt /></p>
<p>You can find the above playbook in this GitHub repository. Bookmark or star it; that helps keep me motivated.</p>
<p>[<strong>ShubhamRasal/ansible-playbooks</strong><br />You can find the above-used file here. check it out and don’t forget to mark a start. It helps me to maintain my motivation. Thank you | github.co](https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Roles/myapache "https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Roles/myapache")<a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Roles/myapache"></a></p>
<p>If you have any doubts, or you see something in this blog that needs improvement, please feel free to reach out to me on my <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/"><strong>LinkedIn account.</strong></a></p>
<p>I hope you learned something new and find Ansible more interesting.<br />Let me know your thoughts about Ansible and how you plan to use it.</p>
<h3 id="heading-thank-you">Thank you…</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, Infrastructure as code</em>.</strong><br /><a target="_blank" href="https://developer-shubham-rasal.medium.com/"><em>Visit his Medium home page to read more insights from him.</em></a></p>
]]></content:encoded></item><item><title><![CDATA[Ansible: Write Ansible role to configure apache webserver]]></title><description><![CDATA[ANSIBLE-7
Write Ansible role to configure apache webserver on Redhat and Ubuntu.
What is ansible?
Ansible is an open-source IT Configuration Management, Deployment, and Orchestration tool. Ansible helps us to gain large productivity in a variety of a...]]></description><link>https://blog.shubhcodes.tech/ansible-write-ansible-role-to-configure-apache-webserver-9c08aaf66528</link><guid isPermaLink="true">https://blog.shubhcodes.tech/ansible-write-ansible-role-to-configure-apache-webserver-9c08aaf66528</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Sun, 10 Jan 2021 14:40:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729252246/f04c739f-85bd-41e3-984f-01cd8911d58f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-7">ANSIBLE-7</h4>
<h4 id="heading-write-ansible-role-to-configure-apache-webserver-on-redhat-and-ubuntu">Write Ansible role to configure apache webserver on Redhat and Ubuntu.</h4>
<h3 id="heading-what-is-ansible">What is ansible?</h3>
<p>Ansible is an open-source IT Configuration Management, Deployment, and Orchestration tool. Ansible helps us to gain large productivity in a variety of automation challenges. This tool is very simple to use and also powerful enough to automate complex multi-tier IT application environments.</p>
<p>Ansible offers a simple architecture that doesn’t require special software to be installed on nodes. It also provides a robust set of features and built-in modules which facilitate writing automation scripts.</p>
<p>[<strong>What is Ansible? How ansible is helping companies in automation?</strong><br /><em>Factories that failed to automate fell behind due to so much competition in the market. That’s why automation became…</em>developer-shubham-rasal.medium.com](https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd "https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd")<a target="_blank" href="https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd"></a></p>
<p>In this article, we are going to write an Ansible <strong>role</strong> to configure the Apache webserver. The Apache HTTP server is the most widely-used web server in the world. It provides many powerful features including dynamically loadable modules, robust media support, and extensive integration with other popular software.</p>
<p>You can also refer to the article below, in which I have explained how to write a playbook for configuring the webserver.</p>
<p>[<strong>How to configure apache webserver using ansible?</strong><br /><em>Configure Apache server using ansible-playbook | Shubham Rasal</em>developer-shubham-rasal.medium.com](https://developer-shubham-rasal.medium.com/how-to-configure-apache-webserver-using-ansible-4b7077d88505 "https://developer-shubham-rasal.medium.com/how-to-configure-apache-webserver-using-ansible-4b7077d88505")<a target="_blank" href="https://developer-shubham-rasal.medium.com/how-to-configure-apache-webserver-using-ansible-4b7077d88505"></a></p>
<p>In this guide, <strong>we are going to write an Ansible role to configure the Apache HTTP server.</strong> But first,</p>
<h4 id="heading-what-is-an-ansible-role">What is an Ansible role?</h4>
<p>Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily reuse them and share them with other users.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729208937/c69f4987-519f-42f3-ac4d-1f1fb9482ded.png" alt /></p>
<p>An Ansible role has a defined directory structure with seven main standard directories. You must include at least one of these directories in each role. You can omit any directories the role does not use.</p>
<p>You can read more about the roles and directory structure in detail. <a target="_blank" href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#:~:text=Roles%20let%20you%20automatically%20load,share%20them%20with%20other%20users.">Ansible roles</a>.</p>
<h4 id="heading-lets-start-the-action">Let’s start the action</h4>
<p>Initialize ansible role. Read more about <a target="_blank" href="https://docs.ansible.com/ansible/latest/cli/ansible-galaxy.html"><strong>“ansible-galaxy”</strong></a></p>
<p>#create directory and change directory</p>
<p><strong>$ mkdir roles<br />$ cd roles</strong></p>
<p>#create ansible role</p>
<p><strong>$ ansible-galaxy init myapache</strong></p>
<p>This command will create a folder structure for a role.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729210423/f75a8f2d-b2d0-4952-b466-23da8d6cfb6e.png" alt /></p>
<p>I recommend reading about the directory structure in detail: <a target="_blank" href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#role-directory-structure">Ansible role directory structure.</a></p>
<p>We want our role to configure Apache for both the RedHat and Debian families: it will configure httpd on RedHat distributions and apache2 on Debian distributions.</p>
<h4 id="heading-what-does-this-role-do">What does this Role do?</h4>
<ol>
<li>Install httpd for RedHat or CentOS; install apache2 for Debian.</li>
<li>Create a custom document root folder for the new Apache VirtualHost and set up a test page.</li>
<li>Enable the new Apache VirtualHost.</li>
</ol>
<p>Let’s decide the variables that we will need.</p>
<p>We want role users to be able to override these variables, so we will declare them in the defaults/main.yml file. Variables declared in this file can be overridden by the role user at the time of use.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729212878/2fedfcf9-29c0-41a5-bbb8-fef29129402e.png" alt /></p>
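A sketch of what such a defaults file can look like (the variable names are illustrative; the article's screenshot defines the actual ones):

```yaml
# defaults/main.yml: role defaults, overridable by the role user
http_port: 80
http_host: "www.example.com"
doc_root: "/var/www/{{ http_host }}"
```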
<p>Now we have declared the variable.</p>
<p><strong>Apache Conf Template</strong></p>
<p>The <code>apache.conf.j2</code> file is a <a target="_blank" href="https://jinja.palletsprojects.com/en/2.10.x/">Jinja2</a> template that configures a new Apache VirtualHost. The variables used within this template are defined in the <code>defaults/main.yml</code> variable file.</p>
<p><strong><em>#template/apache.conf.j2</em></strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729214480/41071355-9fc1-4eec-858b-40f6a670d14b.png" alt /></p>
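A minimal sketch of such a template (variable names are illustrative assumptions, not the article's exact ones):

```apache
# templates/apache.conf.j2: VirtualHost rendered from role variables
<VirtualHost *:{{ http_port }}>
    ServerName {{ http_host }}
    DocumentRoot {{ doc_root }}
</VirtualHost>
```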
<p>Write tasks to configure httpd for the RedHat family.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729216396/82d9b0a0-edd8-4401-b3fb-59a0faadd34f.png" alt /></p>
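As a hedged sketch (the screenshot is authoritative; exact module choices and paths here are assumptions), tasks/RedHat.yml can look like:

```yaml
# tasks/RedHat.yml: install, template, and start httpd on RedHat/CentOS
- name: Install httpd
  yum:
    name: httpd
    state: present

- name: Deploy the VirtualHost configuration
  template:
    src: apache.conf.j2
    dest: /etc/httpd/conf.d/vhost.conf
  notify: Restart httpd

- name: Start and enable httpd
  service:
    name: httpd
    state: started
    enabled: yes
```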
<p>The above tasks will configure httpd on RedHat distributions like RedHat Linux or Centos Linux.</p>
<p>Now let’s write tasks for configuring apache2 for Debian.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729218282/60a898de-03b9-45e2-a647-d53ca903ace6.png" alt /></p>
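A comparable sketch for the Debian family (again, module choices and paths are assumptions):

```yaml
# tasks/Debian.yml: install apache2, deploy the VirtualHost, enable it
- name: Install apache2
  apt:
    name: apache2
    state: present
    update_cache: yes

- name: Deploy the VirtualHost configuration
  template:
    src: apache.conf.j2
    dest: /etc/apache2/sites-available/vhost.conf
  notify: Restart apache2

- name: Enable the new VirtualHost
  command: a2ensite vhost.conf
  notify: Restart apache2
```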
<p>Now we have two task files, and we need to run the right one according to the OS type. So let’s include both files (RedHat.yml and Debian.yml) conditionally in the main.yml file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729220873/22f4bd07-d874-4b6a-bb2c-ab048b701c49.png" alt /></p>
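The OS switch can be sketched like this; since the task files are named after Ansible's os_family fact values, a simple when condition is enough:

```yaml
# tasks/main.yml: include the task file matching the managed node's OS
- include_tasks: RedHat.yml
  when: ansible_os_family == "RedHat"

- include_tasks: Debian.yml
  when: ansible_os_family == "Debian"
```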
<p>We have also notified some handlers in the above task files. In an Ansible role, handlers live in the handlers/main.yml file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729222666/883f9e8b-7d38-4a3f-b10c-0596d80cf48e.png" alt /></p>
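A sketch of the handlers file, with one restart handler per service (the handler names must match the names used in notify; these are assumptions):

```yaml
# handlers/main.yml: restart the right service when notified
- name: Restart httpd
  service:
    name: httpd
    state: restarted

- name: Restart apache2
  service:
    name: apache2
    state: restarted
```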
<p>Now we have written all the necessary files. Let’s see the completed directory structure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729224904/b0a4dcb2-b9f3-4d23-9f0a-729d71d00882.png" alt /></p>
<p>Now we can use this role, but before that, we have to tell Ansible where the roles are stored. You can set the roles path in an ansible.cfg file.<br />You can add it directly to /etc/ansible/ansible.cfg, or write it in ~/.ansible.cfg for a user-specific configuration.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729226548/d02571fe-cd22-41eb-bd96-004939f2132a.png" alt /></p>
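For example (the path below is a placeholder for wherever you created the roles directory):

```ini
# ~/.ansible.cfg (or /etc/ansible/ansible.cfg)
[defaults]
roles_path = /root/roles
```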
<p>Now check that your role is registered and ready to use, using the command below.</p>
<p><strong>$ ansible-galaxy role list</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729229985/c3f7587f-2b6d-48f5-a8ed-61952f6a47ee.png" alt /></p>
<p>Let’s test this ansible role.</p>
<p>I have created two instances on AWS, one Ubuntu and one RedHat Linux. Create an inventory file and add their IP addresses and usernames.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729232143/446c2351-c91c-477f-9b4d-ab94337e0e3c.png" alt /></p>
<p>Now add the ssh key.</p>
<p><strong>$ ssh-add mykey.pem</strong></p>
<p>Test that we can successfully connect to our managed nodes using the ping module.</p>
<p><strong>$ ansible webserver -m ping -i inventory</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729235789/f796ad6a-6d44-44ba-ae3e-b3e8de2dfe8a.png" alt /></p>
<p>Now write a playbook that will use our role.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729238066/c45bbbf4-3284-4d6a-9656-56ee09190dd1.png" alt /></p>
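A sketch of that playbook (the webserver group name matches the ping command above; privilege escalation is an assumption):

```yaml
# mywebserver.yml: apply the myapache role to all webserver hosts
- hosts: webserver
  become: yes
  roles:
    - myapache
```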
<p>Now we are ready to run and configure apache servers using an ansible role.</p>
<p><strong>$ ansible-playbook mywebserver.yml -i inventory</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729240695/099f2a8c-1793-4af6-8fc9-cf36990d6a5e.png" alt /></p>
<p>Here’s a detailed output of the above playbook.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729243016/7c7e6669-223d-4e18-8e3f-07168f24e65c.png" alt /></p>
<p>You can find the above playbook in this GitHub repository. Bookmark or star it; that helps keep me motivated.</p>
<p>[<strong>ShubhamRasal/ansible-playbooks</strong><br /><em>How to write an Ansible role to configure the apache server. github.com</em>](https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Roles/myapache "https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Roles/myapache")<a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/tree/master/Roles/myapache"></a></p>
<p>If you have any doubts, or you see something in this blog that needs improvement, please feel free to reach out to me on my <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/"><strong>LinkedIn account.</strong></a></p>
<p>I hope you learned something new and find Ansible more interesting.<br />Let me know your thoughts about Ansible and how you plan to use it.</p>
<p>This article is part of a series on configuring a load balancer and webservers using Ansible roles. You can read the next part here…</p>
<p>[<strong>Create a role for setting up a load balancer and web server dynamically</strong><br /><em>Ansible role for setting up load balancer using HAProxy and webserver using Apache software.</em>developer-shubham-rasal.medium.com](https://developer-shubham-rasal.medium.com/create-a-role-for-setting-up-a-load-balancer-and-web-server-dynamically-8f4e717eee30 "https://developer-shubham-rasal.medium.com/create-a-role-for-setting-up-a-load-balancer-and-web-server-dynamically-8f4e717eee30")<a target="_blank" href="https://developer-shubham-rasal.medium.com/create-a-role-for-setting-up-a-load-balancer-and-web-server-dynamically-8f4e717eee30"></a></p>
<h3 id="heading-thank-you">Thank you…</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, Infrastructure as code</em>.</strong><br /><a target="_blank" href="https://developer-shubham-rasal.medium.com/"><em>Visit his Medium home page to read more insights from him.</em></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729245888/c128b1df-e81b-47a9-94cb-9973581706b0.png" alt /></p>
<p>👋 <a target="_blank" href="https://faun.dev/join"><strong>Join FAUN today and receive similar stories each week in your inbox!</strong></a> ️ <strong>Get your weekly dose of the must-read tech stories, news, and tutorials.</strong></p>
<p><strong>Follow us on</strong> <a target="_blank" href="https://twitter.com/joinfaun"><strong>Twitter</strong></a> 🐦 <strong>and</strong> <a target="_blank" href="https://www.facebook.com/faun.dev/"><strong>Facebook</strong></a> 👥 <strong>and</strong> <a target="_blank" href="https://instagram.com/fauncommunity/"><strong>Instagram</strong></a> 📷 <strong>and join our</strong> <a target="_blank" href="https://www.facebook.com/groups/364904580892967/"><strong>Facebook</strong></a> <strong>and</strong> <a target="_blank" href="https://www.linkedin.com/company/faundev"><strong>Linkedin</strong></a> <strong>Groups</strong> 💬</p>
<p><a target="_blank" href="https://www.faun.dev/join?utm_source=medium.com/faun&amp;utm_medium=medium&amp;utm_campaign=faunmediumbanner"><img src="https://cdn-images-1.medium.com/max/2560/1*_cT0_laE4iPcqW1qrbstAg.gif" alt /></a></p>
<h4 id="heading-if-this-post-was-helpful-please-click-the-clap-button-below-a-few-times-to-show-your-support-for-the-author">If this post was helpful, please click the clap 👏 button below a few times to show your support for the author! ⬇</h4>
]]></content:encoded></item><item><title><![CDATA[Why you should use Kubernetes?]]></title><description><![CDATA[Kubernetes-1
Introduction to Kubernetes. Why we need Kubernetes? Use cases of Kubernetes.
Introduction 🤓
When you talk to IT guys about containers, I am sure the next topic of conversion will be on container management and orchestration.
Hey, but wh...]]></description><link>https://blog.shubhcodes.tech/why-you-should-use-kubernetes-bf395bef52de</link><guid isPermaLink="true">https://blog.shubhcodes.tech/why-you-should-use-kubernetes-bf395bef52de</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Fri, 01 Jan 2021 05:39:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729280851/f7997dac-605b-44fc-a267-a39538704b22.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-kubernetes-1">Kubernetes-1</h4>
<h4 id="heading-introduction-to-kubernetes-why-we-need-kubernetes-use-cases-of-kubernetes">Introduction to Kubernetes. Why we need Kubernetes? Use cases of Kubernetes.</h4>
<h3 id="heading-introduction">Introduction 🤓</h3>
<p>When you talk to IT guys about containers, I am sure the next topic of conversation will be container management and orchestration.</p>
<h4 id="heading-hey-but-what-is-a-container">Hey, but <strong>what is a container?</strong></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729260363/c6e692b2-c7a4-4afe-8bce-d59c38e25271.jpeg" alt /></p>
<p>Linux containers are technologies that allow you to <strong>package</strong> and <strong>isolate</strong> applications with their entire runtime environment: all of the files necessary to run. This makes it easy to move the contained application between environments (dev, staging/test, prod, etc.) while retaining full functionality. Containers help reduce conflicts between your development and operations teams by separating areas of responsibility.<br />The next question then should be,</p>
<h4 id="heading-what-is-container-orchestration">What is container orchestration?</h4>
<p>Container orchestration automates the deployment, management, scaling, and networking of containers. The companies that need to deploy and manage hundreds or thousands of containers and hosts can benefit from container orchestration.<br />Container orchestration automates and manages tasks such as:</p>
<ol>
<li>Provisioning and deployment</li>
<li>Configuration and scheduling</li>
<li>Container availability</li>
<li>Load balancing and traffic routing</li>
<li>Scaling and removing containers</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729262960/22e614fe-ff4c-417f-bebe-21117036b2b7.png" alt /></p>
<p>Now, knowing about containers and the need for orchestration leads to the next question</p>
<h3 id="heading-what-is-kubernetes"><strong>What is Kubernetes?</strong>🤯</h3>
<p><strong>Kubernetes</strong> (also known as <strong>k8s</strong> or <strong>“Kube”</strong>) is an <a target="_blank" href="https://www.redhat.com/en/topics/open-source/what-is-open-source">open-source</a> container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729265269/7ede89ff-d4c4-4254-9fbf-82f4b3caabc1.png" alt /></p>
<p>In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.</p>
<p>Here’s how Dan Kohn, executive director of the <a target="_blank" href="https://www.cncf.io/"><em>Cloud Native Computing Foundation</em></a> <em>(CNCF),</em> <a target="_blank" href="http://bitmason.blogspot.com/2017/02/podcast-cloud-native-computing.html"><em>in a podcast with Gordon Haff</em></a><em>,</em> explained it: “Containerization is this trend that’s taking over the world to allow people to run all kinds of different applications in a variety of different environments. When they do that, they need an orchestration solution in order to keep track of all of those containers and schedule them and orchestrate them. Kubernetes is an increasingly popular way to do that.”</p>
<h3 id="heading-history-of-kubernetes"><strong>History of Kubernetes</strong></h3>
<p>The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines <a target="_blank" href="https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/"><strong>over 15 years of Google’s experience</strong></a> running production workloads at scale with best-of-breed ideas and practices from the community.</p>
<p>Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how <a target="_blank" href="https://speakerdeck.com/jbeda/containers-at-scale"><strong>everything at Google runs in containers</strong></a>. (This is the technology behind Google’s <a target="_blank" href="https://www.redhat.com/en/topics/cloud-computing/what-are-cloud-services">cloud services</a>.)</p>
<p>Google generates more than 2 billion container deployments a week, all powered by its internal platform, <a target="_blank" href="http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html"><strong>Borg</strong></a>. Borg was the predecessor to Kubernetes, and the lessons learned from developing Borg over the years became the primary influence behind much of Kubernetes technology.</p>
<p><strong><em>Fun fact: The 7 spokes in the Kubernetes logo refer to the project’s original name, “</em></strong><a target="_blank" href="https://cloudplatform.googleblog.com/2016/07/from-Google-to-the-world-the-Kubernetes-origin-story.html"><strong><em>Project Seven of Nine</em></strong></a><strong><em>.”</em></strong></p>
<iframe src="https://www.youtube.com/embed/zUJTGqWZtq0?feature=oembed" width="700" height="393"></iframe>

<h3 id="heading-why-you-need-kubernetes"><strong>Why you need Kubernetes?</strong> 🔥</h3>
<p>Containers are a good and easy way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start.</p>
<p>Otherwise, you have to SSH into a server and launch the container, then SSH into another server to launch another one, and whenever a container fails, check on it and launch a replacement by hand.</p>
<p>Wouldn’t it be easier if this behavior was handled by a system?</p>
<p><img src="https://cdn-images-1.medium.com/max/800/0*pMM5mC5RjaRQSblh" alt /></p>
<p>That’s how Kubernetes comes to the rescue! Kubernetes provides us an interface to run distributed systems smoothly. It takes care of scaling and failover for your application, provides deployment patterns, and more.</p>
<p>Kubernetes provides you with:</p>
<ul>
<li><strong>Service discovery and load balancing:</strong> Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.</li>
<li><strong>Storage orchestration:</strong> Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.</li>
<li><strong>Automated rollouts and rollbacks:</strong> You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources to the new container.</li>
<li><strong>Automatic bin packing:</strong> You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.</li>
<li><strong>Self-healing:</strong> Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.</li>
<li><strong>Secret and configuration management:</strong> Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.</li>
</ul>
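As an illustration of the declarative model behind these features (this manifest is a generic example, not from any source cited here), a Deployment states the desired replica count and health check, and Kubernetes continuously reconciles the cluster toward it:

```yaml
# deployment.yaml: Kubernetes keeps 3 replicas alive, restarting any
# container whose liveness probe fails (self-healing), and rolls out
# image changes at a controlled rate (automated rollouts)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
```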
<h3 id="heading-the-future-of-kubernetes">The Future of Kubernetes</h3>
<p>According to the CNCF, Kubernetes is now the second-largest open source project in <a target="_blank" href="https://www.cncf.io/blog/2018/03/06/kubernetes-first-cncf-project-graduate/">the world just behind Linux</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729269658/53608f5a-f0c6-4f61-a6dc-d46b16146e3e.png" alt /></p>
<p>58% of respondents of the survey conducted by CNCF are using Kubernetes in production, while 42% are evaluating it for future use. In comparison, 40% of enterprise companies (5000+) are running Kubernetes in production.</p>
<p>In production, 40% of respondents are running 2–5 clusters, 1 cluster (22%), 6–10 clusters (14%), and more than 50 clusters (13% up from 9%).</p>
<p>As for which environment Kubernetes is being run in, 51% are using AWS (down from 57%), on-premise servers (37% down from 51%), Google Cloud Platform (32% down from 39%), Microsoft Azure (20% down from 23%), OpenStack (16% down from 22%), and VMware (15% up from 1%). The graph below illustrates where respondents are running Kubernetes vs. where they’re deploying containers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729271291/afd15efc-e57e-4292-9426-f130b58ca40a.jpeg" alt /></p>
<h4 id="heading-case-study-spotify">CASE STUDY: Spotify</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729273676/56c9d48c-f6a9-4282-a606-6e503dcd33d6.png" alt /></p>
<h4 id="heading-challenge"><strong>Challenge</strong></h4>
<p>Spotify, an audio streaming platform launched in 2008, has grown to over 200 million monthly active users across the world. The company wanted to empower creators and enable a really immersive listening experience for every consumer Spotify has. Spotify was an early adopter of microservices and Docker, and had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called <a target="_blank" href="https://github.com/spotify/helios"><em>Helios</em></a>. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community.”</p>
<h4 id="heading-solution">Solution</h4>
<p>To solve this challenge, Spotify adopted Kubernetes. The migration, which happened in parallel with Helios still running, went very smoothly, as “Kubernetes fit very nicely as a complement” and eventually replaced Helios. Spotify benefited from added velocity and reduced cost, and also aligned with the rest of the industry on best practices and tools.</p>
<h4 id="heading-impact">Impact</h4>
<p>The <strong>biggest service currently running on Kubernetes takes about 10 million requests per second</strong> as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to <strong>wait for an hour to create a new service</strong> and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.</p>
<h3 id="heading-thank-you">Thank you.</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, Infrastructure as code</em>.</strong><br /><a target="_blank" href="https://developer-shubham-rasal.medium.com/"><em>Visit his Medium home page to read more insights from him.</em></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729275963/ad88a834-cfc4-4a0e-8b46-9b8dc38b2502.png" alt /></p>
]]></content:encoded></item><item><title><![CDATA[How Ansible Tower helps to achieve automation?]]></title><description><![CDATA[ANSIBLE-6
let’s discover the automation with ansible tower use cases
Before understanding what is ansible tower, let's understand what is automation and how an ansible and ansible tower helps us to achieve that.
What is automation?

Automation happen...]]></description><link>https://blog.shubhcodes.tech/how-ansible-tower-helps-to-achieve-automation-b354bc8825ab</link><guid isPermaLink="true">https://blog.shubhcodes.tech/how-ansible-tower-helps-to-achieve-automation-b354bc8825ab</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Tue, 29 Dec 2020 11:22:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728624183/e4559ef7-3526-4a8d-bc88-5883bcd49486.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-6">ANSIBLE-6</h4>
<h4 id="heading-lets-discover-the-automation-with-ansible-tower-use-cases">let’s discover the automation with ansible tower use cases</h4>
<p>Before understanding what Ansible Tower is, let's understand what automation is and how Ansible and Ansible Tower help us achieve it.</p>
<h3 id="heading-what-is-automation">What is automation?</h3>
<blockquote>
<p><strong>Automation happens when one person meets a problem they never want to solve again.</strong></p>
</blockquote>
<p>I would like to define it in simpler words…<br />when someone doesn't want to repeat the same task again and again, that's where automation happens.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728609499/ee542f3d-f652-4a10-9707-8e4b82aa2f2c.jpeg" alt /></p>
<blockquote>
<p><strong>The world is automating and those who are successful in automation will win.</strong></p>
</blockquote>
<p>Let me elaborate a little. We all know manufacturing companies use automation. Automation has transformed factories: it gave manufacturing the ability to perform work faster, more efficiently, and at higher quality. Factories increased their productivity as well as their quality.</p>
<p>Factories that failed to automate fell behind because of intense competition in the market.<br />That’s why automation is becoming essential for businesses to survive.</p>
<blockquote>
<p><strong>This is not an option anymore</strong></p>
</blockquote>
<p>IT departments are also like modern factories that power today’s digital businesses. <strong>And let me make a bold statement here…</strong></p>
<p>Just as today's manufacturing companies cannot compete without automation, IT companies that fail to automate soon fall out of the competition.</p>
<h4 id="heading-why-automation-in-it">Why automation in IT?</h4>
<p>Automation brings a few imperative benefits to an IT organization:</p>
<ol>
<li>Application delivery is fuel for growth</li>
<li>Automation simplifies processes</li>
<li>Automation never sleeps</li>
<li>It avoids repeating the same task over and over</li>
<li>It speeds up your workflow</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728611282/87c45d71-723f-4c88-8555-bac3ac2d09d0.png" alt /></p>
<p>Now that we have some idea of why we need automation, let's see how we do it.</p>
<h3 id="heading-ansible">Ansible</h3>
<p>You need a tool that can act as a glue layer, automating across services and applications no matter where they are. That's where Ansible comes in.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728613621/ea2f2805-319c-4d06-9902-9d85d4a955e2.jpeg" alt /></p>
<p>Yes, you can automate almost everything using ansible from the cloud, containers, network devices, chat applications, monitoring, configuring operating systems, deploying microservices, and much more.</p>
<p>Now, what if you are using Ansible and your server goes down?</p>
<p>Let's say we are using one server as the controller node for Ansible, and for unexpected reasons that server goes down. It becomes a single point of failure, and our entire automation fails with it.</p>
<p>To overcome this, Red Hat provides a clustered solution for Ansible automation, with many more features, called <strong>Ansible Tower</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728616188/c6a20f89-0d50-43fb-8559-76c755497ae0.png" alt /></p>
<h3 id="heading-ansible-tower">Ansible Tower</h3>
<p><strong>Ansible Tower</strong> (whose upstream open-source project is AWX) is a web-based solution that makes <strong>Ansible</strong> even easier to use for IT teams of all kinds. It’s designed to be the hub for all of your automation tasks.</p>
<p>Ansible Tower is an official Red Hat product whose main purpose is to add security features and give access control over who can access what.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728618924/4ebab03b-e9c2-43f7-a120-be09ef9ef466.png" alt /></p>
<p>Ansible Tower has a lot of features, such as:</p>
<ul>
<li>Inventory can be managed graphically, with support for many cloud providers.</li>
<li>It can log all of your jobs.</li>
<li>It integrates well with LDAP.</li>
<li>It provides an amazing browsable REST API.</li>
<li>It has command-line tools for easy integration with CI/CD tools such as Jenkins.</li>
<li>You can create approval-based workflows.</li>
<li>Integrated notifications with Slack, HipChat, and more.</li>
<li>You can schedule Ansible jobs.</li>
</ul>
<blockquote>
<p><em>Ansible Tower has allowed us to provide better operations and security to our clients. It has also increased our efficiency as a team.<br /> — NASA</em></p>
</blockquote>
<h3 id="heading-ansible-tower-use-cases"><strong>Ansible Tower Use Cases</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728621471/9a91cca8-b10f-4c55-951d-8fea8f0ad4dc.png" alt /></p>
<ol>
<li>We can create a template to build cloud infrastructure and provision instances on various clouds such as AWS, Azure, Alibaba, Oracle, etc.</li>
<li>We can also create approval workflows, such as:<br />- launch microservices in a development environment<br />- send a notification to Slack asking for approval to deploy to QA/pre-prod<br />- once approved, deploy to QA/pre-prod<br />- notify each step to Slack or other platforms.</li>
<li>We can store credentials for different purposes, such as cloud credentials, SSH keys, and Git credentials; if no pre-defined credential type fits, you can also create custom credential types.</li>
<li>We can schedule the execution of playbooks and jobs.</li>
<li>We can create a workflow of multiple jobs, such as provisioning VMs and then configuring them.</li>
<li>We can also monitor Tower itself: performance, memory usage, and so on.</li>
<li>We can make effective and efficient use of dynamic inventory with great ease.</li>
</ol>
<h3 id="heading-thank-you">Thank you.</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology, challenges, is open to learning and reinventing himself. He loves to share his knowledge. He is passionate about constant improvements.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, Infrastructure as code</em>.</strong><br /><a target="_blank" href="https://developer-shubham-rasal.medium.com/"><em>Visit his Medium home page to read more insights from him.</em></a></p>
]]></content:encoded></item><item><title><![CDATA[How to configure Load Balancer and webserver on AWS using Ansible Playbook?]]></title><description><![CDATA[ANSIBLE-5
Configure Haproxy dynamically when a new webserver gets added using ansible.
Before starting how to configure the load balancer and web server let’s understand what is load balancer and webservers.
Load Balancer
The load balancer is softwar...]]></description><link>https://blog.shubhcodes.tech/how-to-configure-load-balancer-and-webserver-on-aws-using-ansible-playbook-60c22c0355ed</link><guid isPermaLink="true">https://blog.shubhcodes.tech/how-to-configure-load-balancer-and-webserver-on-aws-using-ansible-playbook-60c22c0355ed</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Thu, 24 Dec 2020 19:11:05 GMT</pubDate><enclosure url="https://cdn-images-1.medium.com/max/800/1*jGqRtYySDvFKVjLTMTDVzg.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-5">ANSIBLE-5</h4>
<h4 id="heading-configure-haproxy-dynamically-when-a-new-webserver-gets-added-using-ansible">Configure HAProxy dynamically when a new web server is added, using Ansible.</h4>
<p>Before configuring the load balancer and web servers, let’s understand what they are.</p>
<h3 id="heading-load-balancer">Load Balancer</h3>
<p>A load balancer is software that distributes incoming tasks across a set of resources to make the overall processing more efficient.</p>
<h4 id="heading-why-we-need-a-load-balancer">Why do we need a load balancer?</h4>
<p>Suppose you have a single server with standard hardware and software, and as you grow, traffic to your app suddenly increases. Every server has limits on how many requests it can serve at a time, imposed by both hardware and software. So you decide to add one more server. It comes with a new IP address, which you then need to add as an <em>A record</em> with your domain name provider, such as godaddy.com, namecheap.com, or AWS Route 53.</p>
<p>As you gain more and more clients and keep adding web servers, it gets complicated: every new IP has to be added to the DNS records at the domain name provider, and DNS changes take time to propagate (a lot of time).</p>
<p>With all this complexity, we want something that manages our servers, faces the clients' requests, spreads those requests across our web servers, and sends the responses back to the clients: <strong>a Load Balancer.</strong></p>
<p>We can use load balancing in many places: in front of web servers, API servers, or database servers.</p>
<p>We are going to use the HAProxy load balancer: free, open-source software that provides a high-availability load balancer and proxy server for TCP- and HTTP-based applications, spreading requests across multiple servers. It is written in C, is very fast and efficient, and supports the <strong>R</strong>ound <strong>R</strong>obin algorithm.</p>
<p>Let's understand the action plan first… before getting hands dirty.</p>
<h4 id="heading-problem-statement"><strong>Problem Statement:</strong></h4>
<ul>
<li><strong>Use an Ansible playbook to configure a reverse proxy, i.e. HAProxy, and update its configuration file automatically each time a new managed node (configured with an Apache web server) joins the inventory.</strong></li>
<li>And set up this on AWS cloud.</li>
</ul>
<p>I have created a small video explaining how to configure the load balancer using an Ansible playbook.</p>
<iframe src="https://www.youtube.com/embed/UVtVSwzoZQ4?feature=oembed" width="700" height="393"></iframe>

<p>I am assuming that you have knowledge about AWS. You can check the below article to launch instances using AWS CLI.</p>
<p><a target="_blank" href="https://developer-shubham-rasal.medium.com/what-is-aws-cli-how-to-use-aws-cli-6f1bdedabd2b"><strong>What is AWS CLI? How to use AWS CLI?</strong></a><br /><em>Launch EC2 instance, create EBS and attach EBS volume to EC2 instance using aws CLI.</em></p>
<p>So let’s understand the architecture we need to solve this use case.</p>
<ol>
<li>We need two security groups: one for the load balancer and one for the web servers. Create a security group for the load balancer allowing ports 80 and 5000, plus port 22 for SSH. Create another security group for the web servers that allows the load balancer's group in its inbound rules on port 5000, plus port 22 for SSH, so that outside clients cannot connect directly to our web servers.</li>
</ol>
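<p>As an aside, the same two security groups could also be created with Ansible itself. Below is a minimal sketch using the <code>amazon.aws</code> collection; the group names, region, and CIDR ranges are assumptions for illustration, not the values from the original setup.</p>

```yaml
# Sketch: create the two security groups described above.
# Assumes the amazon.aws collection is installed and AWS credentials are configured.
- hosts: localhost
  tasks:
    - name: Security group for the load balancer (80, 5000, 22 from anywhere)
      amazon.aws.ec2_security_group:
        name: lb-sg                      # assumed name
        description: Allow HTTP, app port and SSH
        region: ap-south-1               # assumed region
        rules:
          - proto: tcp
            ports: [80, 5000, 22]
            cidr_ip: 0.0.0.0/0

    - name: Security group for web servers (app port only from the LB group)
      amazon.aws.ec2_security_group:
        name: web-sg                     # assumed name
        description: Allow app traffic from the LB group, plus SSH
        region: ap-south-1
        rules:
          - proto: tcp
            ports: [5000]
            group_name: lb-sg            # reference the LB group, not a CIDR
          - proto: tcp
            ports: [22]
            cidr_ip: 0.0.0.0/0
```

<p>Referencing the load balancer group by name in the web server rules is what keeps clients from reaching the web servers directly.</p>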
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729287546/fb109f14-d2c1-455c-aca2-d28734809eb8.png" alt /></p>
<p>2. The image above shows the architecture we want to create. Create one instance as the load balancer and assign it the load balancer security group, then create 2–3 instances and assign them the web server security group.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729289989/84581f24-683c-4a8d-a82f-3f17001ac449.png" alt /></p>
<p>Now we are done with the AWS setup, so let's start with Ansible and write some interesting playbooks. I am using my local machine as the controller node here.</p>
<p>You can get a brief idea about the setup from the below diagram.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729292000/f2150218-c347-4816-bed0-361427db4c17.png" alt /></p>
<p>We want to configure the web servers with Apache and then add each new server's entry to the load balancer, so the load balancer can spread requests to the new node as well.</p>
<p>Now we have the instances' private key file, but Ansible must be able to connect to these instances.<br />The simplest method is to add the key to your SSH agent, so you can ignore the PEM file for all future logins:</p>
<p><strong>$ ssh-add keyfile.pem</strong> </p>
<p>Now you are ready to connect over SSH, and from Ansible as well.</p>
<p>The next step is to add the instances' IPs to the inventory file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729293586/13467a35-9868-4a2d-9232-90a88487ada4.png" alt /></p>
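<p>For reference, the inventory in the screenshot follows this general shape; the IPs below are placeholders, and the group names are assumed from how the playbook uses them.</p>

```ini
; inventory sketch (placeholder IPs)
[loadbalancer]
x.x.x.x  ansible_user=ec2-user

[webserver]
y.y.y.y  ansible_user=ec2-user
z.z.z.z  ansible_user=ec2-user
```

<p>Any new web server added under <code>[webserver]</code> is what the Jinja template later picks up automatically.</p>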
<p>Check the connectivity using</p>
<p><strong>$ ansible all -m ping</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729295475/4c238ef3-78fe-472b-b669-0d9581ef56a9.png" alt /></p>
<p>Now let’s see the folder structure I have created for configuring load balancers as well as web servers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729299248/81a0abea-de89-47c3-91d8-93176d0b0410.png" alt /></p>
<p>Here, the load balancer directory holds the load-balancer-specific files, i.e. the Jinja configuration template and the variables file, and the webservers directory holds its own configuration template and variables file.</p>
<p>The source code has a simple index.php that echoes the machine's IP address, so we can easily verify whether our setup is working.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729301570/5f652897-faaf-4f3f-bf5b-d2168786a6ba.png" alt /></p>
<p>Now let’s see how to configure the web server.<br />You can read more about configuring the web server in the article below.</p>
<p><a target="_blank" href="https://developer-shubham-rasal.medium.com/how-to-configure-apache-webserver-using-ansible-4b7077d88505"><strong>How to configure apache webserver using ansible?</strong></a><br /><em>Configure Apache server using ansible-playbook | Shubham Rasal</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729304332/01629a3b-7fa2-4ffb-94d8-dca9ab4390f7.png" alt /></p>
<p>This is the configuration file of the apache server where we are changing the document root and port for our project.</p>
<p>Now let’s see the web servers' vars.yml file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729306927/391c3eb1-a08e-49b4-9d78-1772be0e2c64.png" alt /></p>
<p>In this file, we are giving the project name, source code directory path, config file path, the port number where we want to deploy our webserver, and the document root of the project on the server.</p>
<p>Also, once a machine is configured as a web server, we have to add its entry to the load balancer's config file so the load balancer can use it. To update that file dynamically, we will again use a Jinja template.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729308904/a0e25097-d68d-40db-a569-cc8663e66d5e.png" alt /></p>
<p>We need to add these Jinja lines to the haproxy.cfg template. They read the IP addresses from the webserver group in the inventory file and automatically add an entry for each one to the load balancer.</p>
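<p>The loop in the screenshot boils down to something like the following sketch; the frontend/backend names and the app port are assumptions based on the setup above, only the Jinja loop over the <code>webserver</code> group is from the article.</p>

```cfg
# haproxy.cfg.j2 (sketch)
frontend main
    bind *:80
    default_backend app

backend app
    balance roundrobin
{% for host in groups['webserver'] %}
    server web{{ loop.index }} {{ host }}:5000 check
{% endfor %}
```

<p>Because the template is re-rendered from the inventory on every run, adding an IP to the <code>[webserver]</code> group is all it takes to register a new backend.</p>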
<p>Now let us see the Ansible playbook.</p>
<p>The playbook installs Apache and PHP on the web servers, then transfers the .conf file and configures the web server.</p>
<p>In the second play, it configures the load balancer and registers the web servers with it.</p>
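<p>The embedded playbook itself is not visible in this export; a minimal sketch of the two plays as described, with module choices and file paths assumed, could look like this.</p>

```yaml
# Play 1: configure the web servers (sketch; paths assumed)
- hosts: webserver
  become: yes
  vars_files:
    - webservers/vars.yml
  tasks:
    - name: Install Apache and PHP
      package:
        name: [httpd, php]
        state: present
    - name: Deploy the virtual host configuration
      template:
        src: webservers/webserver.conf.j2
        dest: /etc/httpd/conf.d/project.conf
      notify: restart httpd
    - name: Copy the website source code
      copy:
        src: mysite/
        dest: /var/www/html/
  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted

# Play 2: configure HAProxy and register the web servers (sketch)
- hosts: loadbalancer
  become: yes
  tasks:
    - name: Install HAProxy
      package:
        name: haproxy
        state: present
    - name: Render haproxy.cfg with all webserver IPs from the inventory
      template:
        src: loadbalancer/haproxy.cfg.j2
        dest: /etc/haproxy/haproxy.cfg
      notify: restart haproxy
  handlers:
    - name: restart haproxy
      service:
        name: haproxy
        state: restarted
```

<p>The second play's <code>template</code> task is where the Jinja loop over the webserver group gets expanded into HAProxy server entries.</p>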
<p>To see the output, I recommend watching the video mentioned above.</p>
<p>You can find all the code and files in the GitHub repository below.</p>
<p><a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/tree/master/load_balancer"><strong>ShubhamRasal/ansible-playbooks</strong></a><br /><em>Ansible playbook to configure a load balancer and webservers.</em></p>
<p>If you have any doubts, or something in this blog needs improvement, please feel free to reach out to me on my <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/"><strong>LinkedIn account.</strong></a></p>
<p>I hope you learned something new and find Ansible more interesting.<br />Let me know your thoughts about Ansible and how you plan to use it.</p>
<h3 id="heading-thank-you">Thank you.</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology, challenges, is open to learning and reinventing himself. He loves to share his knowledge. He is passionate about constant improvements.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, Infrastructure as code</em>.</strong><br /><a target="_blank" href="https://developer-shubham-rasal.medium.com/"><em>Visit his Medium home page to read more insights from him.</em></a></p>
<p><img src="https://cdn-images-1.medium.com/max/800/0*Piks8Tu6xUYpF4DU" alt /></p>
]]></content:encoded></item><item><title><![CDATA[How to configure apache webserver using ansible?]]></title><description><![CDATA[ANSIBLE-4
Configure Apache server using ansible-playbook
Introduction
Redhat Ansible
Ansible is an open-source automation tool by which we can automate all cloud provisioning, configuration management, application deployment,intra-service orchestrati...]]></description><link>https://blog.shubhcodes.tech/how-to-configure-apache-webserver-using-ansible-4b7077d88505</link><guid isPermaLink="true">https://blog.shubhcodes.tech/how-to-configure-apache-webserver-using-ansible-4b7077d88505</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Thu, 17 Dec 2020 20:26:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728644334/f985626a-ea88-40ec-b8f8-3a9fce4fea99.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-4">ANSIBLE-4</h4>
<h4 id="heading-configure-apache-server-using-ansible-playbook">Configure Apache server using ansible-playbook</h4>
<h3 id="heading-introduction">Introduction</h3>
<h4 id="heading-redhat-ansible">Redhat Ansible</h4>
<p>Ansible is an open-source automation tool by which we can automate cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.</p>
<p>Ansible is an open-source community project sponsored by Red Hat, it’s the simplest way to automate IT. Ansible is the only automation language that can be used across <strong>entire IT teams</strong> from systems and network administrators to developers and managers.<br />You can read more about ansible in the below article and how it helps in DevOps.</p>
<p><a target="_blank" href="https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd"><strong>What is Ansible? How ansible is helping companies in automation?</strong></a><br /><em>Factories that failed to automate fell behind due to so much competition in the market. That’s why automation became…</em></p>
<h4 id="heading-apache-webserver">Apache Webserver</h4>
<p>Apache is an open-source and free web server software that <strong>powers around 40% of websites</strong> around the world. The official name is <a target="_blank" href="https://httpd.apache.org/"><strong>Apache HTTP Server</strong></a>, and it’s maintained and developed by the Apache Software Foundation. It allows website owners to serve content on the web — hence the name “webserver.” It’s one of the oldest and most reliable web servers, with the first version released more than 20 years ago, in 1995.</p>
<h3 id="heading-what-do-we-want-to-do">What do we want to do?</h3>
<ol>
<li>Install an Apache web server on the managed node</li>
<li>Configure the web server: change the document root and port number.</li>
<li>Restart the webserver</li>
</ol>
<p><strong>But restarting the httpd service is not idempotent in nature and also consumes extra resources, so we have to find a way to rectify this and solve the challenge in the Ansible playbook.</strong></p>
<h3 id="heading-action">Action</h3>
<h4 id="heading-create-entry-of-managed-node-in-the-inventory-file">Create an entry for the managed node in the inventory file.</h4>
<p>You can refer to this article on how to create an SSH key and use SSH public key authentication.</p>
<p><a target="_blank" href="https://developer-shubham-rasal.medium.com/launching-docker-container-using-ansible-372bbdca9165"><strong>Launching Docker Container Using Ansible</strong></a><br /><em>Launch Apache webserver docker container using ansible</em></p>
<p>Create a new group named “webserver” and add the managed node's IP address there.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728633113/c9588d01-62b8-4098-ac26-739c860e6c30.png" alt /></p>
<h4 id="heading-write-an-ansible-playbook">Write an ansible-playbook</h4>
<p>This is my directory structure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728634664/d76e1b47-27ca-42f6-b903-8bc9779201d7.png" alt /></p>
<p>The vars.yml file has all the variables, while webserver.yml is the Ansible playbook. creatorsbyheart.conf.j2 is a Jinja template for changing the document root and port number, and <em>mysite</em> is the source code directory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728636935/459698e2-2a49-4c20-846a-b7cb78996003.png" alt /></p>
<p>Here we have to mention the project or website name, used to create the conf file in the configuration directory.<br />source_code_path is the source code directory location,<br />webserver_conf_file is the Jinja template file location,<br />and port_number and document_root_location configure the port and location the web server serves from.</p>
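<p>In outline, that vars.yml looks something like the following sketch; the values are assumptions matching the description, not the ones from the screenshot.</p>

```yaml
# vars.yml (sketch; values assumed)
project_name: creatorsbyheart
source_code_path: ./mysite
webserver_conf_file: ./creatorsbyheart.conf.j2
port_number: 8081
document_root_location: /var/www/creatorsbyheart
```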
<p>Now let’s take a look at the conf file for our website</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728639405/bc7daf3f-092f-4dd3-84da-fd9c843f7b8a.png" alt /></p>
<p>Here we are reading the port number and document root from the vars.yml file through the Ansible playbook.</p>
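<p>Such a template typically boils down to a virtual host stanza like the one below; the exact directive set is an assumption, only the two Jinja variables are from the article.</p>

```apache
# creatorsbyheart.conf.j2 (sketch)
Listen {{ port_number }}
<VirtualHost *:{{ port_number }}>
    DocumentRoot "{{ document_root_location }}"
    <Directory "{{ document_root_location }}">
        Require all granted
    </Directory>
</VirtualHost>
```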
<p>Here we are configuring the apache server using ansible.</p>
<p>In Ansible, if we restart a service using the service module, it restarts the service on every run, i.e. it is not idempotent, and we only want to restart the web server when something has actually changed.</p>
<p>So we use handlers, which only get notified when we change the conf file. This way, restarting the web server on every playbook run is avoided.</p>
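<p>The notify/handler pattern described here looks roughly like this sketch; the task names and destination path are assumed.</p>

```yaml
tasks:
  - name: Deploy Apache configuration
    template:
      src: "{{ webserver_conf_file }}"
      dest: /etc/httpd/conf.d/{{ project_name }}.conf
    notify: restart httpd   # fires only when this task reports "changed"

handlers:
  - name: restart httpd
    service:
      name: httpd
      state: restarted
```

<p>Handlers run once at the end of the play, and only if notified, which is what makes the restart effectively idempotent.</p>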
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728641832/2fa6e5c6-2ace-4786-8b5d-007e8080f5d4.png" alt /></p>
<p>I have configured this only for the Red Hat family; you can change the variables and package names according to your managed node's os_family.</p>
<p>You can find the above playbook on this GitHub repository. Bookmark or star it for future use.</p>
<p><a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/blob/master/Docker/docker-configure.yml"><strong>ShubhamRasal/ansible-playbooks</strong></a><br /><em>Contribute to ShubhamRasal/ansible-playbooks development by creating an account on GitHub.</em></p>
<p>If you have any doubts, or something in this blog needs improvement, please feel free to reach out to me on my <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/"><strong>LinkedIn account.</strong></a></p>
<p>I hope you learned something new and find Ansible more interesting.<br />Let me know your thoughts about Ansible and how you plan to use it.</p>
<h3 id="heading-thank-you">Thank you.</h3>
<p><strong><em>About the writer:</em></strong><br /><strong><em>Shubham</em></strong> <em>loves technology, challenges, is open to learning and reinventing himself. He loves to share his knowledge. He is passionate about constant improvements.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, Infrastructure as code</em>.</strong><br /><a target="_blank" href="https://developer-shubham-rasal.medium.com/"><em>Visit his Medium home page to read more insights from him.</em></a></p>
]]></content:encoded></item><item><title><![CDATA[How to configure Hadoop Cluster using Ansible?]]></title><description><![CDATA[How to configure Hadoop Cluster using Ansible?
Ansible -3
In this article, we will configure the Hadoop name node and data node using ansible. — Shubham Rasal
Introduction
We are going to write playbooks for installing Hadoop and configuring both the...]]></description><link>https://blog.shubhcodes.tech/how-to-configure-hadoop-cluster-using-ansible-58d942c59ac0</link><guid isPermaLink="true">https://blog.shubhcodes.tech/how-to-configure-hadoop-cluster-using-ansible-58d942c59ac0</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Thu, 17 Dec 2020 08:15:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729355393/1f9cac7c-b58d-4a44-815a-856d18ad6d0b.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>How to configure Hadoop Cluster using Ansible?</p>
<h4 id="heading-ansible-3">Ansible -3</h4>
<p>In this article, we will configure the Hadoop name node and data node using ansible. — Shubham Rasal</p>
<h3 id="heading-introduction">Introduction</h3>
<p>We are going to write playbooks for installing Hadoop and configuring both the name node and data nodes.</p>
<p>I am assuming you have installed ansible and set the inventory file path.</p>
<p><a target="_blank" href="https://developer-shubham-rasal.medium.com/what-is-big-data-how-big-companies-manage-big-data-21124d639d50"><strong>What is Big Data? How big companies manage Big Data?</strong></a><br /><em>How big MNC’s like Google, Facebook, Instagram, etc stores, manages, and manipulate Thousands of Terabytes of data with…</em></p>
<p>In that article, I have given high-level details about what Hadoop is and why it is used. Check it out if you want to know more about Hadoop.</p>
<p>So without delay, let's start writing our playbooks.</p>
<h3 id="heading-action-mode">Action mode 🔥</h3>
<p>We will use SSH public key authentication to configure the managed/target nodes. The motivation for using public key authentication over simple passwords is security: it provides cryptographic strength that even extremely long passwords cannot offer, and with SSH keys we also don't need to remember long passwords.</p>
<p>So let’s create an SSH key using the commands below.<br />Go to the .ssh folder:</p>
<p><strong>$ cd ~/.ssh<br />$ ssh-keygen -t rsa -b 4096</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729324242/65026c81-2673-4377-9690-5f44bddb5472.jpeg" alt /></p>
<p>It will generate two files, the private and public keys, in the .ssh folder.</p>
<p>Now we have to copy the public key to the managed node for SSH authentication. For that, we will use the ssh-copy-id command, or you can append it to the authorized_keys file manually using scp.</p>
<p><strong>$ ssh-copy-id -i ansible_key.pub username@managed_node_ip</strong></p>
<p>It will ask for the user's password the first time; enter it and you are ready to go. The above command appends your public key to the managed node's authorized_keys file.</p>
<p>In my case, the managed node's IP address is <strong>192.168.225.182</strong>.<br />You can check the IP address using the <em>ifconfig</em> command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729326803/c2f8c91f-5c76-4654-90a6-3ab3627e4033.jpeg" alt /></p>
<p>Now let’s test that we can connect manually to the managed node before moving to an ansible setup.</p>
<p><strong>$ ssh username@hostname</strong></p>
<p>Using the above command, you can verify that the SSH public key was transferred successfully. You can see that the managed node's authorized_keys file now contains your generated public key.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/0*TFhNwcpMJ5ndWhFd.jpeg" alt /></p>
<p>Now that the SSH connection is set up, we are ready to connect to and configure the managed node using Ansible.</p>
<p>Follow the same process for the other nodes to get secure connections. In our case, we have two managed nodes: the Hadoop name node and the data node.</p>
<p>Now create two groups in the inventory file and put IPs in those groups.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729331316/a27a2f09-4cff-45db-b9cf-7b5c3108932f.png" alt /></p>
<p>Before moving forward, let’s test whether we have connectivity, using the ping module.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*e8rqPan7SqN0y5yJevtjaw.jpeg" alt /></p>
<p>yes… now we can move forward</p>
<p>Now we are all set to write the playbook, so let's start.</p>
<p>Now I have partitioned this task into two parts,<br />1. Installation<br />2. Configuration</p>
<p>The installation is the same on both machines, while the configuration differs per node according to need.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*HapKo1vUCGqIspzypaKyYg.png" alt /></p>
<p>This is the directory structure, where hadoop-install.yml holds the installation code and hadoop-configuration.yml holds the configuration code.</p>
<p>We have two directories: 1. namenode, which contains name-node files, and 2. datanode, which contains data-node-specific files.<br />We also have the JDK and Hadoop RPM files which we are going to install.</p>
<p>Let's begin the playbooks, starting with installing Hadoop.<br /><strong><em>//hadoop-install.yml</em></strong></p>
<p>The hadoop-install.yml playbook detects the user's home directory, copies the JDK and Hadoop packages, and, once they are transferred successfully, installs them.</p>
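<p>The embedded playbook is not visible in this export; a minimal sketch of what it describes could look like the following, where the RPM file names and the rpm flags are placeholders and assumptions.</p>

```yaml
# hadoop-install.yml (sketch; file names are placeholders)
- hosts: all
  tasks:
    - name: Detect the remote user's home directory
      command: echo $HOME
      register: user_home
      changed_when: false

    - name: Copy the JDK and Hadoop RPMs
      copy:
        src: "{{ item }}"
        dest: "{{ user_home.stdout }}/"
      loop:
        - jdk.rpm        # placeholder name
        - hadoop.rpm     # placeholder name
      register: copied

    - name: Install them once transferred
      command: rpm -ivh {{ user_home.stdout }}/{{ item }} --force
      when: copied is succeeded
      loop:
        - jdk.rpm
        - hadoop.rpm
```

<p>Running it against both groups installs Hadoop on the name node and the data node in one pass.</p>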
<p>Now let’s run the above playbook</p>
<p>$ ansible-playbook hadoop-install.yml</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729337329/11b019a6-1bda-4923-a4fc-e82e1d71e637.jpeg" alt /></p>
<p>As you can see, we have now installed Hadoop on both the name node and the data node. Let’s confirm the installation on both nodes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729340103/2e205604-53ce-4746-bb35-192139d89c25.jpeg" alt /></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*0ju-oRH_mgoBkL6ORiJG-g.jpeg" alt /></p>
<p>Yes, now we have installed Hadoop… it's time to configure the name node and data node.</p>
<p>Now we have to declare a few variables.</p>
<p>//vars.yml</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729345068/d8c6ba8e-c67b-488b-a0f1-7b0a6cf12a36.png" alt /></p>
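<p>A plausible vars.yml, assuming the single shared variable is the name node port that the templates below reference (the variable name and value are assumptions):</p>

```yaml
# vars.yml -- variables common to both nodes
namenode_port: 9001
```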
<p>This file has only one variable for now, but you can add more variables that are common to both nodes here.</p>
<h4 id="heading-configure-namenode">Configure Namenode:</h4>
<p>We also have a separate vars.yml file for the name node, where we will store name-node-specific variables.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729347184/e1299671-22b4-4811-ad49-035a175278db.png" alt /></p>
<p>We have to edit two files on the name node: 1. core-site.xml and 2. hdfs-site.xml.<br />We will make them dynamic using Jinja templating.</p>
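<p>The original templates were embedded files that are no longer visible here; based on the descriptions that follow, sketches might look like this. The property values and variable names are assumptions.</p>

```xml
<!-- namenode/core-site.xml -- reads the name node IP from the inventory
     group and the port from the global vars.yml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://{{ groups['namenode'][0] }}:{{ namenode_port }}</value>
  </property>
</configuration>

<!-- namenode/hdfs-site.xml -- reads namenode_directory from namenode/vars.yml -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>{{ namenode_directory }}</value>
  </property>
</configuration>
```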
<p>The above core-site.xml template reads the name node IP from the inventory file and the port from the global vars.yml file.</p>
<p>The hdfs-site.xml template reads the namenode_directory variable that we declared in namenode/vars.yml.</p>
<h4 id="heading-configure-datanode">Configure Datanode:</h4>
<p><strong>//datanode/vars.yml</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729350327/c6b7c42a-a7b0-4bc6-b946-9a3638dc47ad.png" alt /></p>
<p><strong>//datanode/core-site.xml</strong></p>
<p><strong>//datanode/hdfs-site.xml</strong></p>
<p>Now that we have created the templates and variables, let's configure the nodes.</p>
<p><strong>hadoop-configuration.yml</strong></p>
<p>Here we are creating a directory, transferring the templates, and starting the Hadoop service on both nodes according to their type.</p>
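<p>A sketch of what hadoop-configuration.yml might look like, matching the tags used in the commands below. The destination paths, variable names, and start commands are assumptions.</p>

```yaml
# hadoop-configuration.yml -- a sketch; paths and commands are assumptions
- hosts: namenode
  tags: namenode
  vars_files:
    - vars.yml
    - namenode/vars.yml
  tasks:
    - name: create the name node directory
      file:
        path: "{{ namenode_directory }}"
        state: directory

    - name: transfer the templated config files
      template:
        src: "namenode/{{ item }}"
        dest: "/etc/hadoop/{{ item }}"
      loop:
        - core-site.xml
        - hdfs-site.xml

    - name: format and start the name node
      shell: echo Y | hadoop namenode -format && hadoop-daemon.sh start namenode

- hosts: datanode
  tags: datanode
  vars_files:
    - vars.yml
    - datanode/vars.yml
  tasks:
    - name: create the data node directory
      file:
        path: "{{ datanode_directory }}"
        state: directory

    - name: transfer the templated config files
      template:
        src: "datanode/{{ item }}"
        dest: "/etc/hadoop/{{ item }}"
      loop:
        - core-site.xml
        - hdfs-site.xml

    - name: start the data node
      shell: hadoop-daemon.sh start datanode
```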
<p>#to run only for name node<br />$ ansible-playbook hadoop-configuration.yml --tags namenode</p>
<p>#to run only for data node<br />$ ansible-playbook hadoop-configuration.yml --tags datanode</p>
<p>#to run full playbook<br />$ ansible-playbook hadoop-configuration.yml</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729351836/f57a8ddb-e93b-4007-986a-4ff3a6017736.png" alt /></p>
<p>And we are done… we have successfully completed the configuration setup for Hadoop using ansible.</p>
<p>You can confirm using the output below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729353383/2bc37721-536d-4dd3-b2b7-0971e3832cfc.png" alt /></p>
<p>You can find the above playbook on this GitHub repository. Bookmark or star it for future use.</p>
<p><a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/blob/master/Docker/docker-configure.yml"><strong>ShubhamRasal/ansible-playbooks</strong> (github.com)</a></p>
<p>If you have any doubts, or see something in this blog that needs improvement, please feel free to reach out to me on my <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/">LinkedIn account.</a></p>
<p>I hope you learned something new and find Ansible more interesting.<br />Let me know your thoughts about Ansible and how you plan to use it.</p>
<h3 id="heading-thank-you">Thank you.</h3>
<p><strong><em>About the writer:</em></strong><br /><em>Shubham loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, and Infrastructure as Code</em>.</strong><br /><a target="_blank" href="https://medium.com/@developer-shubham-rasal">Visit his Medium home page to read more insights from him.</a></p>
]]></content:encoded></item><item><title><![CDATA[Launching Docker Container Using Ansible]]></title><description><![CDATA[Ansible -2
Launch Apache webserver docker container using ansible
We are going to launch the apache webserver docker container on top of a managed node with the help of Ansible.
Before moving to the action let's take a look at what is docker, ansible...]]></description><link>https://blog.shubhcodes.tech/launching-docker-container-using-ansible-372bbdca9165</link><guid isPermaLink="true">https://blog.shubhcodes.tech/launching-docker-container-using-ansible-372bbdca9165</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Sun, 13 Dec 2020 21:03:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729730171/d497c29f-127c-4683-96aa-54452f986299.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-2">Ansible -2</h4>
<h4 id="heading-launch-apache-webserver-docker-container-using-ansible"><strong>Launch Apache webserver docker container using ansible</strong></h4>
<p>We are going to launch the apache webserver docker container on top of a managed node with the help of Ansible.</p>
<p>Before moving to the action let's take a look at what is docker, ansible, and apache server so we will get a little idea about these technologies and the need for this integration. so without waiting let's get into the introduction.</p>
<h3 id="heading-introduction">Introduction 🤓</h3>
<h4 id="heading-redhat-ansible">Redhat Ansible</h4>
<p>Ansible is an open-source automation tool with which we can automate cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.</p>
<p>Ansible is an open-source community project sponsored by Red Hat, and it's the simplest way to automate IT. Ansible is the only automation language that can be used across <strong>entire IT teams</strong>, from systems and network administrators to developers and managers.<br />You can read more about Ansible, and how it helps in DevOps, in the article below.</p>
<p>[<strong>What is Ansible? How ansible is helping companies in automation?</strong><br /><em>Factories that failed to automate fell behind due to so much competition in the market. That’s why automation became…</em>developer-shubham-rasal.medium.com](https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd "https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd")<a target="_blank" href="https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd"></a></p>
<h4 id="heading-docker">Docker 🐳</h4>
<p>Docker is an open-source project launched in 2013. It is a software platform for building applications based on containers: small, lightweight execution environments that make shared use of the operating system kernel but otherwise run in isolation from one another.</p>
<p><strong>Docker</strong> enables you to separate your applications from your infrastructure so you can deliver software quickly.</p>
<h4 id="heading-apache-webserver">Apache Webserver</h4>
<p>Apache is an open-source and free web server software that <strong>powers around 40% of websites</strong> around the world. The official name is <a target="_blank" href="https://httpd.apache.org/"><strong>Apache HTTP Server</strong></a>, and it’s maintained and developed by the Apache Software Foundation. It allows website owners to serve content on the web — hence the name “webserver.” It’s one of the oldest and most reliable web servers, with the first version released more than 20 years ago, in 1995.</p>
<h3 id="heading-problem-statement">Problem Statement 🤯</h3>
<p>Write an Ansible PlayBook that does the following operations in the managed nodes:</p>
<ol>
<li>Configure Docker.</li>
<li>Start and enable Docker services.</li>
<li>Pull the httpd (Apache) server image from the Docker Hub.</li>
<li>Run the docker container and expose it to the public.</li>
<li>Copy the html code in the document root directory and start the webserver.</li>
</ol>
<p>Now we have a clear idea of what we want to do.<br />So let's see, step by step, how we are going to do it.</p>
<h3 id="heading-action-mode">Action mode 🔥</h3>
<p>We will use SSH public key authentication to configure the managed/target nodes. The motivation for using public-key authentication over simple passwords is security: public key authentication provides cryptographic strength that even extremely long passwords cannot offer. With SSH we also don't need to remember long passwords.</p>
<p>So let's create an SSH key using the commands below. First, go to the .ssh folder.</p>
<p><strong>$ cd .ssh</strong></p>
<p><strong>$ ssh-keygen -t rsa -b 4096</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729652468/bd506b66-1f1a-4c6a-b4d8-6941dfef987d.jpeg" alt /></p>
<p>This generates two files, a private key and a public key, in the .ssh folder.</p>
<p>Now we have to copy the public key to the managed node for SSH authentication. For that we will use the ssh-copy-id command, or you can use scp to transfer the key and append it to the authorized_keys file yourself.</p>
<p><strong>$ ssh-copy-id -i ansible_key.pub username@managed_node_ip</strong></p>
<p>It will ask for the user's password the first time; enter the password and you are ready to go. The above command appends your key to the SSH authorized_keys file of the managed node.</p>
<p>In my case, my managed node IP address is <strong>192.168.225.182.  
</strong>You can check IP address using <em>$ ifconfig</em> command</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729655007/68724d3e-8659-4ba7-8433-a474dc884dcd.jpeg" alt /></p>
<p>Now let’s test that we can connect manually to the managed node before moving to an ansible setup.</p>
<p><strong>$ ssh username@hostname</strong></p>
<p>Using the above command you can verify that we have successfully transferred the SSH public key. You can see that the managed node's authorized_keys file now contains your generated public key.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729656535/39c90746-4354-4366-873d-e6ccb0084170.jpeg" alt /></p>
<p>Now that we have set up the SSH connection, we are ready to connect to and configure the managed node using Ansible.</p>
<p>Now add the managed node IP to your inventory file.<br />I have created one group called '<em>docker</em>' and added the managed node IP inside that group. You can see it in the image below.</p>
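<p>The inventory entry described above can be sketched as follows, using the managed node IP shown earlier in this article:</p>

```ini
# inventory file -- 'docker' group with the managed node's IP
[docker]
192.168.225.182
```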
<p><strong>$ ansible docker --list-hosts</strong></p>
<p>The above command lists the IP addresses in the group that we added to the inventory file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729658044/7cdbb6c0-d1b0-4133-9684-1f69ff8bb3b7.jpeg" alt /></p>
<p>Now let's run a first command to ensure that Ansible has a proper connection with the managed node.</p>
<p><strong>$ ansible docker -m ping</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729659504/7291bb3e-e86c-452c-86df-63ea81647346.jpeg" alt /></p>
<p>If you see a green response with ping: pong, then yes, we are fully ready to start.</p>
<h4 id="heading-install-docker">Install Docker</h4>
<ul>
<li><strong>Configure yum repository</strong></li>
</ul>
<p>First, we have to add a yum repo for Docker so we can download the Docker software from the yum repository.<br />Let's check whether the managed node already has a yum repo for Docker.</p>
<p>The below image shows the managed node's command output. I will show screenshots of both the managed node and the controller node for better understanding.</p>
<p>You can see that it does not have any docker repo added.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729661683/b66d2b88-4905-42bd-b406-922b277d68d6.jpeg" alt /></p>
<p>Ansible has a yum_repository module with which we can add a yum repository on the managed node. But we want to configure yum only if the managed node is from the Red Hat family.</p>
<p>Create an Ansible playbook with the below code and the extension .yml.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729663225/9c883f8b-ea1a-4cc3-b507-3e7ed81f4d51.jpeg" alt /></p>
<p>You can see in the above image that I have set hosts to 'docker', the group name that we added in the inventory file. Now that we have added the above code, it's time to run it.</p>
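<p>A sketch of what the playbook in the screenshot might look like. The repo name and baseurl here are the standard Docker CE CentOS repository, but treat them as assumptions; adjust for your distribution.</p>

```yaml
# docker-configure.yml -- repo task sketch
- hosts: docker
  tasks:
    - name: configure the Docker yum repository (Red Hat family only)
      yum_repository:
        name: docker-ce
        description: Docker CE Stable
        baseurl: https://download.docker.com/linux/centos/7/$basearch/stable
        gpgcheck: no
      when: ansible_os_family == "RedHat"
```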
<p><strong>$ ansible-playbook docker-cofigure.yml</strong></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*hIAChRVMUTljoRNdfQl-qA.jpeg" alt /></p>
<p>You can see the orange output, which means something changed on the managed node, so we have configured the Docker repo successfully. You can verify this in the below image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729667611/1915c976-c273-48ee-8389-bd0a1bac87bb.png" alt /></p>
<h4 id="heading-install-docker-1"><strong>Install docker</strong></h4>
<p>Now that we have configured the yum repo, we can easily install the Docker software using the package module in Ansible.</p>
<p>Before this, let's check whether Docker is already installed on the managed node. As you can see, our managed node does not have the Docker software.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729670131/c5b26f61-5b8e-43e2-949d-67bff8166b3b.jpeg" alt /></p>
<p>To install software we can use the package module in Ansible, so let's take its help to install Docker. Add the below code under the yum repository task.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*S2zBLTBoaL5OEG7EYAbJTA.jpeg" alt /></p>
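<p>The install task shown in the screenshot might be sketched like this (the package name docker-ce matches the repo configured above, but is an assumption):</p>

```yaml
    - name: install Docker CE
      package:
        name: docker-ce
        state: present
      when: ansible_os_family == "RedHat"
```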
<p>After this, it's time to run the playbook again and test.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729674595/9061fc8e-27ee-4d05-b5bc-b9cb1a149f2f.jpeg" alt /></p>
<p>Ansible tasks are idempotent, so you can see that it does not configure yum again. It changed something in the Docker task and returned orange output, which means it has successfully installed the Docker software for us. Let's go and check on the managed node.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729676850/c4acf975-331b-46ad-880f-6b3feb291b5a.jpeg" alt /></p>
<p>Yes, we have successfully installed Docker on the managed node.</p>
<p>Now, to perform operations with Docker, we have to start the Docker service, because it is inactive by default.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729679355/e8d76b4a-f477-4810-9d7b-83fbd8eb36c8.png" alt /></p>
<p>To start the Docker service we have the 'service' module in Ansible; we will use it here.</p>
<p>Add the below code to our playbook file to start the Docker service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729681924/f3ab5acb-267f-477d-b412-a97b6f131bb5.jpeg" alt /></p>
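<p>The service task in the screenshot might look like this sketch; enabling the service at boot is an assumption beyond the bare start:</p>

```yaml
    - name: start and enable the Docker service
      service:
        name: docker
        state: started
        enabled: yes
```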
<p>now let’s run the playbook again to see the effect.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*YB6P2Ls7IFFnzt05OWBpYQ.png" alt /></p>
<p>Now that we have started Docker using Ansible, let's confirm it using the systemctl command on the managed node.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*-B7o7u5oDVmEo7yXZgURcA.png" alt /></p>
<p>Now we have installed and started Docker successfully; it's time to launch the container. But first we have to install the Docker SDK for Python, because it is a <a target="_blank" href="https://docs.ansible.com/ansible/latest/collections/community/general/docker_container_module.html">pre-requisite</a> for our next module.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*jDrdFte1LV-JCl0MijLZQg.jpeg" alt /></p>
<p>So let's write the code to install the Docker SDK before launching the container. First, install pip3 using the package module, and then install the Docker SDK using pip.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*e9UoDV9Jgs4EvtHL4lZ-0g.jpeg" alt /></p>
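<p>The two tasks from the screenshot can be sketched as follows (the package name python3-pip may vary by distribution):</p>

```yaml
    - name: install pip3
      package:
        name: python3-pip
        state: present

    - name: install the Docker SDK for Python
      pip:
        name: docker
```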
<p>Now run these tasks using <strong>$ ansible-playbook docker-configure.yml</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729690834/d1999cd5-93c6-4730-a113-ba444c175008.jpeg" alt /></p>
<p>You can see in the above image that it has successfully installed the Docker SDK using pip. You can check it on the managed node using the pip list command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729693534/168f182a-3764-498f-8bf5-3c7d409dd68b.png" alt /></p>
<p>Now we are completely finished with the Docker installation, and we can launch as many containers as we want.</p>
<p>But before that, we have to copy the source code of our website to the managed node.<br />We could copy it anywhere on the managed node, but let's keep it a little organized: create one directory for our project and then copy the source code into that directory.</p>
<p>So let's declare one variable in the playbook. You could also create a separate file for variables and import them from that file, but for simplicity I am keeping everything in a single playbook. The benefit of declaring a variable is that for another project I don't need to manually update the code; I just update the variable's value.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729696793/99fcf194-b842-4e49-a1fe-11d8dce2a6c4.png" alt /></p>
<p>Here “dev-creatorsbyheart” is my development version of the creatorsbyheart project. I also want to use this as the container name so I can differentiate between different environments of the project.<br />Let's create the directory where we want to copy the source code.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*bOez4SpGSJBL33Xgqyr0QQ.jpeg" alt /></p>
<p>Let's confirm that the directory does not already exist. As you can see, at the current location there is no directory with the project name.</p>
<p>Add below code at the end of the playbook.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*Mvz3bxeYKGDSqwRRHzJ18Q.png" alt /></p>
<p>Here we are reading the value of user_dir from ansible_facts. In our case it is /root, as we have logged in as root. Then, at that location, we create a new directory with the project name using the 'file' module in Ansible.<br />Let's run it and check whether it was created successfully.</p>
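<p>The task described above can be sketched as follows. Here ansible_env.HOME stands in for the user directory fact the article reads; project_name is the variable declared earlier.</p>

```yaml
    - name: create the project directory in the remote user's home
      file:
        path: "{{ ansible_env.HOME }}/{{ project_name }}"
        state: directory
```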
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729703634/ad2c20bf-0887-40de-aeeb-e4d9b3ae24c3.jpeg" alt /></p>
<p>Now that we have created the directory, it's time to copy the source code into it on the managed node.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729705170/c459e589-8e58-4559-8a02-f37f70b46bc4.jpeg" alt /></p>
<p>For copying files and directories we have the 'copy' module in Ansible; we will use that module to achieve our goal.<br />Before that, declare a variable for the project files, because the path varies for different projects.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729707489/39a92634-4dd2-4b56-9778-0624e4f0460c.png" alt /></p>
<p>Add the below task to copy the files.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729710992/f2b4ec2d-7ed8-4ecf-aa17-ac6c85ed9d50.png" alt /></p>
<p>Here we are copying files from the directory named in the source_code_dir variable to the destination folder we created in the previous step.</p>
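<p>A sketch of the copy task; the source path value is a hypothetical placeholder for wherever your project files live on the controller node.</p>

```yaml
    # source_code_dir is a variable because the path varies per project,
    # e.g. source_code_dir: /home/user/projects/creatorsbyheart/
    - name: copy the website source code to the managed node
      copy:
        src: "{{ source_code_dir }}"
        dest: "{{ ansible_env.HOME }}/{{ project_name }}/"
```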
<p>Let's run it and check whether Ansible copied the files to the proper destination.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729713750/9ba081f6-847e-4deb-ab71-459a92b90341.jpeg" alt /></p>
<p>Now the time has arrived to launch the container and deploy our website with it. My need is an Apache webserver, so I will use the Apache (httpd) Docker image. Before that, let's check whether any container is running on the managed node.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729716234/629faea2-ebb8-4ba9-82cf-3ffd03f1a894.jpeg" alt /></p>
<p>No, there is no container running now, so let's run our webserver without wasting time.<br />At some point we will have multiple Docker containers and the port number will vary, so create a new variable for the port number.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*hiUnswMaN6Le2AoqT5Muog.jpeg" alt /></p>
<p>And add the below task for launching the Docker container. I have used a variable only for the port number; you can customize it as per your need.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729720170/42b63eb4-dffa-44e5-8cbc-fae3a7e0b31d.png" alt /></p>
<p>We are using the Ansible 'docker_container' module to launch an httpd container. It will pull the image if it is not already present on the managed node and launch the container. We expose its HTTP port and attach the source code directory to the document root of the Apache webserver, so the container can directly access files from that directory.</p>
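<p>The launch task described above might be sketched like this. The web_port variable and the httpd document root path are assumptions consistent with the stock httpd image.</p>

```yaml
    # assumes a variable like web_port: 8080 declared in vars
    - name: launch the httpd container and expose it
      docker_container:
        name: "{{ project_name }}"
        image: httpd
        state: started
        ports:
          - "{{ web_port }}:80"
        volumes:
          # mount the copied source code over Apache's document root
          - "{{ ansible_env.HOME }}/{{ project_name }}:/usr/local/apache2/htdocs"
```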
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729721932/079c13e3-9fae-4fdd-bd9e-81b642383ad1.jpeg" alt /></p>
<p>Let's check whether the container is running on the managed node.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729724360/a638dcad-8f95-408d-b53e-06e7f62444c0.jpeg" alt /></p>
<p>Yes, it is running, and now we can connect to the webserver using the managed node IP and the port we mentioned in the variables.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729726934/a6ecb718-367c-4d02-9377-56f93ff4ed12.jpeg" alt /></p>
<p>If you are facing trouble connecting to this URL, you might need to restart the firewall on the managed node or add port-forwarding rules for the container.<br />In Ansible we have the 'firewalld' module for that; use it as you need.</p>
<p>You can do many more things with Ansible. To learn what we can achieve with it, I recommend you read this blog.</p>
<p>[<strong>What is Ansible? How ansible is helping companies in automation?</strong><br /><em>Factories that failed to automate fell behind due to so much competition in the market. That’s why automation became…</em>developer-shubham-rasal.medium.com](https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd "https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd")<a target="_blank" href="https://developer-shubham-rasal.medium.com/what-is-ansible-acb573304acd"></a></p>
<p>You can find the above playbook on this GitHub repository. Bookmark or star it for future use.</p>
<p><a target="_blank" href="https://github.com/ShubhamRasal/ansible-playbooks/blob/master/Docker/docker-configure.yml"><strong>ShubhamRasal/ansible-playbooks</strong> (github.com)</a></p>
<p>If you have any doubts, or see something in this blog that needs improvement, please feel free to reach out to me on my <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/">LinkedIn account.</a></p>
<p>I hope you learned something new and find Ansible more interesting.<br />Let me know your thoughts about Ansible and how you plan to use it.</p>
<h4 id="heading-thank-you">Thank you.</h4>
<p><strong><em>About the writer:</em></strong><br /><em>Shubham loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.<br />He writes blogs about</em> <strong><em>Cloud Computing, Automation, DevOps, AWS, and Infrastructure as Code</em>.</strong><br /><a target="_blank" href="https://medium.com/@developer-shubham-rasal">Visit his Medium home page to read more insights from him.</a></p>
]]></content:encoded></item><item><title><![CDATA[Create High Availability Architecture with AWS CLI | Shubham Rasal]]></title><description><![CDATA[AWS -11
Creating cloud architecture for the web app with low latency for resources
We are going to create the above architecture using AWS CLI. Let’s discuss architecture and how we are going to do it?🔶The architecture includes-◼️ Launch EC2 instanc...]]></description><link>https://blog.shubhcodes.tech/create-high-availability-architecture-with-aws-cli-shubham-rasal-7236c75417ad</link><guid isPermaLink="true">https://blog.shubhcodes.tech/create-high-availability-architecture-with-aws-cli-shubham-rasal-7236c75417ad</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Thu, 10 Dec 2020 20:08:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729394130/e4dbc3bd-df76-4653-b72d-d55c86365e40.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-aws-11">AWS -11</h4>
<h4 id="heading-creating-cloud-architecture-for-the-web-app-with-low-latency-for-resources">Creating cloud architecture for the web app with low latency for resources</h4>
<p>We are going to create the above architecture using the AWS CLI. Let's discuss the architecture and how we are going to build it.<br />🔶<strong>The architecture includes:</strong><br />◼️ Launch an EC2 instance<br />◼️ Create an EBS volume<br />◼️ Mount the EBS volume on the EC2 instance<br />◼️ Create an S3 bucket<br />◼️ Upload static objects such as images, videos, or documents to the S3 bucket<br />◼️ Create a content delivery network (CDN) for the S3 bucket using an AWS CloudFront distribution<br />◼️ Configure the instance as a webserver<br />◼️ Place the CloudFront URL in the application for security and low latency<br />◼️ Deploy the source code on the EC2 instance's webserver</p>
<p>The final goal is to achieve all of these tasks using the AWS CLI, without visiting the web console or making any manual clicks.</p>
<p>If you are very new to AWS CLI, I would like you to check out the basic introduction to AWS CLI</p>
<p>[<strong>What is AWS CLI? How to use AWS CLI?</strong><br /><em>Launch EC2 instance, create EBS and attach EBS volume to EC2 instance using aws CLI.</em>developer-shubham-rasal.medium.com](https://developer-shubham-rasal.medium.com/what-is-aws-cli-how-to-use-aws-cli-6f1bdedabd2b "https://developer-shubham-rasal.medium.com/what-is-aws-cli-how-to-use-aws-cli-6f1bdedabd2b")<a target="_blank" href="https://developer-shubham-rasal.medium.com/what-is-aws-cli-how-to-use-aws-cli-6f1bdedabd2b"></a></p>
<p>[<strong>Deploying Angular App to AWS S3 with CloudFront using AWS CLI</strong><br /><em>Deploy angular website on aws using aws CLI</em>developer-shbham-rasal.medium.com](https://developer-shubham-rasal.medium.com/deploying-angular-app-to-aws-s3-with-cloudfront-using-aws-cli-ace33350a950 "https://developer-shubham-rasal.medium.com/deploying-angular-app-to-aws-s3-with-cloudfront-using-aws-cli-ace33350a950")<a target="_blank" href="https://developer-shubham-rasal.medium.com/deploying-angular-app-to-aws-s3-with-cloudfront-using-aws-cli-ace33350a950"></a></p>
<h3 id="heading-action-mode">Action Mode 🔥</h3>
<p>I am assuming that you have created an IAM user that has EC2, CloudFront, and IAM access and configured AWS CLI.</p>
<p><strong>Create Key Pair</strong></p>
<p>To create a key pair we have to use the ec2 service of the AWS CLI. By running</p>
<p><strong><em>$ aws ec2 help</em></strong></p>
<p>you will see all the subcommands under ec2.</p>
<p>To create a key pair and save it in the proper format(.pem), use the below command.</p>
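<p>The key-creation step above can be sketched as follows; the key name is a placeholder, and this requires a configured AWS CLI with valid credentials.</p>

```shell
# create a key pair and save the private key in .pem format
aws ec2 create-key-pair --key-name hakey \
    --query "KeyMaterial" --output text > hakey.pem
```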
<p>Note: Windows users should use PowerShell for the above commands. To open PowerShell from normal cmd, enter the <em>PowerShell</em> command, or press <em>Win + R</em> and enter <em>PowerShell</em>.</p>
<p><strong>Create a Security Group</strong></p>
<p>You may want to create the security group in a specific VPC. For that, we need a VPC id. Let's find it using the AWS CLI.</p>
<p>The describe-vpcs command returns the list of VPCs and the tags associated with each VPC.<br />Copy the id of the VPC you want to create the security group in and save it somewhere.<br />Now let's create a security group.</p>
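<p>These two steps can be sketched with the commands below; the group name, description, and VPC id are placeholders (this requires a configured AWS CLI).</p>

```shell
# list VPCs with their ids and tags
aws ec2 describe-vpcs

# create the security group in the chosen VPC
aws ec2 create-security-group --group-name ha-web-sg \
    --description "Security group for the HA web architecture" \
    --vpc-id vpc-0123456789abcdef0
```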
<p>Don't forget to edit the command according to your need. You can change your name, description, and vpc id.</p>
<p><strong>Create security group rules</strong></p>
<p>We want to add rules to the security group that we created above.<br />If you already copied the security group id, you can skip the lookup step.<br />Otherwise, let's find the security group name and id.</p>
<p>The describe command gives you a list of security group names and ids.<br />Copy the id of the group you want to add a new rule to and paste it into the below command.</p>
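<p>The rule-creation step can be sketched as follows; the group id is a placeholder, and the rule opens SSH to the world (tighten the CIDR as needed).</p>

```shell
# allow inbound SSH (port 22) from anywhere
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
```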
<p>The above command creates a new rule for SSH; you can always customize it as per your need.<br />You can check more examples of multiple rules <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/reference/ec2/authorize-security-group-ingress.html#examples">here...</a></p>
<p>Check whether the rule was added successfully using the describe subcommand.</p>
<p><strong>Create a new instance</strong></p>
<p>You may want to launch an instance in a specific availability zone. Let's see which AZs are available in the region you specified while configuring.</p>
<p>We want to attach one more EBS volume to our instance, and for that, both must be in the same availability zone, so fix your availability zone accordingly.<br />The AWS CLI has well-described documentation for ec2, so let's take its help. It gives more examples and the necessary information, and you will often need something different from what I used here, so go and use</p>
<p><strong><em>$ aws ec2 help</em></strong></p>
<p>To launch an instance we need the AMI id of the image that we want to launch.</p>
<p>$ <strong>aws ec2 describe-images</strong></p>
<p>I have used the key pair and security group that we created above. I chose the ap-south-1a availability zone and an Amazon Linux AMI to launch this instance.<br />You can do much more customization to this command; check this AWS documentation <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html">(open)</a>.<br />Copy the instance id and save it somewhere; we will see later where we need it.</p>
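<p>The launch step described above might look like the following sketch; the AMI id and security group id are placeholders, while the key name, instance type, and availability zone follow the article.</p>

```shell
# launch one instance with the key pair and security group created earlier
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro --count 1 \
    --key-name hakey \
    --security-group-ids sg-0123456789abcdef0 \
    --placement AvailabilityZone=ap-south-1a
```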
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729362634/874bdef6-4f60-4c8a-9b48-0837d9eebe3f.jpeg" alt /></p>
<p>run instance output</p>
<p><strong>Create EBS volume</strong></p>
<p>Our goal is to create a new volume and attach it to the instance that we just created, so we need to create it in the same availability zone.</p>
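<p>A sketch of the volume-creation command, using the same availability zone as the instance (the size is an example value):</p>

```shell
# create a 1 GiB EBS volume in the instance's availability zone
aws ec2 create-volume --size 1 --availability-zone ap-south-1a
```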
<p>Update the above command as per your need (size is in GiB).<br />Copy the volume id and save it somewhere.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729365504/a81c768f-629a-4a54-9d0e-42dc97986a62.jpeg" alt /></p>
<p><strong>Attach EBS volume to the instance.</strong></p>
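<p>The attach step can be sketched as follows; both ids are placeholders, and the device name may vary by instance type and OS.</p>

```shell
# attach the new volume to the instance
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvdf
```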
<p>Update the instance id and volume id in the above command with the values we copied and saved earlier. Remember?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729367183/55455a86-8fe7-4148-a0d7-160b2beb0243.jpeg" alt /></p>
<h3 id="heading-create-an-s3-bucket">Create an S3 bucket</h3>
<p>What is S3? I mean what google says…<br /><em>Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services that provides object storage through a web service interface.</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729369858/aea62aa1-d294-482e-b29a-7c78a42deede.jpeg" alt /></p>
<p>Here we are using the s3api commands to create a new bucket with the name dev-creatorsbyheart.com.</p>
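<p>The bucket-creation step above can be sketched like this (bucket names must be globally unique; regions other than us-east-1 also need a --create-bucket-configuration LocationConstraint):</p>

```shell
aws s3api create-bucket --bucket dev-creatorsbyheart.com --region us-east-1
```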
<p><img src="https://cdn-images-1.medium.com/max/1200/1*0tAIeOhkD6Q9lp3xmQhy2w.jpeg" alt /></p>
<p>Update the S3 bucket policy so we can access it publicly.</p>
<ul>
<li><strong>Create a policy</strong></li>
</ul>
<p>Create a new file, name it s3_bucket_policy.txt, and paste the below code. Don't forget to change the bucket name in the "<strong><em>Resource</em></strong>" attribute.</p>
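<p>The policy file was embedded in the original post; a standard public-read policy for this bucket looks like the following (swap in your own bucket name):</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dev-creatorsbyheart.com/*"
    }
  ]
}
```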
<p>Update the bucket policy to make the objects publicly available for the GetObject action.</p>
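<p>The policy update above can be applied with put-bucket-policy, pointing at the policy file created earlier:</p>

```shell
aws s3api put-bucket-policy --bucket dev-creatorsbyheart.com \
    --policy file://s3_bucket_policy.txt
```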
<p><img src="https://cdn-images-1.medium.com/max/1200/1*Sd1-dOC06lCa2X64hEsUAA.jpeg" alt /></p>
<p>Upload files to the S3 bucket using the s3 subcommand.</p>
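<p>Applying the policy and uploading a file can be sketched as follows (the image name is an example):</p>

```shell
# Sketch: apply the policy, then upload an image with the s3 subcommand.
BUCKET="dev-creatorsbyheart.com"
if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-policy \
    --bucket "$BUCKET" \
    --policy file://s3_bucket_policy.txt \
    || echo "put-bucket-policy failed (check credentials)"

  aws s3 cp aws_image.jpg "s3://$BUCKET/" \
    || echo "upload failed (check credentials and file path)"
fi
```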
<p>Check whether the image is visible using its URL. <a target="_blank" href="https://s3.amazonaws.com/dev-creatorsbyheart.com/aws_image.jpg">https://s3.amazonaws.com//&lt;</a>object&gt;</p>
<p>If everything is working, it is time to set up CloudFront for S3.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729375889/da5e15ea-5111-49f7-acef-b2edeb93344e.jpeg" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729377419/d34ba705-0968-4118-8984-5cf74cbc5106.jpeg" alt /></p>
<h3 id="heading-create-cloudfront-distribution-for-s3-bucket">Create CloudFront distribution for S3 bucket</h3>
<p>What is CloudFront? This is what Google says…</p>
<p><em>Amazon CloudFront is a content delivery network offered by Amazon Web Services. Content delivery networks provide a globally-distributed network of proxy servers that cache content, such as web videos or other bulky media, more locally to consumers, thus improving access speed for downloading the content.</em></p>
<ul>
<li>Create a new file containing the below code and name it cf_config.json</li>
</ul>
<p>Create the CloudFront distribution with the above configuration. Update the file first and replace the bucket name in the target origin.</p>
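<p>A minimal cf_config.json of roughly this shape works, followed by the create call (a sketch; the bucket name and CallerReference are values to adjust, and this uses the legacy ForwardedValues cache settings):</p>

```shell
# Sketch: write a minimal CloudFront distribution config for the S3 origin.
cat > cf_config.json <<'EOF'
{
  "CallerReference": "cf-dev-creatorsbyheart-1",
  "Comment": "Distribution for dev-creatorsbyheart.com",
  "Enabled": true,
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "S3-dev-creatorsbyheart.com",
        "DomainName": "dev-creatorsbyheart.com.s3.amazonaws.com",
        "S3OriginConfig": { "OriginAccessIdentity": "" }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "S3-dev-creatorsbyheart.com",
    "ViewerProtocolPolicy": "allow-all",
    "MinTTL": 0,
    "ForwardedValues": {
      "QueryString": false,
      "Cookies": { "Forward": "none" }
    },
    "TrustedSigners": { "Enabled": false, "Quantity": 0 }
  }
}
EOF

if command -v aws >/dev/null 2>&1; then
  aws cloudfront create-distribution \
    --distribution-config file://cf_config.json \
    || echo "create-distribution failed (check credentials and config)"
fi
```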
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729379919/30292a47-931c-46d4-b640-cbfcdb891315.jpeg" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729382659/6f8849c2-9a4a-4d87-92dc-59690cce99d4.jpeg" alt /></p>
<p>It will create a CloudFront distribution for the S3 bucket. Now we can also access S3 objects using the CloudFront URL.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729385202/6aa075ec-365c-4917-bd78-93b65f143023.jpeg" alt /></p>
<p><strong>If you are with me till here… give a pat on the back… good job.</strong></p>
<p>Give this URL to the developers so they can use it to reference resources in the source code. Served through the CloudFront URL, resources load faster and with lower latency.</p>
<p>Now you can do ssh and configure the instance as a web server and mount EBS volume to document root and deploy source code.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729388609/b1861fa9-515e-4d39-b0b5-924f5b62bd73.png" alt /></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*4TClUEBzukHTHZFNrsCVIw.png" alt /></p>
<p>I hope you learned something new and find the AWS CLI interesting.<br />Let me know your thoughts about this article, and how do you plan to use the AWS CLI?</p>
<p>Thank you</p>
<h4 id="heading-about-the-writer">About the writer:</h4>
<p>Shubham loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.<br />Visit his Medium home page to read more insights from him.</p>
]]></content:encoded></item><item><title><![CDATA[What is Ansible ? How ansible is helping companies in automation ?]]></title><description><![CDATA[Ansible 1:
How we can use Ansible to automate IT and DevOps?

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs...]]></description><link>https://blog.shubhcodes.tech/what-is-ansible-acb573304acd</link><guid isPermaLink="true">https://blog.shubhcodes.tech/what-is-ansible-acb573304acd</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Tue, 01 Dec 2020 11:44:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728964552/69b6b59e-03ca-47ef-bfb5-4a28c34d7ec3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-ansible-1">Ansible 1:</h4>
<h4 id="heading-how-we-can-use-ansible-to-automate-it-and-devops">How can we use Ansible to automate IT and DevOps?</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728949270/8341557a-d13e-47af-8e32-0161dd85849a.png" alt /></p>
<p>Ansible is a radically simple IT automation engine that automates <a target="_blank" href="https://www.ansible.com/provisioning?hsLang=en-us">cloud provisioning</a>, <a target="_blank" href="https://www.ansible.com/configuration-management?hsLang=en-us">configuration management</a>, <a target="_blank" href="https://www.ansible.com/application-deployment?hsLang=en-us">application deployment</a>, <a target="_blank" href="https://www.ansible.com/orchestration?hsLang=en-us">intra-service orchestration</a>, and many other IT needs.</p>
<p><strong>Ansible</strong> is an absolutely <strong>free</strong> and open-source tool that is used for the above-mentioned purposes.</p>
<p><strong><em>Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time.</em></strong></p>
<p>It uses no agents and no additional custom security infrastructure, so it’s easy to deploy. Most importantly, it uses a very simple language (YAML, in the form of Ansible Playbooks) that allows you to describe your automation jobs in a way that approaches plain English.</p>
<h3 id="heading-what-do-we-can-automate-using-ansible">What can we automate using Ansible?</h3>
<p>Ansible is an open-source community project sponsored by Red Hat, it’s the simplest way to automate IT. Ansible is the only automation language that can be used across <strong>entire IT teams</strong> from systems and network administrators to developers and managers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728951693/6516f320-6fb5-46f2-956f-5d941d119a42.png" alt /></p>
<p>Ansible functionality addresses key security and compliance use cases for service providers, including:</p>
<ul>
<li>Standardizing and enforcing auditing and compliance.</li>
<li>Maintaining firewall compliance.</li>
<li>Centralizing all system-related changes.</li>
<li>Upgrading drivers and firmware.</li>
<li>Automating remediation and patching of system vulnerabilities, such as Wannacry, <a target="_blank" href="https://www.redhat.com/en/blog/what-are-meltdown-and-spectre-heres-what-you-need-know">Spectre, and Meltdown</a>.</li>
<li>Detecting system vulnerabilities and needed remediation by gathering Ansible facts and system event logs and exporting the information to system monitoring tools.</li>
</ul>
<h3 id="heading-ansible-for-devops"><strong>Ansible for DevOps</strong></h3>
<p>Automation has transformed factories. It gave manufacturing the ability to perform work faster, more efficiently, and at a higher quality, increasing both productivity and quality.</p>
<p>Factories that failed to automate fell behind in a highly competitive market. That is why automation became essential for businesses to survive.</p>
<p>IT departments are the modern factories powering today’s digital businesses. And just as today’s factories can’t compete without automation, automation will soon become imperative for IT organizations, because:</p>
<ol>
<li>Application delivery is fuel for growth</li>
<li>Automation simplifies process and automation never sleeps</li>
<li>Don’t repeat the same task over and over</li>
<li>Speed up your workflow</li>
</ol>
<p><strong>You need a tool that can act as the glue layer, automating across services and applications no matter where they are. Once one person on your team learns how to do something, they can capture their solution in an Ansible Playbook and enable everyone to use it.</strong></p>
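<p>To make that concrete, a captured solution can be as small as this playbook, written here via a shell heredoc (a sketch; the inventory group and package name are assumptions):</p>

```shell
# Sketch: a tiny playbook that installs and starts a web server.
cat > site.yml <<'EOF'
---
- name: Configure web servers
  hosts: webservers          # assumed inventory group
  become: true
  tasks:
    - name: Install httpd
      ansible.builtin.package:
        name: httpd
        state: present
    - name: Start and enable httpd
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
EOF

# Run it against your inventory (requires Ansible and an inventory file).
if command -v ansible-playbook >/dev/null 2>&1 && [ -f inventory ]; then
  ansible-playbook -i inventory site.yml
fi
```

<p>Anyone on the team can now rerun this playbook and get the same result, which is exactly the repeatability described above.</p>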
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728954839/8cc3260b-b8cf-4f03-a5aa-910fc75f953e.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728957222/a7723185-5053-4e36-830a-e45afbbe109d.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728959431/7e225636-10af-4eb4-a5c7-0a07f6c0c683.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728961929/01a4955a-409d-48ce-9479-3c4936a463d1.png" alt /></p>
<p>Simple to adopt, simple to use, simple to understand — Ansible is designed around the way people work and the way people work together across the entire organization:</p>
<h4 id="heading-1-dev">1. Dev</h4>
<p><strong>Challenge:</strong> Dev spending too much time focusing on tooling required to deliver capabilities and not enough time focusing on results</p>
<p><strong>Need:</strong> To respond and scale in pace with demand</p>
<p><strong>How does Ansible automation help?</strong></p>
<ul>
<li>Accelerates feedback loop</li>
<li>Discover bugs sooner</li>
<li>Reduces risk of tribal knowledge</li>
<li>Faster, coordinated, and more reliable deployments</li>
</ul>
<h4 id="heading-2-ops">2. Ops</h4>
<p><strong>Challenge:</strong> Need technology that can be used across many different groups with many different skill sets</p>
<p><strong>Need:</strong> Centrally govern and monitor disparate systems and workloads</p>
<p><strong>How does Ansible automation help?</strong></p>
<ul>
<li>Reduce shadow IT</li>
<li>Reduce deployment time</li>
<li>Provision systems faster</li>
<li>Reduce risk of tribal knowledge</li>
<li>Deploy automated patching</li>
</ul>
<h4 id="heading-3-qasecurity">3. QA/Security</h4>
<p><strong>Challenge:</strong> Tracking of what changed where and when</p>
<p><strong>Need:</strong> Reduce risk of human error</p>
<p><strong>How does Ansible automation help?</strong></p>
<ul>
<li>Establish identical QA, Dev, and Prod environments for faster, coordinated, and more reliable deployments</li>
<li>Establish security baselines</li>
<li>Increase visibility and accuracy for compliance requirements</li>
<li>Relieve the burden of traditional documentation by creating living, testable documentation</li>
</ul>
<h4 id="heading-3-business">3. Business</h4>
<p><strong>Challenge:</strong> Getting to market faster</p>
<p><strong>Need:</strong> Create a competitive advantage</p>
<p><strong>How does Ansible automation help?</strong></p>
<ul>
<li>Align IT with the business</li>
<li>Increase time for innovation and strategy</li>
<li>Reduce costs of onboarding new team members</li>
<li>Increase cross-team collaboration</li>
</ul>
<p>I hope you learned something new and find Ansible interesting.<br />Let me know your thoughts about Ansible, and how do you plan to use it?</p>
<p>Thank you.</p>
<p>About the writer:<br />Shubham loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.<br />Visit his Medium home page to read more insights from him.</p>
]]></content:encoded></item><item><title><![CDATA[How ISRO uses Machine learning?]]></title><description><![CDATA[ML-1
What is AI?
Artificial intelligence (AI), is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans.AI systems will typically demonstrate at least some of the following behaviors associated with human intellig...]]></description><link>https://blog.shubhcodes.tech/how-isro-uses-machine-learning-25be23430713</link><guid isPermaLink="true">https://blog.shubhcodes.tech/how-isro-uses-machine-learning-25be23430713</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Tue, 20 Oct 2020 12:51:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728973353/13cebb66-590b-408e-8bbe-a3236af41360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728973353/13cebb66-590b-408e-8bbe-a3236af41360.png" alt /></p>
<h4 id="heading-ml-1">ML-1</h4>
<h3 id="heading-what-is-ai">What is AI?</h3>
<p>Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans. AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation, and, to a lesser extent, social intelligence and creativity.</p>
<p><strong>Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.</strong></p>
<h3 id="heading-what-is-machine-learning">What is Machine Learning?</h3>
<p><strong>Machine learning</strong> is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. <strong>Machine learning</strong> focuses on the development of computer programs that can access data and use it to learn for themselves.<br />Large amounts of historical data are fed to the machine learning model, which then makes predictions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728976575/c5b82b20-7759-4c0a-b918-a201f3f2d7a6.jpeg" alt /></p>
<h3 id="heading-how-isro-uses-artificial-intelligence-and-machine-learning">How ISRO uses Artificial Intelligence and Machine Learning?</h3>
<h4 id="heading-chandrayaan-2-ai-powered-pragyan-rover">➤Chandrayaan 2: AI-powered ‘Pragyan’ Rover</h4>
<ul>
<li>On 22 July 2019, ISRO launched Chandrayaan 2 spacecraft into an earth orbit as part of the second lunar mission.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728978698/ab985dfb-9f50-4d12-abc7-ce3c832ba333.jpeg" alt /></p>
<ul>
<li>In Chandrayaan 2, the Pragyan rover was AI-powered and could communicate only with the lander. It included motion technology developed by IIT-Kanpur researchers to help the rover manoeuvre on the surface of the moon and aid in landing. The algorithm helps the rover trace water and other minerals on the lunar surface, and also send pictures for research and examination.</li>
<li>The rover is a six-wheeled robotic vehicle capable of conducting in-situ payload experiments. It is powered by AI tools and frameworks, uses solar energy for its functioning, and can communicate only with the lander. The Pragyan rover payloads consist of the Alpha Particle X-ray Spectrometer (APXS) and the Laser Induced Breakdown Spectroscope (LIBS).</li>
</ul>
<p><strong>Chandrayaan-2 Pragyan shows how AI is helping space exploration.</strong></p>
<h4 id="heading-multi-object-tracking-radar-sdsc-shar">➤Multi Object Tracking Radar (SDSC-SHAR)</h4>
<ul>
<li>The challenge was to build a space-object tracking solution to sustain satellites through the difficult terrain of open space, where millions of unknown objects could impact every ISRO-sponsored mission.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728981063/9bff3494-490d-4e8b-b116-c7ab55082549.jpeg" alt /></p>
<ul>
<li>The objective is to build Multi Object Tracking Radar. <strong><em>ISRO first developed Target identification using machine learning algorithms from MOTR radar data.</em></strong></li>
<li>Radar data consists of Range, Azimuth, Elevation and Signal to Noise Ratio (SNR). From Range and SNR correlation target size can be classified. From SNR variation alone in a single-track duration, target nature can be established. Using <strong>Machine Learning algorithms</strong>, a model should be trained on radar tracked data (Range, Azimuth, Elevation and SNR). The trained model should identify a target nature (controlled or uncontrolled) and size. <strong>Using standard libraries in Python Machine Learning Algorithms have become realizable models.</strong></li>
</ul>
<h4 id="heading-image-processing-and-pattern-recognition-iirs">➤Image Processing and Pattern Recognition (IIRS)</h4>
<p><strong>In the 1980s and 1990s, ISRO’s challenge was to build an efficient and cost-neutral image processing and pattern recognition solution for the upcoming missions of the next decade. Hence, unmanned Image Processing and Pattern Recognition (IIRS).</strong></p>
<ul>
<li>ISRO leveraged <strong>Artificial Neural Networks (ANN)</strong>, the generic name for a large class of machine learning algorithms, most of which are trained with an algorithm called <strong>back propagation</strong>. ISRO’s team explored various <strong>deep learning</strong> algorithms across applications of earth observation data, such as self-learning-based classification, prediction, multi-sensor temporal data for crop/forest species identification, and remote sensing time-series data analysis.</li>
</ul>
<h4 id="heading-ai-enabled-monitoring-system-for-forest-conservation">➤AI-enabled monitoring system for forest conservation</h4>
<ul>
<li>The National Remote Sensing Centre (NRSC) of ISRO has designed and developed a monitoring system to observe forest cover change and combat deforestation by leveraging optical remote sensing, geographic information systems, AI, and automation technologies.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728984232/318e0335-db98-4425-91c5-6f6e2b4657ff.jpeg" alt /></p>
<ul>
<li>ISRO created a machine learning model that checks the imagery, detects small-scale deforestation, and improves the frequency of reporting.</li>
<li>It also enables scientists to process satellite imagery faster and reduces the time frame for new reports from one year to one month. NRSC aims at preventing negative changes in the green cover and protecting wildlife.</li>
<li>The NRSC technology makes it possible for monitoring forest cover changes over small areas of one hectare by improving the resolution from 50 meters to 30 meters through optical remote sensing which provides insights into the smallest of deforestation activity.</li>
</ul>
<h4 id="heading-autonomously-navigating-robot-for-space-mission-iisu">➤Autonomously Navigating Robot for Space Mission (IISU)</h4>
<ul>
<li><strong>ISRO’s challenge was to build and send unmanned robots to help fetch critical space information in multiple missions throughout the year.</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728986351/3781b9a4-55bb-4591-afb9-e68dee25f231.jpeg" alt /></p>
<ul>
<li>A half Vyomnoid with sensing and perception of its surroundings through 3D vision, and dexterous manipulative abilities to carry out defined crew functions in an unmanned mission or assist the crew in manned missions.</li>
<li>Design &amp; Realization of FULL Vyomnoid with features that include full autonomy with 3D vision, dynamically controlled movement in zero ‘g’, <strong>Artificial Intelligence / Machine Learning</strong> enabled real time decision making with vision optimization and path planning algorithms.</li>
<li>ISRO leveraged Artificial Intelligence enabled Path Navigation algorithms to solve this.</li>
</ul>
<h4 id="heading-more">➤MORE:</h4>
<ul>
<li>ISRO has advised scientists and researchers to focus on building generalized parameter-extraction software based on artificial neural network (ANN) learning methods that utilize the multidimensional approximation capability of <strong>ANNs</strong> to map the characteristics of microwave filters. A communication satellite contains a <strong>large number of microwave filters</strong> that must undergo extensive tuning after fabrication.</li>
<li>ISRO is currently planning to develop high-end propulsion technology to ensure cost-effective re-usable, recoverable, re-startable and reliable space launches with AI-based sensors equipped in propellants.</li>
</ul>
<h4 id="heading-sources"><strong>Sources:</strong></h4>
<ul>
<li>ISRO Research paper: (o<a target="_blank" href="https://www.isro.gov.in/sites/default/files/article-files/research-and-academia-interface/supported-areas-of-research/research_areas_in_space.pdf">pen</a>)</li>
<li>indiaai.gov.in (<a target="_blank" href="https://indiaai.gov.in/">open</a>)</li>
<li>Mint.com (<a target="_blank" href="https://www.livemint.com/technology/tech-news/chandrayaan-2-pragyan-shows-how-ai-is-helping-space-exploration-1567764065716.html">open</a>)</li>
<li>un-spider.org. (<a target="_blank" href="https://un-spider.org/news-and-events/news/new-monitoring-system-strengthens-forest-conservation-india">open</a>)</li>
<li>sac.gov.in (<a target="_blank" href="https://www.sac.gov.in/respond/doc-pdf/Research-Areas-of-SAC.pdf">open</a>)</li>
</ul>
<p><em>So this was an overview of how ISRO is solving problems in every way possible with AI, and how our lives are evolving and getting better day by day with its help. I hope you found this blog helpful. Your feedback is valuable and will help me improve… clap if you like it.</em></p>
]]></content:encoded></item><item><title><![CDATA[Deploying Angular App to AWS S3 with CloudFront using AWS CLI]]></title><description><![CDATA[aws -10
Deploy angular website on aws using aws CLI
Introduction 🤩
In this blog, we will see how to deploy the angular app on AWS fastest way using the AWS CLI tool. As you came here. so I assume that you are aware of AWS and Angular. so without was...]]></description><link>https://blog.shubhcodes.tech/deploying-angular-app-to-aws-s3-with-cloudfront-using-aws-cli-ace33350a950</link><guid isPermaLink="true">https://blog.shubhcodes.tech/deploying-angular-app-to-aws-s3-with-cloudfront-using-aws-cli-ace33350a950</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Sun, 18 Oct 2020 18:12:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729420396/f07e3f79-bb47-407e-91c1-3715b9f7f342.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-aws-10">aws -10</h4>
<h4 id="heading-deploy-angular-website-on-aws-using-aws-cli">Deploy angular website on aws using aws CLI</h4>
<h4 id="heading-introduction">Introduction 🤩</h4>
<p>In this blog, we will see the fastest way to deploy an Angular app on AWS using the AWS CLI tool. Since you are here, I assume you are familiar with AWS and Angular, so without wasting time on theory, let's get into action.</p>
<h4 id="heading-lets-plan-before-the-real-action">Let’s plan before the real action</h4>
<ul>
<li>Create an S3 bucket</li>
<li>Create and update permission of bucket and make it public</li>
<li>Start static website hosting service of S3</li>
<li>Create CloudFront distribution for S3 bucket</li>
<li>Create an Angular project</li>
<li>Upload your code and assets to S3</li>
<li>Invalidate the CloudFront</li>
<li>Get CloudFront URL</li>
</ul>
<p>It’s simple, isn’t it? Yes… let's start then.</p>
<h4 id="heading-assumption"><strong>Assumption</strong></h4>
<p>I am assuming that you have read this blog and configured the AWS CLI.</p>
<p><a target="_blank" href="https://medium.com/@developer.shubham.rasal/what-is-aws-cli-how-to-use-aws-cli-6f1bdedabd2b"><strong>What is AWS CLI? How to use AWS CLI?</strong></a><br /><em>Launch EC2 instance, create EBS and attach EBS volume to EC2 instance using aws CLI.</em> (medium.com)</p>
<h3 id="heading-action-mode">Action Mode 🔥</h3>
<p>What is S3? Here is what Google says...<br /><em>Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services that provides object storage through a web service interface.</em></p>
<h4 id="heading-create-an-s3-bucket"><strong>Create an S3 bucket</strong></h4>
<p>The bucket name must be globally unique, so try to be a little creative, and specify the region in which you want to create the bucket.</p>
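<p>For example (a sketch; the bucket name and region are placeholders to change):</p>

```shell
# Sketch: create a uniquely named bucket in your chosen region.
BUCKET="my-angular-app-bucket"   # placeholder: must be globally unique
REGION="ap-south-1"              # placeholder: your region
if command -v aws >/dev/null 2>&1; then
  aws s3api create-bucket \
    --bucket "$BUCKET" \
    --region "$REGION" \
    --create-bucket-configuration LocationConstraint="$REGION" \
    || echo "create-bucket failed (try another name or check credentials)"
fi
```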
<h4 id="heading-create-and-update-permission-of-bucket-and-make-it-public">Create and update permission of bucket and make it public</h4>
<ul>
<li><strong>Create a policy</strong></li>
</ul>
<p>Create a new file named s3_bucket_policy.txt and paste the code below into it. Don’t forget to change the bucket name in the “<strong><em>Resource</em></strong>” attribute.</p>
<ul>
<li><strong>Update bucket policy</strong></li>
</ul>
<p>Now that we have created the policy we want to update the s3 bucket policy.</p>
<p>Check that the file exists in the current folder otherwise provide the full path.</p>
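<p>Together, the two steps can be sketched as follows (the bucket name is a placeholder, as above):</p>

```shell
# Sketch: write a minimal public-read policy, then apply it to the bucket.
cat > s3_bucket_policy.txt <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-angular-app-bucket/*"
    }
  ]
}
EOF

if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-policy \
    --bucket my-angular-app-bucket \
    --policy file://s3_bucket_policy.txt \
    || echo "put-bucket-policy failed (check credentials)"
fi
```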
<h4 id="heading-start-static-website-hosting-service-of-s3">Start static website hosting service of S3</h4>
<ul>
<li>Create one new file with the code below and save it as ‘website_configuration.txt’. Using this, we will enable and configure some settings of the S3 bucket.</li>
</ul>
<p>Update the error document and index document to the pages you want to set.<br />In my case, I set index.html for both, but you may have your own error page.</p>
<ul>
<li><strong>Enable Static website hosting</strong></li>
</ul>
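<p>Both steps can be sketched as follows (index.html serves as both the index and error document, as described above):</p>

```shell
# Sketch: write the website configuration, then enable static hosting.
cat > website_configuration.txt <<'EOF'
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "index.html" }
}
EOF

if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-website \
    --bucket my-angular-app-bucket \
    --website-configuration file://website_configuration.txt \
    || echo "put-bucket-website failed (check credentials)"
fi
```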
<h4 id="heading-create-cloudfront-distribution-for-s3-bucket">Create CloudFront distribution for S3 bucket 🤯</h4>
<p>Here comes the interesting part.</p>
<p>What is CloudFront? This is what Google says…</p>
<p><em>Amazon CloudFront is a content delivery network offered by Amazon Web Services. Content delivery networks provide a globally-distributed network of proxy servers that cache content, such as web videos or other bulky media, more locally to consumers, thus improving access speed for downloading the content.</em></p>
<ul>
<li>Create a new file containing the below code and name it cf_config.json</li>
</ul>
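<p>A cf_config.json of roughly this shape matches what is described below: the bucket in DomainName, plus custom error pages for 403 (Forbidden) and 400 (Bad Request) that fall back to index.html (a sketch; the bucket name and CallerReference are placeholders, and it uses the legacy ForwardedValues cache settings):</p>

```shell
# Sketch: a minimal distribution config with SPA-friendly error pages.
cat > cf_config.json <<'EOF'
{
  "CallerReference": "cf-angular-app-1",
  "Comment": "Distribution for my-angular-app-bucket",
  "Enabled": true,
  "DefaultRootObject": "index.html",
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "S3-my-angular-app-bucket",
        "DomainName": "my-angular-app-bucket.s3.amazonaws.com",
        "S3OriginConfig": { "OriginAccessIdentity": "" }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "S3-my-angular-app-bucket",
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 0,
    "ForwardedValues": {
      "QueryString": false,
      "Cookies": { "Forward": "none" }
    },
    "TrustedSigners": { "Enabled": false, "Quantity": 0 }
  },
  "CustomErrorResponses": {
    "Quantity": 2,
    "Items": [
      { "ErrorCode": 403, "ResponsePagePath": "/index.html",
        "ResponseCode": "200", "ErrorCachingMinTTL": 300 },
      { "ErrorCode": 400, "ResponsePagePath": "/index.html",
        "ResponseCode": "200", "ErrorCachingMinTTL": 300 }
    ]
  }
}
EOF
```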
<p>If you want more customization, the AWS CLI can generate a configuration skeleton for you.</p>
<p>The skeleton command creates a new cf_config.json file with an empty skeleton. Fill in the details and you are ready.<br />I also used this approach.</p>
<p>I have used the S3 bucket name in DomainName, with custom error pages for Forbidden and Bad Request responses; you can add more according to your needs.<br />To know in detail what I did, AWS has very well-written documentation.</p>
<p><strong>$ aws cloudfront create-distribution help</strong></p>
<ul>
<li><strong>Create CloudFront distribution</strong></li>
</ul>
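<p>The skeleton and the create call look like this (a sketch):</p>

```shell
# Sketch: dump an empty skeleton to edit, then create the distribution.
CONFIG="cf_config.json"
if command -v aws >/dev/null 2>&1; then
  aws cloudfront create-distribution --generate-cli-skeleton > cf_skeleton.json

  aws cloudfront create-distribution \
    --distribution-config "file://$CONFIG" \
    || echo "create-distribution failed (check credentials and $CONFIG)"
fi
```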
<p>After a successful run, it will give you the details of the CloudFront distribution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729401410/1ee4b84f-5c0f-453a-8671-1f5f05021bdd.jpeg" alt /></p>
<p>Note: if you create the file using the skeleton command, it wraps everything in a “DistributionConfig” attribute with extra braces; remove that wrapper so the file matches the code above. Otherwise, the error looks like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729403246/658c0319-38d7-4025-b1ea-7b2e13425248.png" alt /></p>
<h4 id="heading-if-you-come-here-you-deserve-a-pat-on-the-back">if you come here… you deserve a pat on the back 👏</h4>
<h4 id="heading-create-an-angular-project"><strong>Create an Angular project.</strong></h4>
<p>I assume that you have an Angular project, but if not, let's create one.</p>
<p><strong>$ ng new </strong></p>
<p><strong>Create a build for deploy</strong></p>
<p>$ ng build --prod</p>
<p>It will create the build in its output path; in my case, the dist folder in the project root.</p>
<h4 id="heading-upload-your-code-and-assets-to-s3">Upload your code and assets to S3</h4>
<p>Time to upload the code to the S3 bucket. For that, we will use the s3 subcommand of the AWS CLI.<br />Update your folder name and bucket name in the command below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729405218/72b218f6-7ef0-40ea-ae80-d4ad4d4b7836.png" alt /></p>
<p>The command above copies all the files excluding ‘.svg’ images.<br /><strong>Now copy the .svg files to our S3 bucket.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729407692/3fd83bea-902f-4696-a3ff-044253ddd1db.png" alt /></p>
<p>You may wonder why we use two different commands to copy files from the same folder: why not copy everything in one go? If we copy all the files in one command, the .svg files do not load on the website; we have to add a content-type flag while copying the SVG files.</p>
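<p>The two copy commands can be sketched as follows (folder and bucket names are examples):</p>

```shell
# Sketch: copy the build excluding SVGs, then copy SVGs with the right type.
SRC="dist/my-app"                # example: Angular build output folder
BUCKET="my-angular-app-bucket"   # example bucket name
if command -v aws >/dev/null 2>&1; then
  aws s3 cp "$SRC" "s3://$BUCKET/" \
    --recursive --exclude "*.svg" \
    || echo "copy failed (check credentials and paths)"

  # SVGs need an explicit content type, or browsers will not render them.
  aws s3 cp "$SRC" "s3://$BUCKET/" \
    --recursive --exclude "*" --include "*.svg" \
    --content-type "image/svg+xml" \
    || echo "svg copy failed (check credentials and paths)"
fi
```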
<h4 id="heading-invalidate-the-cloudfront">Invalidate the CloudFront</h4>
<p>As you know, CloudFront caches content at edge locations. So if you upload files to S3 again and again, for example through CI/CD, you need to invalidate the CloudFront cache.<br /><strong>Invalidating</strong> removes the files from edge caches; the next time a viewer requests a file, <strong>CloudFront</strong> returns to the origin to fetch the latest version.</p>
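<p>The invalidation call is a one-liner (a sketch; the distribution ID is a placeholder):</p>

```shell
# Sketch: invalidate all cached paths so edges fetch fresh objects.
DIST_ID="E1234567890ABC"   # placeholder: your distribution ID
if command -v aws >/dev/null 2>&1; then
  aws cloudfront create-invalidation \
    --distribution-id "$DIST_ID" \
    --paths "/*" \
    || echo "create-invalidation failed (check credentials and ID)"
fi
```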
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729411161/dd7fe052-337b-4be7-b87f-53adad1f9951.png" alt /></p>
<h4 id="heading-get-cloudfront-url">Get CloudFront URL</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729413616/c11aac95-305c-4522-a6e1-9bd645bc0d20.png" alt /></p>
<p>There are multiple ways to parse and get the output; I like to keep it simple.<br />For more details, you can check this article.</p>
<p><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output.html"><strong>Controlling command output from the AWS CLI</strong></a><br /><em>This topic describes the different ways to control the output from the AWS Command Line Interface (AWS CLI).</em> (docs.aws.amazon.com)</p>
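<p>One simple way to pull just the domain names is a --query filter (a sketch):</p>

```shell
# Sketch: list CloudFront domain names as plain text.
QUERY="DistributionList.Items[].DomainName"
if command -v aws >/dev/null 2>&1; then
  aws cloudfront list-distributions \
    --query "$QUERY" \
    --output text \
    || echo "list-distributions failed (check credentials)"
fi
```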
<p>Copy the CloudFront URL and paste it into the browser tab. let's hope for the best…..</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729417477/f7829e10-7d00-48c6-992f-362f22be83c4.png" alt /></p>
<h4 id="heading-congratulations-it-is-working-superbly-now-you-deserve-a-break-go-and-grab-a-cup-of-coffee">Congratulations...😻 It is working superbly. Now you deserve a break... go and grab a cup of coffee ☕️</h4>
<h4 id="heading-next-steps">Next steps</h4>
<ul>
<li><a target="_blank" href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html">Routing traffic to an Amazon CloudFront web distribution by using your domain name</a>.</li>
<li>Implement CICD for an angular project for multiple environments.</li>
<li>Update CloudFront and S3 using AWS CLI</li>
</ul>
<h3 id="heading-wrapping-up">Wrapping Up 🥳</h3>
<p>We have successfully deployed our Angular app on AWS S3 with CloudFront.<br />You may think that we invested a lot of time in finding the attributes and values for our needs, and that we could do all this in 15 minutes using the management console, so why use the AWS CLI? It is a one-time investment that will save a lot of time when you repeat these steps for different projects or configurations like dev, stage, and prod. A few changes can save you many minutes and avoid the human errors that we tend to make.</p>
<h4 id="heading-i-would-like-to-hear-your-opinion-and-here-about-how-you-deploy-apps-using-aws-if-you-like-this-blog-share-it-with-your-friends-and-colleagues-thank-you-keep-working">I would like to hear your opinion and hear about how you deploy apps using AWS. ️😊 If you like this blog, share it with your friends and colleagues. Thank you... keep working…</h4>
]]></content:encoded></item><item><title><![CDATA[What is AWS CLI ? How to use AWS CLI ?]]></title><description><![CDATA[AWS CLI
AWS-9
Launch EC2 instance, create EBS and attach EBS volume to EC2 instance using aws CLI.
Introduction 🤓
AWS is a market leader and top innovator in the field of Cloud Computing. AWS has integrated building blocks that support any applicati...]]></description><link>https://blog.shubhcodes.tech/what-is-aws-cli-how-to-use-aws-cli-6f1bdedabd2b</link><guid isPermaLink="true">https://blog.shubhcodes.tech/what-is-aws-cli-how-to-use-aws-cli-6f1bdedabd2b</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Sat, 17 Oct 2020 12:42:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729428469/18382182-1c94-4c24-af88-69caa7a15ea4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AWS CLI</p>
<h4 id="heading-aws-9"><strong>AWS-9</strong></h4>
<h4 id="heading-launch-ec2-instance-create-ebs-and-attach-ebs-volume-to-ec2-instance-using-aws-cli">Launch EC2 instance, create EBS and attach EBS volume to EC2 instance using aws CLI.</h4>
<h3 id="heading-introduction">Introduction 🤓</h3>
<p>AWS is a market leader and top innovator in the field of Cloud Computing. AWS has integrated building blocks that support any application architecture, regardless of scale, load, or complexity.<br />But AWS is more than its beautiful, eye-catching web console. Let's discover Amazon's Command Line Interface: the AWS CLI.</p>
<h4 id="heading-what-is-aws-cli">What is AWS CLI? ❤️</h4>
<p><em>The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.</em></p>
<p>We can achieve more speed and customization through the AWS CLI than through the AWS Management Console. It improves the convenience and productivity of DevOps engineers and developers.</p>
<h4 id="heading-installing-aws-cli">Installing AWS CLI ‍💻</h4>
<p>There are two versions of AWS CLI.</p>
<ul>
<li><strong>Version 1.x</strong><br />The previous version of the AWS CLI, available for backward compatibility.</li>
<li><strong>Version 2.x</strong><br />The current version. It has more features than version 1.</li>
</ul>
<p>There are multiple ways to install the AWS CLI; you can check them here <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html">(Open)</a>. But I will show you my favorite way to download and install it.</p>
<p>#install awscli using pip tool<br /><strong>$ pip install awscli --upgrade --user</strong></p>
<p>#check installation<br /><strong>$ aws --version</strong></p>
<p>Note: If you don't have the pip tool, download get-pip.py from <a target="_blank" href="https://bootstrap.pypa.io/">https://bootstrap.pypa.io/</a> and run it with Python:<br />$ python get-pip.py<br />For more details <a target="_blank" href="https://phoenixnap.com/kb/install-pip-windows">(open)</a>.</p>
<h4 id="heading-configure-aws-cli">Configure AWS CLI 🚀</h4>
<p>1. Create an IAM user that has an EC2 creation policy.</p>
<p>For practice, I recommend attaching the PowerUserAccess managed policy.</p>
<p>For more details, you can refer to this: <a target="_blank" href="https://medium.com/@sumit.rasal301/creating-the-iam-in-the-aws-c363d86a1fde">How to create IAM user</a></p>
<p>2. You will get an ACCESS KEY and a SECRET KEY, which you can use to configure AWS on the CLI:</p>
<p><strong>$ aws configure --profile "name"</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729427041/934ba0e7-da2f-4281-8383-a7ba02c02427.png" alt /></p>
<h4 id="heading-how-to-use-aws-cli">How to use AWS CLI? 🧐</h4>
<p>Using the CLI you can achieve almost everything you can do in the AWS Management Console, but with less effort and fewer clicks.</p>
<p>Let's try a few things using the AWS CLI:</p>
<p>🔅 Create a key pair<br />🔅 Create a security group<br />🔅 Launch an instance using the above created key pair and security group.<br />🔅 Create an EBS volume of 1 GB.<br />🔅 The final step is to attach the above created EBS volume to the instance you created in the previous steps.</p>
<p>Mostly I use the help command instead of memorizing commands.<br />The AWS CLI documentation is informative and well described, with examples.</p>
<p><strong>$ aws help</strong></p>
<p><strong>Create Key Pair</strong></p>
<p>To create a key pair we use the ec2 service of AWS. By running<br /><em>$ aws ec2 help</em><br />you will see all the subcommands under ec2.</p>
<p>To create a key pair and save it in the proper format (.pem), use the command below.</p>
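<p>A minimal sketch of such a command (the key name <em>MyKeyPair</em> is a placeholder; adjust it to your needs):</p>
<pre><code># create a key pair and save the private key in .pem format
aws ec2 create-key-pair --key-name MyKeyPair --query "KeyMaterial" --output text > MyKeyPair.pem</code></pre>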
<p>Note: Windows users should use PowerShell for the above commands. To start PowerShell from a normal cmd prompt, enter the <em>powershell</em> command, or press <em>Win + R</em> and enter <em>powershell</em>.</p>
<p><strong>Create a Security Group</strong></p>
<p>You may want to create the security group in a specific VPC. For that, we need the VPC id. Let's find the VPC id using the AWS CLI.</p>
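<p>A command along these lines lists the VPCs (the <code>--query</code> filter is optional and just trims the output):</p>
<pre><code># list VPC ids, CIDR blocks, and tags
aws ec2 describe-vpcs --query "Vpcs[].{Id:VpcId,Cidr:CidrBlock,Tags:Tags}"</code></pre>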
<p>The above command will return the list of VPCs and the tags associated with each VPC.<br />Copy the id of the VPC you want to create the security group in and save it somewhere.<br />Now let's create the security group.</p>
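<p>A sketch of the creation command (the name, description, and VPC id here are placeholders):</p>
<pre><code># create a security group in the chosen VPC
aws ec2 create-security-group --group-name my-sg --description "My security group" --vpc-id vpc-xxxxxxxx</code></pre>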
<p>Don't forget to edit the command according to your needs. You can change the name, description, and VPC id.</p>
<p><strong>Create security group rules</strong></p>
<p>We want to add rules to the security group we created above.<br />If you already copied the security group id, skip the next command.<br />Let's find the security group name and id.</p>
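<p>One way to list them (a sketch; the query expression is optional):</p>
<pre><code># list security group names and ids
aws ec2 describe-security-groups --query "SecurityGroups[].{Name:GroupName,Id:GroupId}"</code></pre>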
<p>The above command will give you a list of security group names and ids.<br />Copy the id of the group you want to add a new rule to and paste it into the command below.</p>
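<p>A sketch of the rule command, assuming SSH from anywhere (tighten the CIDR for real use; the group id is a placeholder):</p>
<pre><code># allow inbound SSH (port 22) from any IPv4 address
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0</code></pre>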
<p>The above command will create a new rule for SSH; you can always customize it to your needs.<br />You can check more examples for multiple rules <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/reference/ec2/authorize-security-group-ingress.html#examples">here</a>.</p>
<p>Check whether the rule was added successfully using the describe subcommand.</p>
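<p>For example (the group id is a placeholder):</p>
<pre><code># show the group, including its inbound rules
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx</code></pre>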
<p><strong>Create a new instance</strong></p>
<p>You may want to launch the instance in a specific availability zone. Let's see which AZs are available in the region you specified while configuring.</p>
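<p>A sketch:</p>
<pre><code># list availability zones in the configured region
aws ec2 describe-availability-zones --query "AvailabilityZones[].ZoneName"</code></pre>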
<p>We want to attach one more EBS volume to our instance, and for that both must be in the same availability zone, so fix your availability zone accordingly.<br />The AWS CLI has well-described documentation for ec2, so let's take advantage of it; it has examples and all the necessary information, and you may need something different from what I used here. Run<br /><em>$ aws ec2 help</em></p>
<p>To launch an instance we need the AMI id of the image we want to launch:</p>
<p>$ <strong>aws ec2 describe-images</strong></p>
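<p>With an AMI id in hand, the launch command might look like this (the AMI id, key name, and group id below are placeholders):</p>
<pre><code># launch one t2.micro instance in ap-south-1a with our key pair and security group
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --count 1 --key-name MyKeyPair --security-group-ids sg-xxxxxxxx --placement AvailabilityZone=ap-south-1a</code></pre>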
<p>I used the key pair and security group we created above, and chose the ap-south-1a availability zone and an Amazon Linux AMI to launch this instance.<br />You can customize this command much further; check the AWS documentation <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html">(open)</a>.<br />Copy the instance id and save it somewhere; we will need it later.</p>
<p><strong>Create EBS volume</strong></p>
<p>Our goal is to create a new volume and attach it to the instance we just created, so we need to create it in the same availability zone.</p>
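<p>A sketch of the volume command (1 GiB, in the same zone as the instance):</p>
<pre><code># create a 1 GiB EBS volume in ap-south-1a
aws ec2 create-volume --availability-zone ap-south-1a --size 1</code></pre>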
<p>Update the above command as per your needs (size is in GiB).<br />Copy the volume id and save it somewhere.</p>
<p><strong>Attach EBS volume to instance.</strong></p>
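<p>A sketch (the instance id, volume id, and device name are placeholders):</p>
<pre><code># attach the volume to the instance as /dev/sdf
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf</code></pre>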
<p>Update the instance id and volume id in the above command with the values we copied and saved earlier. Remember?</p>
<p>That's all. You may feel we used a lot of commands, but you can combine them all into one script that you modify and run whenever necessary.<br />If you already use AWS through the web UI, I recommend giving the AWS CLI a try. You will notice the time you save by using the CLI.</p>
<h4 id="heading-wrapping-up">Wrapping Up 🥳</h4>
<p>So far we have created a new key pair and a security group, added an SSH rule to it, provisioned a new instance, created a new EBS volume, and attached it to the instance.<br />But you can do much more than this.<br />Did you find this article helpful, and will you use the AWS CLI? Tell me in the comments section below.<br />Thank you.</p>
<p><a target="_blank" href="https://medium.com/@developer.shubham.rasal/deploying-angular-app-to-aws-s3-with-cloudfront-using-aws-cli-ace33350a950"><strong>Deploying Angular App to AWS S3 with CloudFront using AWS CLI</strong></a><br /><em>Deploy an Angular website on AWS using the AWS CLI.</em></p>
]]></content:encoded></item><item><title><![CDATA[Building Scalable Applications and Microservices on AWS.]]></title><description><![CDATA[AWS-8
How is AWS helping to build microservices architecture to solve the new age problem?
Introduction 🤓
In today’s age, the world is moving to faster development and scalable approach.🌩️
What is microservices?
Microservices architectures are not ...]]></description><link>https://blog.shubhcodes.tech/building-scalable-applications-and-microservices-on-aws-b57672d24378</link><guid isPermaLink="true">https://blog.shubhcodes.tech/building-scalable-applications-and-microservices-on-aws-b57672d24378</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Mon, 21 Sep 2020 17:22:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729447778/61955051-e842-4584-9a53-2b15db5a213f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-aws-8">AWS-8</h4>
<h4 id="heading-how-is-aws-helping-to-build-microservices-architecture-to-solve-the-new-age-problem">How is AWS helping to build microservices architecture to solve the new age problem?</h4>
<h3 id="heading-introduction"><strong>Introduction</strong> 🤓</h3>
<p>In today’s age, the world is moving to faster development and scalable approach.<a target="_blank" href="https://emojipedia.org/cloud-with-lightning/">🌩️</a></p>
<h4 id="heading-what-is-microservices">What is microservices?</h4>
<p>Microservices architectures are not a completely new approach to software engineering, but rather a combination of various successful and proven concepts such as:<br />• Agile software development<br />• Service-oriented architectures<br />• API-first design<br />• Continuous Integration/Continuous Delivery (CI/CD)</p>
<p>Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features.</p>
<h3 id="heading-monolithic-vs-microservices-architecture">Monolithic vs. Microservices Architecture</h3>
<p>It is important to understand why we use a microservices architecture.</p>
<h4 id="heading-monolithic-architecture"><strong>Monolithic architecture.</strong>🖥</h4>
<p>With a monolithic architecture, all processes are tightly coupled and run as a single service. Suppose one process of your application suddenly experiences a spike in demand, making the whole application slow to respond to requests. To solve this issue you have to scale the entire architecture. Adding and improving features becomes more complex as the code base grows, which limits experimentation and makes it difficult to implement new ideas. Monolithic architectures also add risk to application availability, because many dependent and tightly coupled processes increase the impact of a single process failure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729435115/20c79bd4-3bb5-4456-8638-cf3c40700efe.png" alt /></p>
<h4 id="heading-microservices-architecture">Microservices Architecture 🖥🖥🖥</h4>
<p>With a microservices architecture, an application is built as independent components that run each application process as a service. These services communicate via well-defined interfaces using lightweight APIs. Services are built around business capabilities, and each service performs a single function. Because they run independently, each service can be updated, deployed, and scaled to meet demand for specific functions of an application. If one service suddenly spikes in demand, you can simply scale only that service.</p>
<p>There is much more to learn about the characteristics and benefits of microservices. I recommend this article:</p>
<p><a target="_blank" href="https://aws.amazon.com/microservices/"><strong><em>https://aws.amazon.com/microservices/</em></strong></a></p>
<h3 id="heading-microservices-implementations">Microservices Implementations</h3>
<h3 id="heading-the-most-complete-platform-for-microservices">The Most Complete Platform for Microservices ❤️</h3>
<p>AWS has integrated building blocks that support any application architecture, regardless of scale, load, or complexity</p>
<h3 id="heading-containers-on-aws">Containers on AWS</h3>
<h4 id="heading-amazon-elastic-container-service"><strong>Amazon Elastic Container Service</strong></h4>
<p>The most secure, reliable, and scalable way to run containers. <strong>AWS is the #1 place for you to run containers and 80% of all containers in the cloud run on AWS.</strong> Customers such as Samsung, Expedia, KPMG, GoDaddy, and Snap choose to run their containers on AWS because of its security, reliability, and scalability.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729437368/64451c58-f6e1-481d-ab43-9c47c02c8a59.png" alt /></p>
<p>ECS</p>
<p>Using the ECS service of AWS, you can run containerized applications or build microservices. Containers provide process isolation that makes it easy to break an application apart and run it as independent components called microservices.</p>
<h3 id="heading-serverless">Serverless</h3>
<h4 id="heading-aws-lambda"><strong>AWS Lambda</strong></h4>
<p>AWS Lambda lets you run code without provisioning or managing servers. Just upload your code and Lambda manages everything that is required to run and scale your code with high availability.</p>
<p>Check how you can create a simple microservice using Lambda and API Gateway <a target="_blank" href="https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway-blueprint.html"><strong><em>here</em></strong></a><strong><em>.</em></strong></p>
<h3 id="heading-service-mesh">Service Mesh</h3>
<p><strong>AWS App Mesh</strong></p>
<p>AWS App Mesh makes it easy to monitor and control microservices running on AWS. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility, and helping to ensure high-availability for your applications.</p>
<h3 id="heading-container-orchestration">Container Orchestration</h3>
<p><strong>EKS</strong></p>
<p><a target="_blank" href="https://aws.amazon.com/eks/">Amazon EKS</a> is a managed service that makes it easy for you to run Kubernetes on AWS without needing to operate your own Kubernetes cluster.</p>
<p><strong><em>Read more:</em></strong> <a target="_blank" href="https://aws.amazon.com/blogs/containers/getting-started-with-app-mesh-and-eks/"><strong><em>Microservices using AWS App Mesh and Amazon EKS</em></strong></a></p>
<p>There are many more services useful to implement a microservices architecture.</p>
<p>AWS is the most complete platform for microservices. AWS offers many services for compute, storage and databases, networking, messaging, logging and monitoring, and DevOps.</p>
<p>Now let’s get into some real and success stories to know how they implement microservices on AWS.</p>
<h3 id="heading-aws-success-stories">AWS Success Stories</h3>
<h3 id="heading-coursera">Coursera</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729438604/34a7f61e-e1ad-4a39-9a34-5eaa18259e5d.jpeg" alt /></p>
<p><a target="_blank" href="https://www.coursera.org/"><strong>Coursera</strong></a> is an educational technology company with a mission to provide universal access to the world’s best curricula.</p>
<h4 id="heading-what-challenges-do-they-face"><strong>What challenges do they face?</strong></h4>
<ul>
<li>Coursera had a large monolithic application for processing batch jobs that were difficult to run, deploy, and scale.</li>
<li>A new thread was created whenever a new job needed to be completed, and each job took up different amounts of memory and CPU, continually creating inefficiencies.</li>
<li>A lack of resource isolation allowed memory-limit errors to bring down the entire application.</li>
<li>The infrastructure engineering team attempted to move to a microservices architecture using Docker containers, but they ran into problems as they tried to use Apache Mesos to manage the cluster and containers — Mesos was complicated to set up and Coursera didn’t have the expertise or time required to manage a Mesos cluster.</li>
</ul>
<h4 id="heading-how-they-used-aws"><strong>How did they use AWS?</strong></h4>
<ul>
<li>They used Docker containers on Amazon EC2 Container Service (ECS), which enabled Coursera to easily move to a microservices-based architecture.</li>
<li>Each job is created as a container and Amazon ECS schedules the container across the Amazon EC2 instance cluster.</li>
<li>Amazon ECS handles all the cluster management and container orchestration, and containers provide the necessary resource isolation.</li>
</ul>
<h4 id="heading-what-are-the-benefits">What are the benefits?</h4>
<ul>
<li>Launched a prototype in less than two months</li>
<li>Reduced time to deploy software changes from hours to minutes</li>
<li>Reduced engineering time spent installing software and maintaining clusters</li>
</ul>
<p>Read more about the Coursera case study<strong>(</strong><a target="_blank" href="https://aws.amazon.com/solutions/case-studies/coursera-ecs/"><strong>here)</strong></a><strong>.</strong></p>
<h3 id="heading-brainly">Brainly</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729440854/60b4d34f-1cfd-4e8e-8091-9c823f862c64.png" alt /></p>
<p>Using AWS, Brainly eliminates outages and reduces virtual server costs by 60 percent. Brainly is a peer-to-peer learning community and educational technology company. The Brainly platform runs on Amazon EC2 Reserved Instances and Spot Instances, with <strong>Amazon ElastiCache</strong> providing caching services, while <strong>Amazon Elastic Kubernetes Service orchestrates microservices containers.</strong></p>
<h4 id="heading-benefits-of-aws">Benefits of AWS</h4>
<ul>
<li>Eliminates outages</li>
<li>Reduces virtual server costs by 60 percent</li>
<li>Avoids the cost of building cloud expertise</li>
<li>Increases speed of orchestration services 2–3 times</li>
<li>Keeps IT team down to four people</li>
<li>Enables continuous innovation</li>
</ul>
<h3 id="heading-bridestory">Bridestory</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729442420/7c5e86ee-3cb4-4b08-95ad-d8323554aef6.jpeg" alt /></p>
<p>Bridestory moved from a monolithic architecture to microservices.</p>
<h3 id="heading-same-architecture-different-brand">Same Architecture, Different Brand</h3>
<p>The new microservices architecture has transformed operations. Hanafi reports, “Since the end of 2018, we have been hitting our metrics. Previously, we were averaging three weeks to launch one big feature, whereas now our developers are empowered to conduct small releases daily, with less than a 1 percent failure rate.” The company has also launched a new app called Parentstory using its containerized infrastructure. “Isolation is the key to our new multi-tenancy microservices model,” he explains. “We are constantly exploring with AWS architects how we can use the same architecture with the same source code, but with a totally different brand, different customers, and different database.” With this “recycled” approach, the team was able to launch the Parentstory app nearly three times faster than Bridestory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729444583/4daee731-528c-44c3-ab50-2cfecb0a33a6.jpeg" alt /></p>
<p>Today, a new generation of companies is navigating a journey to AWS, and those companies have a very different set of challenges to overcome. They are not the same challenges faced by the first wave of born-in-the-cloud AWS users, but they are surmountable and are being continually addressed so that companies can reap the benefits of the cloud.<br />I hope you find this article informative and useful.<br />I would like to connect and learn your thoughts on the New Age of Cloud Computing and many more…<br />Connect with me on LinkedIn.</p>
<p><a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/">https://www.linkedin.com/in/shubham-rasal/</a></p>
<p>If you find this informative and helpful... Don’t forget to👏👏 .</p>
]]></content:encoded></item><item><title><![CDATA[What is Big Data? How big companies manage Big Data?]]></title><description><![CDATA[1-Big Data
How big MNC’s like Google, Facebook, Instagram, etc stores, manages, and manipulate Thousands of Terabytes of data with High Speed and High Efficiency
What is Big Data?🤔
We are going to discuss something interesting today and that is “Big...]]></description><link>https://blog.shubhcodes.tech/what-is-big-data-how-big-companies-manage-big-data-21124d639d50</link><guid isPermaLink="true">https://blog.shubhcodes.tech/what-is-big-data-how-big-companies-manage-big-data-21124d639d50</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Thu, 17 Sep 2020 14:39:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728994297/0f0a892a-6675-4b0b-9cd1-8a6aa397198f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4 id="heading-1-big-data">1-Big Data</h4>
<h4 id="heading-how-big-mncs-like-google-facebook-instagram-etc-stores-manages-and-manipulate-thousands-of-terabytes-of-data-with-high-speed-and-high-efficiency">How big MNC’s like Google, Facebook, Instagram, etc stores, manages, and manipulate Thousands of Terabytes of data with High Speed and High Efficiency</h4>
<h3 id="heading-what-is-big-datahttpsemojipediaorgthinking-face">What is Big Data?<a target="_blank" href="https://emojipedia.org/thinking-face/">🤔</a></h3>
<p>We are going to discuss something interesting today: “Big Data”.<br />Before moving on, let's understand what big data is.<br />We are constantly generating data; even our kitchen appliances are now connected to the internet, sharing and storing mountains of data.</p>
<p>The amount of information being collected around the world is far too big to process with traditional tools. That's where our topic comes into play: Big Data.</p>
<p>In simple words, big data is a huge amount of raw data that is too complex for traditional software to process. Businesses like Facebook, Walmart, and many more need to study that data in order to grow.</p>
<h4 id="heading-lets-take-some-examples-so-we-can-get-more-idea-of-big-datahttpsemojipediaorgexploding-head">Let's take some examples so we can get more idea of big data.<a target="_blank" href="https://emojipedia.org/exploding-head/">🤯</a></h4>
<ul>
<li>People are generating 2.5 quintillion bytes of data each day.</li>
<li>Nearly 90% of all data has been created in the last two years</li>
<li>Walmart handles more than 1 million customer transactions every hour.</li>
<li>Facebook generates 500 Terabytes of data each day.</li>
<li>Google currently processes over 20 petabytes of data per day.</li>
</ul>
<p>Daily smartphone and computer usage means that the volume of data is expanding rapidly. The average user shares dozens of media links daily, and all of that has to be stored somewhere.</p>
<p>For a Boeing 737, the plane used by many carriers on this route, the total amount of data generated would be a massive 240 terabytes.<br />You can read more about it <a target="_blank" href="https://gigaom.com/2010/09/13/sensor-networks-top-social-networks-for-big-data-2/">(<em>click here</em>)</a>.</p>
<p>Over 2.5 quintillion bytes of data are created every single day, and it’s only going to grow from there. By 2020, it’s estimated that 1.7MB of data will be created every second for every person on earth. <a target="_blank" href="https://www.socialmediatoday.com/news/how-much-data-is-generated-every-minute-infographic-1/525692/#:~:text=This%20is%20the%20sixth%20edition,for%20every%20person%20on%20earth.%22"><em>(Read more)</em></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685728994297/0f0a892a-6675-4b0b-9cd1-8a6aa397198f.png" alt /></p>
<p><a target="_blank" href="https://www.socialmediatoday.com/">https://www.socialmediatoday.com/</a></p>
<p><strong>How do we store this much data?</strong> <a target="_blank" href="https://emojipedia.org/face-with-monocle/">🧐</a><br />Software is available to solve this problem, for example:</p>
<ul>
<li>Hadoop</li>
<li>HBase</li>
<li>Hive</li>
</ul>
<p>We will discuss Hadoop a little, without getting too technical.<br />To solve the problem of big data, a new concept was introduced, known as <strong>the distributed storage system</strong>, and the best-known product of this concept is <strong>Hadoop</strong>.<br /><strong>What is distributed storage?</strong> A <strong>distributed storage</strong> system is an infrastructure that can split data across multiple physical servers, and often across more than one data center. It typically takes the form of a cluster of <strong>storage</strong> units, with a mechanism for data synchronization and coordination between cluster nodes.<br />Let's take an example: suppose we have one 50 GB file to store. Writing it to a single hard disk takes a long time, but if we split the file across 50 different machines, it takes far less time.</p>
<p>Big Data demands a cost-effective, innovative solution to store and analyze it. Hadoop is the answer to all Big Data requirements. So, let’s explore why Hadoop is so important.<br /><strong>“Hadoop Market</strong> is expected to reach <strong>$99.31B</strong> by <strong>2022</strong> at a <strong>CAGR</strong> of <strong>42.1%”.</strong></p>
<h4 id="heading-why-hadoop-httpsemojipediaorgslightly-smiling-face">Why Hadoop? <a target="_blank" href="https://emojipedia.org/slightly-smiling-face/">🙂</a></h4>
<ul>
<li>Hadoop provides a cost-effective storage solution for business.</li>
<li>It facilitates businesses to easily access new data sources and tap into different types of data to produce value from that data.</li>
<li>It is a highly scalable storage platform</li>
<li>Hadoop is fault tolerant. When data is sent to an individual node, it is also replicated to other nodes in the cluster, which means that in the event of a failure, there is another copy available for use.</li>
<li>Hadoop is more than just a faster, cheaper database and analytics tool. It is designed as a scale-out architecture that can affordably store all of a company’s data for later use.</li>
</ul>
<p><em>Hope you found this post informative.<br />Comment your thoughts on big data below.<br />Share it with your friends, and let's connect on</em> <a target="_blank" href="https://www.linkedin.com/in/shubham-rasal/"><em>LinkedIn</em></a> <em>to get more updates.<br />Please do not hesitate to keep 👏👏👏👏👏 it.</em></p>
<p>Thank you and stay motivated…</p>
]]></content:encoded></item><item><title><![CDATA[Deploy the WordPress application on Kubernetes and AWS RDS using terraform]]></title><description><![CDATA[Deploy the WordPress application on Kubernetes and AWS RDS using terraform
Infrastructure as code using Terraform, which automatically deploys the WordPress application using AWS RDS service.
What do we want to do?
Deploy the WordPress application on...]]></description><link>https://blog.shubhcodes.tech/deploy-the-wordpress-application-on-kubernetes-and-aws-rds-using-terraform-e8501e1e9651</link><guid isPermaLink="true">https://blog.shubhcodes.tech/deploy-the-wordpress-application-on-kubernetes-and-aws-rds-using-terraform-e8501e1e9651</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Tue, 01 Sep 2020 17:36:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729031465/8280aaca-85e2-406c-a6a1-208f35f550f5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Deploy the WordPress application on Kubernetes and AWS RDS using terraform</p>
<h4 id="heading-infrastructure-as-code-using-terraform-which-automatically-deploys-the-wordpress-application-using-aws-rds-service">Infrastructure as code using Terraform, which automatically deploys the WordPress application using AWS RDS service.</h4>
<h3 id="heading-what-do-we-want-to-do">What do we want to do?</h3>
<p>Deploy the WordPress application on Kubernetes and AWS using Terraform, covering the following steps:</p>
<p>1. Write an Infrastructure as code using Terraform, which automatically deploy the WordPress application</p>
<p>2. On AWS, use RDS service for the relational database for WordPress application.</p>
<p>3. Deploy WordPress as a container on top of Minikube.</p>
<p>4. The WordPress application should be accessible from the public world if deployed on AWS or through workstation if deployed on Minikube.</p>
<p>My main intention is to show how we can use the RDS service of AWS in our WordPress application. Here I am using WordPress application on Kubernetes over Minikube.</p>
<h3 id="heading-pre-requisite">Pre-requisite :</h3>
<ol>
<li>AWS account.</li>
<li><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html">Download AWS CLI</a> and <a target="_blank" href="https://medium.com/@developer.shubham.rasal/create-aws-ec2-instance-using-terraform-3a3a2d273048">Configure it</a>.</li>
<li>Download terraform. <a target="_blank" href="https://www.terraform.io/downloads.html"><em>Download</em></a></li>
<li>Download Minikube.</li>
</ol>
<h3 id="heading-lets-start">Let’s Start…</h3>
<h3 id="heading-1-configure-the-providers">1. Configure the Providers</h3>
<p>We want to use two services, AWS and Kubernetes. For that, we need the AWS and Kubernetes providers in Terraform.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729002632/c3bcd6f2-5d18-4dd4-b3d6-40e1ec934c49.png" alt /></p>
<p>Providers</p>
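<p>The provider configuration in the screenshot above can be sketched roughly like this (the region, profile, and kubeconfig context are assumptions; adjust them to your setup):</p>
<pre><code># AWS provider for the RDS resources
provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}

# Kubernetes provider pointing at the local Minikube cluster
provider "kubernetes" {
  config_context = "minikube"
}</code></pre>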
<h3 id="heading-2-start-minikube">2. Start Minikube</h3>
<p>To start Minikube, open VirtualBox and start it manually, or run the command</p>
<p>$ minikube start</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729004768/9d8d0045-3645-49cc-bdf3-cedc7597cec4.png" alt /></p>
<p>minikube start</p>
<h3 id="heading-3-create-rds-service">3. Create RDS Service</h3>
<p>We want to use RDS to store the WordPress database, so we will use MySQL Community Edition.</p>
<p>First, we will create a security group that allows traffic only on the MySQL port.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729007132/7a198139-04c6-438f-80e7-c8de4e9e64c9.png" alt /></p>
<p>Security Group</p>
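<p>The idea from the screenshot, in rough HCL (the resource name and the open CIDR are placeholders; restrict ingress for real use):</p>
<pre><code># security group allowing only MySQL traffic (port 3306)
resource "aws_security_group" "rds_sg" {
  name        = "wordpress-rds-sg"
  description = "Allow MySQL"

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}</code></pre>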
<p>Now we will create an RDS instance for MySQL.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729008723/0e66b4d7-afbf-4aad-8352-6c2880b1e88d.png" alt /></p>
<p>RDS</p>
<p>Here we have taken 20 GB of storage and the db.t2.micro instance class for the RDS instance, and added the username and password attributes. Put in your own specifications, or use a variable to take interactive input.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729011313/27f9027d-fcac-4f81-bb21-155cd517a32f.jpeg" alt /></p>
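<p>A rough sketch of such an instance (the identifiers and credentials here are placeholders; never commit real passwords, and point <code>vpc_security_group_ids</code> at your own security group resource):</p>
<pre><code># 20 GB MySQL instance on the smallest instance class
resource "aws_db_instance" "wordpress" {
  allocated_storage      = 20
  engine                 = "mysql"
  engine_version         = "5.7"
  instance_class         = "db.t2.micro"
  name                   = "wordpressdb"
  username               = "admin"        # placeholder
  password               = "changeme123"  # placeholder: use a variable instead
  publicly_accessible    = true
  skip_final_snapshot    = true
  vpc_security_group_ids = [aws_security_group.rds_sg.id]  # your MySQL security group
}</code></pre>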
<h3 id="heading-3-launch-wordpress-on-k8s">3. Launch WordPress on K8s</h3>
<p>Here, for this demo, I am using a locally installed Minikube, but you can use a managed service like EKS on AWS.</p>
<h4 id="heading-create-deployment-for-wordpress">Create deployment for WordPress.</h4>
<p><img src="https://cdn-images-1.medium.com/max/800/1*z_a90IkzyoR2pWXQFmV4OQ.png" alt /></p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*sc4SUc8fFxreyHpU42Cszg.png" alt /></p>
<p>Using a deployment we achieve high availability. I selected the WordPress 4.8-apache container image. The main part here is that we set WordPress environment variables such as WORDPRESS_DB_HOST, WORDPRESS_DB_USER, WORDPRESS_DB_PASSWORD, and WORDPRESS_DB_NAME, and we get their values from the RDS instance. But to access WordPress, we still need to expose it.</p>
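<p>The deployment from the screenshots can be sketched like this (the labels, replica count, and credential values are assumptions; the DB host comes from your RDS resource):</p>
<pre><code># WordPress deployment wired to the RDS endpoint
resource "kubernetes_deployment" "wordpress" {
  metadata {
    name = "wordpress"
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "wordpress" }
    }

    template {
      metadata {
        labels = { app = "wordpress" }
      }

      spec {
        container {
          name  = "wordpress"
          image = "wordpress:4.8-apache"

          env {
            name  = "WORDPRESS_DB_HOST"
            value = aws_db_instance.wordpress.address  # endpoint of your RDS resource
          }
          env {
            name  = "WORDPRESS_DB_USER"
            value = "admin"        # placeholder: match your RDS username
          }
          env {
            name  = "WORDPRESS_DB_PASSWORD"
            value = "changeme123"  # placeholder: match your RDS password
          }
          env {
            name  = "WORDPRESS_DB_NAME"
            value = "wordpressdb"  # placeholder: match your RDS database name
          }

          port {
            container_port = 80
          }
        }
      }
    }
  }
}</code></pre>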
<h4 id="heading-expose-wordpress">Expose WordPress.</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729017248/22507ca3-0b26-4584-adc0-dcf625bc8ed0.png" alt /></p>
<p>expose</p>
<p>The kubernetes_service resource above selects our WordPress pods and exposes them through a NodePort service.</p>
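<p>As a sketch of what the screenshot shows (names are illustrative; the selector must match the pod labels used in the Deployment):</p>

```hcl
# NodePort service exposing the WordPress pods on port 80
resource "kubernetes_service" "wordpress" {
  metadata { name = "wordpress" }

  spec {
    selector = { app = "wordpress" } # matches the Deployment's pod labels
    type     = "NodePort"

    port {
      port        = 80
      target_port = 80
    }
  }
}
```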
<h3 id="heading-3-check-output">3. Check output</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729019988/94a190fc-dd28-4639-80e8-071296b3232f.png" alt /></p>
<p>output</p>
<p>The above code gives the URL and port number where WordPress is exposed. I have put the first half of the URL statically here; you can check it using the command below.</p>
<p><code>$ minikube ip</code></p>
<p>Select a language and add the admin details. After that, you will see the login page as below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729023178/8b55be3e-b448-4f00-90fe-87030e9fa4bd.jpeg" alt /></p>
<p>login</p>
<p>Add Post</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*6dXWBfcOrgQDYFs_1ICTow.jpeg" alt /></p>
<p>Publish and Check.</p>
<p>You will see the output like above.</p>
<h3 id="heading-check-the-solution">Check the solution</h3>
<p>Now delete the pod manually, and it will be launched again…</p>
<p>Open WordPress again and check whether our previous post still exists.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*iQh1F0ZWcpsMemqvdsjqBQ.jpeg" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729029527/3e095aec-2679-4cfd-9b1e-a8f6e05cb557.jpeg" alt /></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>We have successfully created Infrastructure as Code for deploying the WordPress application on Kubernetes and AWS RDS using Terraform.</p>
<blockquote>
<p><strong><em>I want to express my gratitude to</em></strong> <a target="_blank" href="https://www.linkedin.com/in/vimaldaga/"><strong><em>Mr. Vimal Daga</em></strong></a> <strong><em>sir for everything he has helped me achieve, and thank you for cheering and mentoring us…..</em></strong></p>
</blockquote>
<p>If you like my efforts then don’t forget to give feedback in the comment box. Feel free to ask questions and doubts in the comments.</p>
<p>#LEARN — SHARE — GROW</p>
]]></content:encoded></item><item><title><![CDATA[AWS Networking using Terraform.]]></title><description><![CDATA[Creating VPC, subnets, Internet Gateway, NAT Gateway, Route Table, Bastion host, Servers.
2
Creating VPC, subnets, Internet Gateway, NAT Gateway, Route Table, Bastion host, Servers.
What do we want to do?
Statement: We have to create a web portal for...]]></description><link>https://blog.shubhcodes.tech/aws-networking-using-terraform-cbbf28dcb124</link><guid isPermaLink="true">https://blog.shubhcodes.tech/aws-networking-using-terraform-cbbf28dcb124</guid><dc:creator><![CDATA[Shubham Rasal]]></dc:creator><pubDate>Sun, 26 Jul 2020 11:14:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729455897/96c29e19-6529-441b-8d86-0eadb0870bf7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729455897/96c29e19-6529-441b-8d86-0eadb0870bf7.png" alt /></p>
<p>Creating VPC, subnets, Internet Gateway, NAT Gateway, Route Table, Bastion host, Servers.</p>
<h4 id="heading-2">2</h4>
<h4 id="heading-creating-vpc-subnets-internet-gateway-nat-gateway-route-table-bastion-host-servers">Creating VPC, subnets, Internet Gateway, NAT Gateway, Route Table, Bastion host, Servers.</h4>
<h3 id="heading-what-do-we-want-to-do">What do we want to do?</h3>
<p>Statement: We have to create a web portal for our company with as much security as possible. So, we use the WordPress software with a dedicated database server.</p>
<p>The database should not be accessible from the outside world for security purposes. We only need WordPress to be public for clients. Here are the steps for proper understanding:</p>
<h3 id="heading-steps">Steps:</h3>
<p>1. Write an Infrastructure as code using Terraform, which automatically creates a VPC.</p>
<p>2. In that VPC we have to create 2 subnets:</p>
<p>a) public subnet [ Accessible for Public World! ]</p>
<p>b) private subnet [ Restricted for Public World! ]</p>
<p>3. Create a public-facing internet gateway to connect our VPC/network to the internet, and attach this gateway to our VPC.</p>
<p>4. Create a routing table for the internet gateway so that instances can connect to the outside world; update it and associate it with the public subnet.</p>
<p>5. Create a NAT gateway in the public subnet to connect our VPC/network to the internet, and attach this gateway to our VPC.</p>
<p>6. Update the routing table of the private subnet so that it uses the NAT gateway created in the public subnet to access the internet.</p>
<p>7. Launch an EC2 instance that already has WordPress set up, with a security group allowing port 80 so that our clients can connect to our WordPress site. Also attach the key to the instance so we can log in to it later.</p>
<p>8. Launch an EC2 instance that already has MySQL set up, in the private subnet, with a security group allowing port 3306 so that our WordPress VM can connect to it. Also attach the same key.</p>
<p>Note: The WordPress instance has to be part of the public subnet so that our clients can connect to our site. The MySQL instance has to be part of the private subnet so that the outside world can’t connect to it.</p>
<p>Don’t forget to enable the auto-assign public IP and auto DNS name assignment options.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729457702/1c8d2429-82b4-4eec-9f92-fe746fe7704f.png" alt /></p>
<p>I have used code snippet images for better visualization. You can find the code files in this <a target="_blank" href="https://github.com/ShubhamRasal/portfolio">GitHub repository</a>.</p>
<h3 id="heading-pre-requisite">Pre-requisite :</h3>
<ol>
<li>AWS account.</li>
<li><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html">Download AWS CLI</a> and <a target="_blank" href="https://medium.com/@developer.shubham.rasal/create-aws-ec2-instance-using-terraform-3a3a2d273048">Configure it</a>.</li>
<li>Download terraform. <a target="_blank" href="https://www.terraform.io/downloads.html"><em>Download</em></a></li>
</ol>
<h3 id="heading-lets-start">Let’s Start…</h3>
<h3 id="heading-1-configure-the-provider">1. Configure the Provider</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729459932/a0d47c3d-58cd-4ae2-8b49-6279e23759ef.png" alt /></p>
<h3 id="heading-2-create-vpc">2. Create VPC</h3>
<p>The code below creates a VPC with the given CIDR block; the tenancy is default, and we enable DNS hostnames because we want DNS hostname URLs. The VPC will be created in the region mentioned in the provider resource above.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729461372/2ba2e808-118e-4c88-9a7c-3c97d8a28d90.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729462881/51a4d236-6fa7-468b-b850-9cb259e49af6.png" alt /></p>
<h3 id="heading-2-create-two-subnets-in-vpc">2. Create two subnets in VPC</h3>
<p>Here we are just creating two subnets and we named public and private. A VPC spans all of the Availability Zones in the Region. After creating a VPC, we want to add <strong>subnets</strong> in two different Availability Zone.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729465153/2a8ac663-938b-48b6-baf2-6b0a35b79629.png" alt /></p>
<p>The above code creates two subnets named ‘<em>public_subnet</em>’ and ‘<em>private_subnet</em>’ in <strong>‘<em>ap-south-1a’</em></strong> and <strong>‘<em>ap-south-1b’</em></strong> respectively.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729467397/5eb7c48c-e289-48b1-a703-9a3c6e6d14a3.png" alt /></p>
<h3 id="heading-3-create-internet-gateway">3. Create Internet Gateway</h3>
<p>An <strong>internet gateway</strong> is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the <strong>internet</strong>. <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html">Read More</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729470041/815d0a17-73f3-4e59-9500-ed7e658a2c92.png" alt /></p>
<p>We want to connect our VPC to the internet, which is why we need to create an Internet Gateway and attach it to the VPC.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729472497/252d165b-71db-4a09-9be8-367ffbce7ada.png" alt /></p>
<h3 id="heading-4-create-a-route-table">4. Create a Route Table</h3>
<p>A <strong>route table</strong> contains a set of rules, called <strong>routes</strong>, that are used to determine where network traffic from your subnet or gateway is directed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729474693/a4daabfb-b5f1-450e-abc6-81ef1496ca9b.png" alt /></p>
<p>To create a route table we use the <em>aws_route_table</em> resource. We want to access the internet, so we add a route with <code>cidr_block</code> set to <code>0.0.0.0/0</code> (the quad-zero route), which represents all IPv4 addresses. The target is the internet gateway that's attached to your VPC.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729477389/24e20f91-8289-4b04-a3d8-f1cfe41dfc63.png" alt /></p>
<p>To read more about the Route table you can refer to this. <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html">ROUTE_TABLE</a></p>
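<p>The route table from the screenshot can be sketched as follows (names are illustrative and assume the internet gateway resource was called <code>igw</code>):</p>

```hcl
# Route table sending all IPv4 traffic (0.0.0.0/0) to the internet gateway
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "0.0.0.0/0" # quad-zero route: all IPv4 addresses
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = { Name = "public-rt" }
}
```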
<h3 id="heading-5-associate-route-table-with-public-subnet">5. Associate Route table with Public Subnet</h3>
<p>Now we want to use the internet only for public subnet so associate route table with that subnet only. To do this we will use <em>aws_route_table_association</em> resource in terraform which takes subnet id and route table id.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729479945/fda72745-2396-4c5e-a6df-ddc661fbc585.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729481709/62335cc1-f32d-41d2-a40e-f056d0dd4a1f.png" alt /></p>
<p>Now we have created a VPC, created two subnets in different Availability Zones in the region, and given it an internet connection by creating an Internet Gateway. Now let’s launch the WordPress and MySQL instances.</p>
<h3 id="heading-6-create-nat-gateway">6. Create NAT Gateway</h3>
<p>A Network Address Translation (NAT) gateway is a device that helps enabling EC2 instances in a private subnet to connect to the Internet and prevent the Internet from start off a connection with those instances.</p>
<h4 id="heading-1create-eip-for-nat-gateway">1.Create EIP for NAT Gateway</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729484266/b8c4f3b7-9026-4198-b90d-062dd8b193ac.png" alt /></p>
<h4 id="heading-2-nat-gateway">2. NAT Gateway</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729486856/b16fa1f5-2b2d-4cfc-b76b-bc776dc3f913.png" alt /></p>
<p>Create the NAT gateway using Terraform's <em>aws_nat_gateway</em> resource. The NAT gateway should be in the public subnet, and it requires an EIP so that it can rapidly remap the address in case of failure.</p>
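<p>A sketch of the EIP and NAT gateway from the screenshots (names are illustrative; note that <code>vpc = true</code> is the older AWS provider syntax — newer provider versions use <code>domain = "vpc"</code> instead):</p>

```hcl
# Elastic IP for the NAT gateway
resource "aws_eip" "nat_eip" {
  vpc = true
}

# NAT gateway placed in the public subnet
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat_eip.id
  subnet_id     = aws_subnet.public_subnet.id

  # the internet gateway must exist before the NAT gateway is usable
  depends_on = [aws_internet_gateway.igw]
}
```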
<h3 id="heading-7-create-route-table-for-private-subnet-and-nat-gateway">7. Create Route Table for private subnet and NAT gateway</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729488310/8edbe62b-86ba-4bdc-bcb1-8f1270fef812.png" alt /></p>
<p>As per our needs, we want internet access in the private subnet, so we create one more route table that routes through the NAT gateway.</p>
<h3 id="heading-8-associate-route-table-to-the-private-subnet">8. Associate Route Table to the private subnet.</h3>
<p><img src="https://cdn-images-1.medium.com/max/800/1*zURZ9tnnDewJBftr53g8-g.png" alt /></p>
<p>The above code creates an association between the private subnet and the route table that has the NAT gateway.</p>
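<p>These two steps together look roughly like this (names are illustrative and assume the NAT gateway resource was called <code>nat</code>):</p>

```hcl
# Route table for the private subnet: outbound traffic goes via the NAT gateway
resource "aws_route_table" "private_rt" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id # outbound-only internet access
  }
}

# Bind it to the private subnet
resource "aws_route_table_association" "private_assoc" {
  subnet_id      = aws_subnet.private_subnet.id
  route_table_id = aws_route_table.private_rt.id
}
```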
<h4 id="heading-we-want-to-use-database-instance-which-is-in-a-private-subnet-thats-why-we-can-not-access-it-through-ssh-from-our-machineinternet-if-we-want-to-connect-and-manage-instances-in-private-subnet-we-have-to-create-one-more-instance-which-will-configure-and-manages-instances-in-vpc-that-instance-is-called-as-bastion-host-or-jump-server">We want to use database instance which is in a private subnet that’s why we can not access it through ssh from our machine/internet. If we want to connect and manage instances in private subnet we have to create one more instance which will configure and manages instances in VPC. that instance is called as <strong>bastion host or Jump Server</strong></h4>
<h3 id="heading-10create-key-pair">10.Create Key Pair 🔑</h3>
<p><img src="https://cdn-images-1.medium.com/max/800/0*nbwXV9GBTGtj6OTR.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729495085/0ed15ad4-67e7-49d2-9ca0-9eb41d757663.png" alt /></p>
<p>Here we use tls_private_key to generate a key, the local_file resource to store the key locally, and aws_key_pair to create the key pair in AWS and attach it.</p>
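<p>The key-pair setup in the screenshots can be sketched like this (key and file names are illustrative placeholders):</p>

```hcl
# Generate an RSA key pair locally
resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Save the private key to disk so we can ssh / WinSCP with it later
resource "local_file" "private_key" {
  content  = tls_private_key.ssh_key.private_key_pem
  filename = "mykey.pem"
}

# Register the public half with AWS as a key pair
resource "aws_key_pair" "deploy_key" {
  key_name   = "mykey"
  public_key = tls_private_key.ssh_key.public_key_openssh
}
```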
<h3 id="heading-9create-a-security-group-for-bastion-host-or-jump-server">9.Create a Security Group for Bastion Host or Jump Server</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729496785/8ff5c714-1349-4984-9ce6-ddfb5bab0d45.png" alt /></p>
<p>This security group is created for the bastion host and allows only SSH requests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729499790/694339ef-ca8d-404e-82e1-a4fbac1978a0.png" alt /></p>
<h3 id="heading-10create-bastion-host-or-jump-server-in-public-subnet"><strong>10.Create Bastion Host or Jump Server in Public subnet</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729502005/c0fe9b49-a51e-4d62-b33b-691ee1470de3.png" alt /></p>
<p>Create an instance for the bastion host in the public subnet, assign it a public IP, and attach the security group we created that allows only SSH.</p>
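<p>A sketch of the bastion instance (the AMI ID is a placeholder, and <code>bastion_sg</code> assumes the SSH-only security group from the previous step was named that way):</p>

```hcl
# Bastion / jump host in the public subnet, SSH-only access
resource "aws_instance" "bastion" {
  ami                         = "ami-0447a12f28fddb066" # placeholder Amazon Linux AMI
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public_subnet.id
  associate_public_ip_address = true
  key_name                    = aws_key_pair.deploy_key.key_name
  vpc_security_group_ids      = [aws_security_group.bastion_sg.id]

  tags = { Name = "bastion-host" }
}
```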
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729504681/a01e7b59-497a-4eb3-91e7-c599f955d0a2.png" alt /></p>
<h3 id="heading-11-create-a-security-group-which-allow-http-and-ssh-for-bastion-host">11. Create a Security group which allow HTTP and SSH for Bastion Host</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729507079/71efca24-c2ca-4eb2-85d2-8b85f8dd79ac.png" alt /></p>
<p>We have created one more security group that allows HTTP inbound traffic from anywhere, but SSH only from the bastion host’s security group.</p>
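<p>The interesting pattern here is an ingress rule whose source is another security group rather than a CIDR block. A sketch (names are illustrative, <code>bastion_sg</code> assumed from the earlier step):</p>

```hcl
# SG for the WordPress server: HTTP from anywhere, SSH only via the bastion
resource "aws_security_group" "web_sg" {
  vpc_id = aws_vpc.main_vpc.id

  ingress { # HTTP from anywhere
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress { # SSH restricted to the bastion host's security group
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion_sg.id]
  }

  egress { # allow all outbound
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

<p>The MySQL security group in a later step follows the same pattern, with port 3306 in place of 80.</p>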
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729511212/aa96489a-7345-4ddc-8ab8-7bc466bcde58.png" alt /></p>
<h3 id="heading-12-create-wordpress-instance-in-the-public-subnet">12. Create WordPress Instance in the public subnet</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729513536/53d52639-4070-40e0-a146-89ef6f1c4f75.png" alt /></p>
<p>Create an instance for your web portal in the public subnet with the security group allowing HTTP and SSH. Here I have used Amazon Linux and attached the key pair we created. Also assign a public IP so that clients can reach this server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729516317/84dcd9dc-a620-48ef-8b75-99a7d3aeda90.png" alt /></p>
<h3 id="heading-13-create-a-security-group-for-mysql-and-ssh-for-bastion-host">13. Create a Security Group for MySQL and SSH for Bastion Host</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729518214/e3420d80-f3a9-45e0-8c1f-058023f32bec.png" alt /></p>
<p>Our MySQL server will be in the private subnet. We want to open this server only for MySQL requests, and for SSH from the bastion host. Hence, allow ingress for MySQL, and SSH from the bastion host only.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*FQ5XG-42Gl3EPXiuwgWveQ.png" alt /></p>
<h3 id="heading-14-create-mysql-server-in-a-private-subnet">14. Create MYSQL Server in a private subnet</h3>
<p><img src="https://cdn-images-1.medium.com/max/800/1*3AJqUxujqIkLT8WDtHTExA.png" alt /></p>
<p>Create the MySQL server in the private subnet. Attach the key pair, and give it the security group we created for this instance.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*LveLykLjsME5kj4cEZ0-Fw.png" alt /></p>
<h3 id="heading-15-how-to-run-this">15. How to Run this?😍🏆</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729525950/dfde25f7-f931-48cf-ac07-042570beba26.png" alt /></p>
<p>Now sit back and have a coffee. It will take a little time to create all the infrastructure for you.</p>
<p>With our Infrastructure as Code, we are done here. Now we will check whether our setup is working or not.</p>
<p>We will check whether the instance in the private subnet has an internet connection. For that, we will transfer the key to the bastion host and connect to the MySQL instance.</p>
<p>To transfer the key, I am using the WinSCP tool.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729528786/6b3594d0-5c0a-4f21-b45d-103d5512a044.png" alt /></p>
<p>Now I have transferred the key to the bastion host. We will check whether we can connect to instances in the VPC through the bastion host or jump server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729532308/43275a3e-33a3-455f-b376-f69020590213.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685729533845/3bd0e6be-ee7e-45c4-afad-b981d4c96584.png" alt /></p>
<p>We can successfully ping the outside world from the MySQL instance in the private subnet.</p>
<p>Now, using the bastion host, you can configure the web servers and database servers we created above. You can install WordPress on the EC2 instance in the public subnet, and install the database service on the instance in the private subnet, which reaches the internet via the NAT gateway.</p>
<h4 id="heading-you-will-find-all-the-code-in-this-github-repositoryhttpsgithubcomshubhamrasalaws-terraform">You will find all the code in this <a target="_blank" href="https://github.com/ShubhamRasal/aws-terraform">GITHUB Repository</a>.</h4>
<h3 id="heading-conclusion">Conclusion</h3>
<p>We have successfully created Infrastructure as Code for VPC in AWS for classic VPC i.e <strong>VPC with Public and Private Subnets.</strong></p>
<blockquote>
<p><strong><em>I want to express my gratitude to</em></strong> <a target="_blank" href="https://www.linkedin.com/in/vimaldaga/"><strong><em>Mr. Vimal Daga</em></strong></a> <strong><em>sir for everything he has helped me achieve, and thank you for cheering and mentoring us…..</em></strong></p>
</blockquote>
<p>If you like my efforts then don’t forget to give feedback in the comment box. Feel free to ask questions and doubts in comments.</p>
<p>#LEARN — SHARE — GROW</p>
]]></content:encoded></item></channel></rss>