Why should you use Kubernetes?

Kubernetes-1
Introduction to Kubernetes. Why do we need Kubernetes? Use cases of Kubernetes.
Introduction 🤔
When you talk to IT folks about containers, I am sure the next topic of conversation will be container management and orchestration.
Hey, but what is a container?

Linux containers are technologies that allow you to package and isolate applications with their entire runtime environment: all of the files necessary to run. This makes it easy to move the contained application between environments (dev, staging/test, prod, etc.) while retaining full functionality. Containers help reduce conflicts between your development and operations teams by separating areas of responsibility.
Now next question then should be,
What is container orchestration?
Container orchestration automates the deployment, management, scaling, and networking of containers. The companies that need to deploy and manage hundreds or thousands of containers and hosts can benefit from container orchestration.
Container orchestration automates and manages tasks such as:
- Provisioning and deployment
- Configuration and scheduling
- Container availability
- Load balancing and traffic routing
- Scaling and removing containers

Now, knowing about containers and the need for orchestration leads to the next question
What is Kubernetes? 🤯
Kubernetes (also known as k8s or “Kube”) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you manage those clusters easily and efficiently.
Here’s how Dan Kohn, executive director of the Cloud Native Computing Foundation (CNCF), explained it in a podcast with Gordon Haff: “Containerization is this trend that’s taking over the world to allow people to run all kinds of different applications in a variety of different environments. When they do that, they need an orchestration solution in order to keep track of all of those containers and schedule them and orchestrate them. Kubernetes is an increasingly popular way to do that.”
History of Kubernetes
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.)
Google generates more than 2 billion container deployments a week, all powered by its internal platform, Borg. Borg was the predecessor to Kubernetes, and the lessons learned from developing Borg over the years became the primary influence behind much of Kubernetes technology.
Fun fact: The 7 spokes in the Kubernetes logo refer to the project’s original name, “Project Seven of Nine.”
Why do you need Kubernetes? 🔥
Containers are a good and easy way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start.
Without orchestration, you would have to SSH into a server and launch the container, then SSH into another server to launch another one, and whenever a container failed, check on it and launch a replacement by hand.
Wouldnāt it be easier if this behavior was handled by a system?
That’s where Kubernetes comes to the rescue! Kubernetes provides us with a framework to run distributed systems smoothly. It takes care of scaling and failover for your application, provides deployment patterns, and more.
Kubernetes provides you with:
- Service discovery and load balancing: Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
- Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
- Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources to the new container.
- Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
- Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
- Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
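Several of the capabilities above — desired state, bin packing, and self-healing — are expressed declaratively in a manifest. As a rough sketch (the name, labels, and image here are illustrative, not from any particular project), a minimal Deployment might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # desired state: keep three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # any container image
        ports:
        - containerPort: 80
        resources:               # requests inform automatic bin packing
          requests:
            cpu: "250m"
            memory: "128Mi"
        livenessProbe:           # user-defined health check for self-healing
          httpGet:
            path: /
            port: 80
```

You apply this with `kubectl apply -f deployment.yaml`, and Kubernetes continuously reconciles the actual state toward the declared one — restarting failed containers and rescheduling them onto healthy nodes without manual intervention.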
The Future of Kubernetes
According to the CNCF, Kubernetes is now the second-largest open source project in the world, just behind Linux.

In the survey conducted by CNCF, 58% of respondents are using Kubernetes in production, while 42% are evaluating it for future use. In comparison, 40% of enterprise companies (5000+) are running Kubernetes in production.
In production, 40% of respondents are running 2–5 clusters, 22% are running 1 cluster, 14% are running 6–10 clusters, and 13% are running more than 50 clusters (up from 9%).
As for which environment Kubernetes is being run in, 51% are using AWS (down from 57%), 37% on-premise servers (down from 51%), 32% Google Cloud Platform (down from 39%), 20% Microsoft Azure (down from 23%), 16% OpenStack (down from 22%), and 15% VMware (up from 1%). The graph below illustrates where respondents are running Kubernetes vs. where they’re deploying containers.

CASE STUDY: Spotify

Challenge
Spotify, an audio streaming platform launched in 2008, has grown to over 200 million monthly active users across the world. The company wanted to empower creators and enable a really immersive listening experience for all of its consumers. Spotify was an early adopter of microservices and Docker, and had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community.”
Solution
To solve this challenge, Spotify adopted Kubernetes. The migration, which happened in parallel with Helios still running, went very smoothly, as “Kubernetes fit very nicely as a complement and it replaced Helios.” Spotify benefited from added velocity and reduced cost, and also aligned with the rest of the industry on best practices and tools.
Impact
The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
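For context, the kind of autoscaling Wen describes is typically configured through a HorizontalPodAutoscaler, which grows and shrinks a Deployment based on observed load. A minimal sketch (the names and thresholds are illustrative, not Spotify’s actual configuration):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # add replicas when average CPU exceeds 60%
```

Combined with the scheduler’s bin packing, this keeps node utilization high while still absorbing traffic spikes automatically.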
Thank you.
About the writer:
Shubham loves technology and challenges, and is open to learning and reinventing himself. He loves to share his knowledge and is passionate about constant improvement.
He writes blogs about Cloud Computing, Automation, DevOps, AWS, and Infrastructure as Code.
Visit his Medium home page to read more insights from him.






