Why Kubernetes Holds the Key to the Future of the Cloud

Written by Sunil Chavan, Senior Director, Solution Sales, Asia Pacific, Hitachi Data Systems

This has been a busy year. The increasing pace of technological change has put significant pressure on businesses and their IT divisions. Nonetheless, new technologies also bring benefits: access to new capabilities that can transform a company's competitive advantage. One of them is Kubernetes.

Before we go any deeper, let's take a brief look at the history of IT.

Over the past two decades, we have witnessed enormous technological change. The distributed server paradigm evolved into web-based architecture, which then developed into service-oriented architecture before finally moving to the cloud.

The cloud revolution, driven by virtualization and widespread adoption, has transformed the modern data center. But it doesn't stop there. In fact, this unchecked proliferation has recreated some of the same challenges that cloud architecture set out to address, such as the floor space and high costs required to maintain server storage.

A machine may be virtual, but setting it up is still a day-to-day job. So businesses strive for more flexible and cost-effective ways to build, deploy and manage all their applications, which has sparked interest in another idea that is still relatively new and exciting... containers!

Containers Help Enterprises Get Greater Benefits from Their IT

Containers serve the same function as virtual machines, providing isolated places for applications to run. The biggest difference is that container technology can run applications using only a fraction of the compute footprint required by virtual machines. This is because a container does not need to run a complete instance or image of an operating system, with all the attendant kernels, drivers and libraries.

What's more, besides taking up only a fraction of the space, additional containers can be deployed in a fraction of a second, while virtual machines can take minutes or even longer.

It really works. Google, for example, launches more than two billion containers every week to run its cloud services. Many of Google's popular services, such as Gmail, Search, Apps and Maps, run in internally managed containers; Google built on this experience to create Kubernetes, an open source container cluster management framework it released in 2014.

Kubernetes works closely with Docker, one of the technologies that made containers popular in the cloud world. While Docker provides lifecycle management for individual containers, Kubernetes takes container technology to the next level by providing orchestration and the ability to manage clusters.
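As a minimal sketch of that division of labour (the names and image below are hypothetical placeholders, not from the article): Docker builds and runs a single container image, while a Kubernetes Deployment manifest declares how many replicas of that container the cluster should keep running.

```yaml
# Illustrative only: a minimal Kubernetes Deployment.
# "example/web:1.0" is a placeholder for an image built and run with Docker.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0
        ports:
        - containerPort: 8080
```

Docker handles the image and the individual container; applying a manifest like this (for example with `kubectl apply -f`) asks the cluster to schedule the replicas across nodes and restart them if they fail.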

HDS Is Part of the Container Movement

Google is not alone. In July 2015, Hitachi declared its support for Kubernetes in an infrastructure solution centered on the Hitachi Unified Compute Platform (UCP). This is good news for customers, as it means a proven enterprise-grade private cloud infrastructure is available to both developers and customers. It can also help developers and customers set up and run container-based applications with a full microservices architecture.

Kubernetes and VMware working side-by-side on converged platforms, such as UCP, offer enterprises solutions for both container-based applications and traditional virtualized workloads.

One of the biggest benefits of having container management handled by Kubernetes is the ability to manage and allocate resources on a host or cluster dynamically, with fault tolerance to ensure workload reliability. Kubernetes allows resource definitions and labels on nodes, letting the user choose and control where a workload runs.

Labeling also allows pods to be placed on different tiers or hardware configurations. For example, labeling a set of production nodes as higher-tier hardware allows Kubernetes to select and schedule the pods and services associated with those labels. This enables Kubernetes to divide its workload by label, ensuring all resources are used according to users' needs.
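A minimal sketch of how such label-based placement might look (the node name, label key and values, and image are all hypothetical, chosen only to illustrate the mechanism):

```yaml
# Hypothetical example: first label a node, e.g.
#   kubectl label nodes prod-node-1 tier=production-high
# then pin a pod to nodes carrying that label via nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: billing-service
spec:
  nodeSelector:
    tier: production-high     # only schedule on nodes labeled tier=production-high
  containers:
  - name: billing
    image: example/billing:1.0   # placeholder image name
```

The scheduler will only place this pod on nodes whose labels match the `nodeSelector`, which is how workloads can be steered toward a particular hardware tier.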

UCP and Kubernetes, An Interesting Combination

The combination of UCP and Kubernetes container management offers customers several benefits, including simpler management of physical and virtual infrastructure with automated provisioning. It also makes it possible to scale based on workload requirements and to easily deploy Kubernetes container clusters to new environments.

UCP can be easily scaled from 12 to 128 nodes, giving Kubernetes fast capacity growth for scheduling nodes and managing hosted workloads. Kubernetes manages the deployment, scaling and monitoring of hosted services, and runs side by side on the same platform as virtualized and bare-metal workloads.

Kubernetes is good news for the developer community and for IT administrators working to speed up application deployment. And it will only get better. HDS is considering advances and new features for this solution, including hybrid configurations with GKE and AWS cloud services, streamlined and fully automated Kubernetes cluster management within UCP, and a unified repository registry.

It seems that once every five years or so, the IT industry witnesses a major technology shift. With so many applications competing for I/O resources, I believe that in the next year or so, converged solutions combined with Kubernetes will be seen as a viable alternative to legacy systems that were not developed with containers in mind.

And that makes perfect sense!
