The need for remote work solutions accelerated the shift to the cloud – and for some organizations, that meant choosing short-term or patchwork solutions jury-rigged out of existing legacy technologies just to get through the crisis. But IT professionals recognize that making applications effective in this new multi-cloud environment requires redesigning enterprise applications from the ground up.
Now, as the dust begins to settle, many companies are looking at longer-term solutions for cloud operations. Kubernetes, which abstracts the underlying cloud infrastructure and lets customers control where their applications and data live without constraint, is at the center of most refitting strategies.
Why the need for a new infrastructure?
Organizations move to containerized applications on Kubernetes to free themselves from vendor lock-in, simplify IT management, increase IT agility, and take advantage of IoT, big data, and AI/ML. In fact, adoption of Kubernetes among large enterprises grew from 27% in 2018 to 48% in 2020, and Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production.
Where legacy tech and other solutions are failing
Operating in the cloud changes everything. Kubernetes is now the operating system of the cloud. Its powerful orchestration and automation require rethinking every element of the cloud tech stack: security, storage, compute, application design, testing, deployment, and more. Just about everything that’s been built over the last 50 years of static, private data center IT must be questioned.
The cloud requires a reinvention of the underlying systems built to store, protect, move, replicate, and distribute data. Here’s one example: Kubernetes automates container operations, eliminating the difficult, time-consuming manual processes that accompany application deployment and scaling. This is all good, but in doing so, Kubernetes can kill and restart any service – including nodes, containers, app instances, and even whole clusters. When a container is killed, any data stored inside it is lost. That wasn’t a problem while Kubernetes was supporting mostly stateless applications, but enterprise applications are often stateful – transaction processing, for example – and data loss there could be catastrophic. Thankfully, container-native storage solutions offer data persistence, ensuring stateful data remains available even if the service accessing it restarts.
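The distinction can be shown with a toy sketch (plain Python, not a real Kubernetes API; the class and names here are invented for illustration): a restart wipes whatever lives inside the container, so state survives only if it was written to storage that outlives the container, as a persistent volume does.

```python
# Toy model (not a real Kubernetes API): what a container restart means
# for in-container state vs. state persisted to an external volume.

class Service:
    def __init__(self, volume=None):
        # 'volume' stands in for a persistent volume that outlives the pod;
        # None models purely in-container (ephemeral) storage.
        self.volume = volume
        self.memory = {}  # data held inside the container

    def write(self, key, value):
        self.memory[key] = value
        if self.volume is not None:
            self.volume[key] = value  # also persist outside the container

    def restart(self):
        # Kubernetes killing and rescheduling the pod discards the
        # container's memory and filesystem; the volume survives.
        self.memory = dict(self.volume) if self.volume is not None else {}

ephemeral = Service()
ephemeral.write("order-42", "paid")
ephemeral.restart()
print(ephemeral.memory.get("order-42"))   # None: the transaction is gone

pv = {}                                   # container-native persistent store
durable = Service(volume=pv)
durable.write("order-42", "paid")
durable.restart()
print(durable.memory.get("order-42"))     # "paid": state survived the restart
```

The same record is written in both cases; only the backing storage differs, which is why stateful enterprise workloads need container-native persistence rather than the container's own filesystem.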
But Kubernetes presents another issue for enterprise data: applications move far more easily than data. Applications are lightweight compared with large databases and data lakes, and moving large data stores takes hours or days of time and bandwidth. Moving an application without its accompanying data isn’t productive, so the agility promised by Kubernetes is lost. Kubernetes adoption is rising rapidly, but this data gravity creates friction that delays its widespread use for enterprise applications.
What’s needed is a data services platform designed from the ground up to support Kubernetes, one that’s container-native and microservices-based to allow the underlying data as much freedom as the new cloud applications already enjoy.
The data mobility promise
Container-native data services enable enterprises to fully leverage the power of the cloud for both application and data agility. This is data storage and management reinvented.
Data storage is being transformed so that data can be addressed by name rather than by location, enabling effortless data mobility: data sets of any size can be moved any distance in seconds, not hours or days. Moreover, recording the time of each write in this new metadata model allows data to be instantly restored to any previous point in time without complicated backups or snapshots. Users and developers simply request data by name, at a specific location or time, and it’s delivered instantly. Some have equated this to the invention of a URL for enterprise data.
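A minimal sketch of that metadata model (illustrative Python with invented names, not any vendor’s actual API): every write is recorded under the data’s name with a timestamp, so a read can ask for a name “as of” any moment and receive the version that was current then – no separate snapshot machinery required.

```python
import bisect

class NamedDataStore:
    """Toy model of name-addressed, time-aware storage: data is requested
    by name, and each write keeps its timestamp, so any past version can
    be read back directly."""

    def __init__(self):
        self._history = {}  # name -> parallel sorted lists of times/values

    def write(self, name, value, ts):
        times, values = self._history.setdefault(name, ([], []))
        i = bisect.bisect_right(times, ts)   # keep writes in time order
        times.insert(i, ts)
        values.insert(i, value)

    def read(self, name, as_of=None):
        times, values = self._history.get(name, ([], []))
        if not times:
            return None
        if as_of is None:
            return values[-1]                # latest version
        i = bisect.bisect_right(times, as_of)
        return values[i - 1] if i else None  # newest write at/before as_of

store = NamedDataStore()
store.write("customers.db", "v1", ts=100)
store.write("customers.db", "v2", ts=200)
print(store.read("customers.db"))             # "v2": current version
print(store.read("customers.db", as_of=150))  # "v1": point-in-time read
```

The point of the sketch is that “restore” becomes an ordinary read with a time parameter, which is the property the article attributes to the new metadata model.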
Transformation on the horizon
As legacy applications are being phased out, a new type of infrastructure is needed to deal with data in a world where Kubernetes has become the standard. Containers are attractive, in part, because they give organizations the run-anywhere, dynamic adaptability and flexibility they need to use whichever vendors they want and to operate well in the cloud.
Yet managing data in the cloud is profoundly different from managing data in a data center, and the tools and approaches of the past no longer apply. Today’s container-native data technologies will accelerate your organization’s transformation by rebuilding your data infrastructure from the ground up.
Written by Mike Wall.