Kubernetes faces multiple challenges in addressing data, storage, and the distribution of data for containerized apps in the cloud – specifically, how to merge stateful app data with stateless app functions so that containerized apps can operate at the edge without performance restrictions or scenario limitations.
While Kubernetes is proving to be the dominant player in orchestrating containerized apps, the technology lacks robust functionality for addressing data and storage needs and the distribution of data to and from containerized apps. This is where Kmesh comes in – by addressing the data management layer for containerized apps in the same way Kubernetes addresses the app management layer.
Bringing Stateful Characteristics to Stateless Kubernetes
Kubernetes can be limiting because its workloads have no state. They can run functions and apps, but they lack the stateful data required to understand the context of each user interaction. This limits the interactivity and persistence of Kubernetes apps. To understand why overcoming these limitations matters, consider one of the most popular “Edge Computing” use cases – the connected car.
A connected car can send data to an edge compute node during travel, and as the car travels further, it can send additional data to different edge compute nodes. That data transfer from the car, to the edge, and eventually to the car maker for analysis is fine for a stateless app. But if the car sends messages to different edge compute nodes as it moves, and those messages need to be synchronized in real time to understand the car’s health right now, stateless Kubernetes cannot help. This is where Kmesh adds value. Kmesh synchronizes the data at each edge compute node (plus data from other sources) and informs the Kubernetes-managed apps so that they can perform real-time functions. An important part of the Kmesh functionality is its storage-agnostic architecture. Kmesh is merely a software layer, so it can pull data from any type of storage, connecting all edge compute nodes with ease.
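To make the connected-car scenario concrete, the sketch below shows one simple way per-node telemetry could be merged into a single up-to-date view of a car’s health. The class and function names (`EdgeNode`, `merged_view`) and the last-writer-wins merge rule are illustrative assumptions for this article, not Kmesh’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each edge node records the telemetry messages it
# receives from the car, and a mesh layer merges the per-node views into
# one real-time picture. Not Kmesh's actual interface.

@dataclass
class EdgeNode:
    name: str
    # metric name -> (timestamp, value) for the latest reading seen here
    records: dict = field(default_factory=dict)

    def ingest(self, metric: str, timestamp: float, value: float) -> None:
        self.records[metric] = (timestamp, value)

def merged_view(nodes):
    """Last-writer-wins merge across nodes: the newest timestamp for each
    metric wins, regardless of which edge node recorded it."""
    view = {}
    for node in nodes:
        for metric, (ts, value) in node.records.items():
            if metric not in view or ts > view[metric][0]:
                view[metric] = (ts, value)
    return {metric: value for metric, (ts, value) in view.items()}

a, b = EdgeNode("edge-a"), EdgeNode("edge-b")
a.ingest("battery_temp", 100.0, 41.5)   # older reading, seen by node A
b.ingest("battery_temp", 105.0, 43.0)   # newer reading, seen by node B
print(merged_view([a, b]))              # the newer reading from B wins
```

The point of the sketch is only the merge step: the car talks to whichever node is nearest, and the synchronization layer reconciles those partial views.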
Kmesh Delivers Lustre-Level Performance for Cloud Apps
If a data mobility service like Kmesh is to succeed in an edge computing environment, it must offer the performance required by compute-intensive applications. For this reason, Kmesh was originally designed as Lustre-as-a-service for the cloud. Lustre is, of course, the file system of choice for most of the world’s supercomputers.
By basing the Kmesh software on Lustre, the service brings parallel file system technology – high throughput and low latency – to edge environments. The Kmesh software resides within each network node, where it performs superfast reading and writing of data guided by metadata. This means it can interact with an underlying file system and be used to sync data across nodes at speeds 30 times faster than EFS. And remember, it can do this across multiple clouds and multiple cloud service providers.
Why Latency Does Not Impact Kmesh across Multiple Networks
Data once lived in centralized data lakes, but today a transformation is taking place from central data lakes to distributed data ponds. That’s because data can no longer stay centralized if modern apps and edge apps are to function as promised.
Kmesh makes distributed data ponds appear as a single centralized data lake. What actually happens in the background is that Kmesh creates a single global namespace for all the distributed data. The data looks like a centralized lake, but it is never duplicated. Metadata makes it happen, and metadata dictates how data moves around. The SaaS portion of Kmesh is a user dashboard where enterprise IT establishes policies about where data resides and points users to specific data locations.
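The global-namespace idea can be sketched in a few lines: a metadata catalog maps each logical path to exactly one physical location, so clients see one lake while the bytes stay in their ponds. All names here (`GlobalNamespace`, `register`, `locate`) are illustrative assumptions for this article, not Kmesh’s real interface.

```python
# Hypothetical sketch: a metadata catalog implements a single global
# namespace over distributed data ponds. Each logical path resolves to
# one physical location, so data appears centralized without being copied.

class GlobalNamespace:
    def __init__(self):
        # logical path -> (cloud provider, region, physical object key)
        self.catalog = {}

    def register(self, logical_path, provider, region, key):
        """Record where one piece of data physically lives (no duplication)."""
        self.catalog[logical_path] = (provider, region, key)

    def locate(self, logical_path):
        """Resolve a logical path to its single physical location."""
        return self.catalog[logical_path]

ns = GlobalNamespace()
ns.register("/telemetry/car42/2024-01.parquet",
            "aws", "us-east-1", "pond-a/car42-2024-01.parquet")
print(ns.locate("/telemetry/car42/2024-01.parquet"))
```

A placement policy from the dashboard would, in this picture, simply decide which provider and region each `register` call records – the namespace itself never moves or copies the data.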
How Current Kubernetes Users Leverage Kmesh