This article was originally featured in Channel Futures
As cloud momentum storms ahead, 2020 will see the industry introduce new cloud-related platforms and software while developing a more standard approach to usage. We can also expect service mesh and serverless computing technologies to take off in exciting directions.
Containers will become the de facto software packaging model, with application modernization taking an accelerated path
Container platforms have become an essential factor in the hybrid cloud landscape, accelerating multi-cloud adoption in enterprises. According to a Portworx and Aqua Security survey, over 87% of respondents said they were now running container technologies, a remarkable increase from just over half (55%) two years ago.
At the same time, existing networks and tools continue to work unabated. As cloud adoption picks up speed, the current era of coexistence between traditional and emerging technologies will face increasing operational and business challenges that must be addressed through a continuous transformation process.
Kubernetes (K8s) will play a vital role in VMware's upcoming Project Pacific, a re-architecture of vSphere that embeds Kubernetes directly into the platform. This release will completely blur the line between what is virtualized and what is running in a container, which helps when containerizing legacy applications: although these workloads are not cloud-native, they are deployed and managed in the same fashion as K8s workloads, within one unified domain of containers and K8s.
In 2020, developers and operations teams will adopt a new breed of container runtime: Kata Containers (which evolved from Intel's Clear Containers). These could have a disruptive impact on the virtual machine concepts many DevOps teams are used to, because each container runs inside its own lightweight VM with its own kernel rather than sharing the host's Linux kernel.
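As a concrete sketch of how such a runtime would be selected, Kubernetes exposes a RuntimeClass object that pods can opt into. The class and handler name ("kata") and the pod below are illustrative, and assume a cluster where the Kata runtime is already installed and registered under that handler:

```yaml
# Register the Kata runtime as a selectable RuntimeClass
# (v1beta1 was the API version current in the 2019/2020 timeframe).
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata        # illustrative name
handler: kata       # must match the handler configured on the nodes
---
# A pod that asks to run inside a Kata lightweight VM
# instead of a regular shared-kernel container.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: nginx
```

Pods that omit `runtimeClassName` keep using the default shared-kernel runtime, so both models can coexist on the same cluster.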
Serverless computing will gain more momentum, as it's brought into on-premise platforms like OpenShift, Google Anthos, and others via Knative
According to RightScale's 2018 State of the Cloud report, serverless was the fastest-growing cloud service model, with an annual growth rate of 75%. In 2020, we will see a move towards maturing best practices, security solutions, and tooling as more communications providers look to implement the technology.
In 2019, we saw platforms like OpenShift and Google Anthos adopt Knative to connect microservices and offer fully managed serverless workloads. These solutions, designed to address platform-as-a-service needs for developers as well as hybrid cloud deployments for enterprises, are recognized as a way to standardize Kubernetes environments.
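To illustrate the developer experience these platforms aim for, here is a minimal Knative Service sketch. The container image is Knative's public hello-world sample; the service name and environment variable are illustrative, and the manifest assumes a cluster with Knative Serving installed:

```yaml
# A Knative Service: Knative builds the route, revisioning, and
# autoscaling (including scale-to-zero) around this single object.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello        # illustrative name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "World"
```

The appeal is that the developer declares only the container and its configuration; the platform handles scaling the workload up on demand and back down to zero when idle, which is what makes the model feel "serverless" on ordinary K8s infrastructure.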
However, if pushed to make a prediction, I expect AWS Outposts will be the chief disruptor of the industry in 2020, as it blurs the lines between on-cloud and on-premise workloads and services. Organizations must look beyond K8s alone to scale out: DevOps teams need on-premise infrastructure capable of running and managing serverless at scale, and most organizations cannot compete with AWS or other public cloud vendors on this front. Serverless on-premise remains a riddle to be solved.
Istio and service mesh will become a standard approach to running cloud-native apps and microservices
Cloud technologies are becoming part of every IT environment – from operations to orchestration and beyond. With this, hybrid ecosystems are increasingly becoming the norm – with traditional applications, cloud-native apps, and VNFs running together. In fact, according to CNCF, around 62% of companies are relying on cloud-native technologies for more than half of their new applications.
Running cloud-native applications at scale can be highly complex, especially when thousands of microservices are involved. This is why securing communications between microservices is crucial. However, controlling QoS and ensuring predictable performance is challenging, and monitoring, debugging, and observing metrics across thousands of microservices are also very difficult to achieve.
Here is where Istio, an open-source project from IBM, Google, and Lyft, comes in. It provides a single, standard service mesh on top of any cloud-native application running in K8s. Istio routes traffic to and from containers and secures container-to-container communications while providing all the required metrics and monitoring interfaces. Istio does all of this using technology from Lyft called Envoy, a small, high-performance proxy installed as a sidecar in every K8s pod. Envoy is already a Cloud Native Computing Foundation (CNCF) project and is the core of other Istio-like implementations, such as AWS App Mesh.
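As an illustrative sketch of the routing Istio layers on top of K8s, the pair of objects below splits traffic between two versions of a "reviews" service (the name is borrowed from Istio's sample application, and the split itself is hypothetical). It assumes an Istio-enabled cluster with Envoy sidecars injected into the pods:

```yaml
# Route 90% of traffic to v1 and 10% to v2 of the reviews service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
---
# Define the subsets by pod label so the VirtualService can target them.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

Because the Envoy sidecars enforce these rules on every pod's traffic, the same mechanism also carries the mesh's security policies and emits the per-service metrics described above, without any change to application code.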
The challenge now is for the industry to unite behind a single approach to service mesh and service mesh interfacing. Istio is just one service mesh option, and each vendor's mesh behaves differently. Companies like Red Hat, IBM, VMware, and Google have united behind Istio; others, such as AWS, have not.
To address this goal, the Service Mesh Interface (SMI) was born, aiming to provide an open standard that allows multi-cloud, open service mesh operation in the K8s domain. However, not everyone is rushing to adopt SMI, and it will be exciting to watch how the industry evolves.