Edge clouds are adaptable and expandable. Unlike fixed hardware appliances, an edge cloud can absorb abrupt workload spikes, surges in user activity, and rushes of user sessions thanks to its software-driven, cloud-native, AI/ML-based service assurance. Edge cloud service platforms employ CI/CD pipelines to scale while testing, validating, and deploying new applications or network services for enterprises, reducing operating expenses (OPEX) and lowering capital expenditure (CAPEX) on hardware resources.
Telco edge is inherently well suited for 5G mobility since it can readily scale vertically, horizontally, and across numerous telco edge locations, as illustrated by Figure 1. But we should not dismiss the possibility that wireline use cases such as fiber and DOCSIS might also benefit from edge-based services, notably low-latency financial connectivity or residential services for competitive gaming over copper or passive optical networks (PON).
Management, service delivery, and coordination across edge sites remain a daunting challenge.
How does end-to-end service orchestration impact edge cloud?
Without orchestration/automation, optimizing network resource management, service experience, and performance/reliability through edge computing and edge cloud wouldn’t be possible.
End-to-end service orchestration (E2ESO) facilitates cross-domain resource management, automation, and chaining. Edge orchestration refers to a platform that manages, automates, and coordinates the flow and arrangement of resources across various kinds of devices (MEC, COTS), infrastructure (fiber, data center), and transport domains at the network's edge.
In this context of multi-domain or end-to-end service orchestration, an edge orchestrator is simply a resource management tool for the network's edge. This function may already exist at lower levels and be granted orchestration and management authority by the central nervous system of the E2ESO architecture. Well-known cloud-native technologies already fill this role: Kubernetes for orchestration, often paired with Ansible for configuration management. Kubernetes could serve as an xNF orchestration platform capable of managing virtualized (VNF), cloud-native (CNF), and even physical (PNF) network functions, supported by a management framework such as StarlingX.
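As a rough illustration of this xNF split, the sketch below maps each function type to the platform that would typically manage it. The class, function, and platform names are illustrative assumptions for this sketch, not a real orchestration API.

```python
from dataclasses import dataclass

# Hypothetical catalogue entry for an xNF; the field names are
# assumptions of this sketch, not a standard data model.
@dataclass
class NetworkFunction:
    name: str
    kind: str  # "VNF", "CNF", or "PNF"

def orchestration_target(nf: NetworkFunction) -> str:
    """Map each xNF type to the platform that would typically manage it."""
    targets = {
        "CNF": "kubernetes",           # containers scheduled as pods
        "VNF": "kubernetes+kubevirt",  # VMs driven through Kubernetes CRDs
        "PNF": "ansible",              # config pushed to fixed hardware
    }
    if nf.kind not in targets:
        raise ValueError(f"unknown xNF kind: {nf.kind}")
    return targets[nf.kind]

print(orchestration_target(NetworkFunction("upf", "CNF")))       # kubernetes
print(orchestration_target(NetworkFunction("cell-site", "PNF"))) # ansible
```

In practice the VNF row is one design choice among several; a VIM such as OpenStack could manage VNFs instead of a KubeVirt-style CRD layer.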
Edge orchestrators might be strategically centralized or geographically distributed. It makes more sense, though, to see service intelligence as distributed, just like the applications and network services it is meant to support, monitor, and manage. In a distributed edge orchestration approach, the central E2ESO would be responsible for updating, monitoring, provisioning, and controlling the edge orchestrators. Even so, edge orchestrators must retain autonomy and the ability to make decisions if the central orchestration brain becomes unreliable or unreachable, which in turn requires edge assurance.
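That autonomy requirement can be sketched as a heartbeat check: the edge orchestrator follows central management while the E2ESO keeps acknowledging it, and falls back to local decision-making once it goes silent. The class name, mode strings, and timeout below are assumptions of this sketch, not part of any product.

```python
import time

HEARTBEAT_TIMEOUT_S = 30.0  # illustrative threshold, not a standard value

class EdgeOrchestrator:
    """Toy model of edge autonomy: stay centrally managed while the
    E2ESO's heartbeats arrive, and switch to autonomous local decisions
    once they stop."""

    def __init__(self, now=None):
        self.last_central_ack = time.monotonic() if now is None else now
        self.mode = "centrally-managed"

    def on_central_ack(self, now=None):
        # The central E2ESO is reachable again: resume central management.
        self.last_central_ack = time.monotonic() if now is None else now
        self.mode = "centrally-managed"

    def tick(self, now=None):
        # Called periodically; flips the mode if the central brain is silent.
        now = time.monotonic() if now is None else now
        if now - self.last_central_ack > HEARTBEAT_TIMEOUT_S:
            self.mode = "autonomous"
        return self.mode
```

A real implementation would also reconcile state on reconnection, since decisions taken autonomously must be reported back to the central E2ESO.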
Edge cloud and orchestration capabilities aim to make networks more intelligent and responsive so they can manage near-real-time (10 ms to 500 ms) or real-time (10 ms and below) events, traffic, service degradation, scaling, service/function relocation, and any other needs at the edge. This will increase resource deployment efficiency and allow near-instantaneous service provision via service assurance, data intelligence, and AI/ML, evolving towards zero-touch and intent-based networks.
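The two latency tiers above translate into a simple classification of control-loop deadlines; a minimal sketch, using only the thresholds stated in the text:

```python
def latency_tier(deadline_ms: float) -> str:
    """Classify a control-loop deadline: real-time at 10 ms and below,
    near-real-time from 10 ms to 500 ms, non-real-time beyond that."""
    if deadline_ms <= 10:
        return "real-time"
    if deadline_ms <= 500:
        return "near-real-time"
    return "non-real-time"

print(latency_tier(5))    # real-time
print(latency_tier(100))  # near-real-time
```

These tiers mirror how the O-RAN architecture splits responsibilities between real-time and near-real-time controllers.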
How much improvement can we expect from edge cloud and related solutions?
There is natural market hype around millisecond-latency use cases, which overpromises outcomes for solutions such as augmented and virtual reality, autonomous vehicles, drones, and mission-critical IoT. The truth is that edge cloud and associated compute capabilities would boost responsiveness by both increasing throughput and lowering latency, but other variables will always contribute significantly to the result. Key developments include the virtualization of the radio access network (RAN) through vRAN or Open RAN, backed by open fronthaul and midhaul interfaces. Nor should one neglect FPGA-accelerated cards, which enable devices to run machine learning and AI at the edge rather than depending solely on the central cloud, potentially reducing latency, improving availability, lowering costs, and aiding security.
Wireless workloads that accelerate 5G/4G base-station tasks, such as the vDU/O-DU or vCU/O-CU, are often highlighted as an example. FPGA-based AI/ML inference can likewise speed up applications such as the RAN SMO (Service Management & Orchestration) and the Near-Real-Time RAN Intelligent Controller (RIC).
The main benefit of edge cloud and edge computing is that less data is transmitted to the central cloud for processing and storage, which gaming, industrial IoT, and OTT/telco-based CDNs all demand. Telecom operators save money by reducing backhaul usage. And since organizations may no longer pay the same for connectivity and cloud resources regardless of how far data travels, edge-based services may become more valuable.
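The backhaul saving is simple arithmetic: whatever fraction of raw data is processed and retained at the edge never crosses the backhaul. A minimal sketch, where the function name and the `edge_fraction` parameter are illustrative assumptions rather than standard operator metrics:

```python
def backhaul_after_edge(raw_gb_per_day: float, edge_fraction: float) -> float:
    """Estimate daily backhaul volume when a fraction of raw data is
    processed and kept at the edge instead of being sent to the core."""
    if not 0.0 <= edge_fraction <= 1.0:
        raise ValueError("edge_fraction must be in [0, 1]")
    return raw_gb_per_day * (1.0 - edge_fraction)

# e.g. a 100 GB/day video feed with 90% summarized locally
print(round(backhaul_after_edge(100.0, 0.9), 1))  # 10.0 GB/day to the core
```

Even this toy model shows why edge filtering matters most for high-volume, low-value-per-byte feeds such as video analytics or sensor telemetry.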
Can we drive edge-based solutions forward without AI/ML based service assurance?
Multi-domain or end-to-end service assurance (MDSA or E2ESA) in telecommunications is one of the most important application areas, especially for analytics, automation, data collection, and AI-based capabilities. Service assurance provides critical incentives for self-healing, zero-touch, and intent-based networks, improves operational capacity, reduces churn, and naturally fosters dynamic and proactive monitoring, observability, and remediation across domains.
Operators and hyperscalers should become more interested in edge assurance in the coming years, and it has several implications. Edge assurance gives operators visibility across domains and edge footprints, so that increasingly distributed, granular edge-cloud services with a hybrid composition of xNFs (physical, virtual, or cloud-native network functions) can be proactively monitored to deliver high-quality experience, self-healing, and intent-oriented services. This in turn calls for local, ephemeral data storage and locally generated analytics to provide a faster closed loop with the edge orchestration layer.
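The local closed loop described above can be sketched as: hold a short, ephemeral window of locally collected metrics, detect degradation, and emit a remediation intent for the edge orchestrator. The threshold, metric, and action names below are assumptions of this sketch, not a product interface.

```python
from collections import deque

class EdgeAssuranceLoop:
    """Toy closed loop for edge assurance: ephemeral local metric storage
    plus locally generated analytics driving an orchestration intent."""

    def __init__(self, window=5, loss_threshold=0.02):
        self.samples = deque(maxlen=window)  # ephemeral local storage
        self.loss_threshold = loss_threshold

    def observe(self, packet_loss: float):
        self.samples.append(packet_loss)

    def remediation(self):
        # Average over the window; above threshold, ask the edge
        # orchestrator to scale out.
        if not self.samples:
            return None
        avg = sum(self.samples) / len(self.samples)
        if avg > self.loss_threshold:
            return {"action": "scale-out", "avg_loss": round(avg, 4)}
        return None
```

Because both the data and the decision stay local, the loop keeps working even when the central E2ESO is unreachable, which is exactly the autonomy requirement raised earlier.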
Operators' increasing adoption of CNCF-oriented cloud-native principles, tools, and methodologies, along with cross-domain automation capabilities, drives this need. Their growing experimentation with, and the buzz around, capabilities such as vRAN, Open RAN, mobile private networks, and network slicing signals their ambition to play a major role in edge cloud and edge computing, likely alongside hyperscalers.
Enterprise adoption of public and hybrid clouds has grown tremendously, so telcos that want to provide edge services should pay close attention to the lessons cloud providers have learned from their success in this area. As end-to-end solutions become more volatile, dispersed, and disaggregated, with many edge locations, multi-cloud services, and varied last-mile connectivity technologies, CSPs and enterprises must ensure an agile solution posture.
Notwithstanding some indications to the contrary, telcos should not see hyperscalers as rivals; rather, they should seize the chance to build winning recipes, success stories, and novel, compelling solutions.
However, telcos' edge cloud ambitions must be approached with cross-domain and even cross-realm implications in mind. This requires a true partner who is willing to share the risks while bringing the ingredients needed to ensure both vision and success.
to discover how we collaborate with hyperscalers, build truly automated networks across realms and domains, and help telcos move from vision to execution.