
Using the Innovative Optical and Wireless Network (IOWN) reference cases for core, edge, and cloud deployments

Posted by Marbenz Antonio on September 19, 2022


To create attractive use cases for AI Integrated Communications (AIC) and Cyber-Physical Systems (CPS), the Innovative Optical and Wireless Network (IOWN) Global Forum is building the next generation of data-centric infrastructure over all-photonics networks (APN). In the age of the green transformation, the IOWN Global Forum's strategy will have a significant impact on the media, entertainment, and telecom sectors, among others.

RDMA over APN and Area management security use case PoC

The goal of this post is to show how RDMA over APN enables the forum's use cases to exchange data over long distances efficiently. The area management security use case in CPS, which aggregates and ingests huge volumes of sensor data, is illustrated in the following PoC based on the IOWN reference implementation model, with RDMA over APN used to boost data velocity for AI inference. This reference implementation model helps AI/ML users increase end-to-end data processing speed with IOWN technology, even though the existing Open Data Hub framework does not include a solution to boost data velocity at the ingestion step of the data pipeline. With data processing units (DPUs) or RDMA-capable network interface cards (NICs), which are required for this PoC, OpenShift supports GPUDirect RDMA and RoCEv2 (RDMA over Converged Ethernet v2).

Figure 1: RDMA over APN in the context of the area management security use case
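To make the ingestion path above more concrete, here is a minimal Python sketch (using the kubernetes client library) of how a data-ingestion pod might request both a GPU and an RDMA device so that GPUDirect RDMA over RoCEv2 can be used on OpenShift. The namespace, image, and RDMA resource name are illustrative assumptions, not part of the IOWN reference implementation; the exact resource name depends on the RDMA device plugin configured on the cluster.

```python
# Hypothetical sketch: request a GPU plus an RDMA device for a data-ingestion
# pod on OpenShift, so the ingestion step can use GPUDirect RDMA over RoCEv2.
# The resource, image, and namespace names below are assumptions for illustration.
from kubernetes import client, config

def build_ingestion_pod() -> client.V1Pod:
    container = client.V1Container(
        name="rdma-ingest",
        image="example.com/iown-poc/ingest:latest",  # placeholder image
        resources=client.V1ResourceRequirements(
            limits={
                "nvidia.com/gpu": "1",             # GPU for AI inference
                "rdma/roce_shared_device_a": "1",  # RDMA device exposed by a NIC/DPU device plugin (assumed name)
            },
        ),
        security_context=client.V1SecurityContext(
            # RDMA workloads typically need to pin (lock) memory for registered buffers
            capabilities=client.V1Capabilities(add=["IPC_LOCK"])
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name="rdma-ingest", namespace="iown-poc"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

if __name__ == "__main__":
    config.load_kube_config()  # use the current oc/kubectl context
    client.CoreV1Api().create_namespaced_pod("iown-poc", build_ingestion_pod())
```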

5G mobile fronthaul network over APN PoC

This PoC aims to demonstrate the advantages and viability of APN as a fronthaul solution. The acceptance of mobile fronthaul over APN as a workable and promising solution is expected to promote IOWN-related technologies and motivate service providers around the world to participate. The following figure shows the network's ability to provide elastic load balancing and high-availability services. Elastic load balancing refers to the active and dynamic steering of radio access network (RAN) components, including the radio unit (RU) connections to a group of (virtual) distributed units (DUs), based on the actual load. Because an APN can switch wavelength paths dynamically, it can redirect traffic to compute resources at the destination according to traffic volume and other factors. APNs therefore let service providers dynamically allocate and deallocate DU computing resources.

In phase 2 of the PoC, the forum focuses on vDU deployment on top of the logical service node, which may run on a full COTS (commercial off-the-shelf) server or entirely on a DPU/IPU (infrastructure processing unit). Multiple vDU cloud-native network functions (CNFs) from our CNF-certified partners could run on DU servers that act as a logical service node and run OpenShift.

Figure 2: vDU mobile fronthaul network over APN
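The elastic load balancing described above can be pictured as a simple control loop: measure per-vDU load, attach each RU to the least-loaded vDU (switching its APN wavelength path toward the server hosting that vDU), and allocate or release vDUs as traffic changes. The Python sketch below is a toy model of that decision logic only; the thresholds and names are made up, and in the PoC the actual steering is performed by the RAN and APN controllers.

```python
# Toy sketch of the elastic load-balancing decision described above:
# attach each radio unit (RU) to the least-loaded vDU and release vDUs
# that are no longer needed. Thresholds and names are illustrative assumptions.
from dataclasses import dataclass, field

SCALE_UP_THRESHOLD = 0.8  # add a vDU when the best candidate would exceed this load
IDLE_THRESHOLD = 0.1      # a vDU below this load with no RUs can be released

@dataclass
class VDU:
    name: str
    load: float = 0.0                       # fraction of capacity in use
    rus: list[str] = field(default_factory=list)

def steer_ru(ru: str, traffic: float, vdus: list[VDU]) -> VDU:
    """Attach an RU to the least-loaded vDU, allocating a new one if all are near capacity."""
    target = min(vdus, key=lambda v: v.load)
    if target.load + traffic > SCALE_UP_THRESHOLD:
        target = VDU(name=f"vdu-{len(vdus)}")  # allocate a new vDU from the pool
        vdus.append(target)
    target.rus.append(ru)
    target.load += traffic
    # In the PoC, re-pointing the RU at this vDU would mean switching its
    # APN wavelength path toward the server hosting the vDU.
    return target

def release_idle(vdus: list[VDU]) -> list[VDU]:
    """Deallocate vDUs that carry no RUs and negligible load."""
    return [v for v in vdus if v.rus or v.load > IDLE_THRESHOLD]
```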

Data-Centric Infrastructure-as-a-Service PoC

The last proof of concept covers Data-Centric Infrastructure as a Service (DCIaaS). The DCIaaS foundation of IOWN technology allows for the deployment of the IOWN data hub (IDH) and the IOWN mobile network (IMN), as well as CPS and AIC deployments for 6G (under development), multi-access edge computing (MEC), and 5G RAN. The goal of the DCIaaS PoC is to show the benefits of the open all-photonics network across customer sites, regional edge sites, and core sites by showcasing the concept of a logical service node (see Figure 3), which is composed dynamically from allocatable hardware resource pools (CPU, GPU, DPU/IPU, etc.) to realize each use case in CPS and AIC. On a bare-metal host with an x86 CPU, OpenShift can create a Kubernetes-based logical service node (or virtualization environment). In this DCIaaS PoC, OpenShift can also run on the DPU as an upgraded logical service node, in addition to the x86 CPU-based logical service node.

Figure 3: OpenShift-based Logical Service Nodes in Data-Centric Infrastructure
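One way to read Figure 3 is that a logical service node is a dynamic composition of devices drawn from shared hardware pools. The Python sketch below models only that composition step under assumed pool and device names; it is not an IOWN or OpenShift API.

```python
# Illustrative sketch of composing a logical service node (LSN) from
# allocatable hardware pools (CPU, GPU, DPU/IPU), as pictured in Figure 3.
# Pool contents and the request format are assumptions, not an IOWN API.
from dataclasses import dataclass, field

@dataclass
class LogicalServiceNode:
    name: str
    devices: dict[str, list[str]] = field(default_factory=dict)

class ResourcePools:
    def __init__(self, pools: dict[str, list[str]]):
        self.free = pools  # e.g. {"cpu": ["cpu-0"], "gpu": ["gpu-0"], "dpu": ["dpu-0"]}

    def compose(self, name: str, request: dict[str, int]) -> LogicalServiceNode:
        """Allocate the requested number of devices of each kind into a new LSN."""
        node = LogicalServiceNode(name)
        for kind, count in request.items():
            if len(self.free.get(kind, [])) < count:
                raise RuntimeError(f"not enough {kind} devices in the pool")
            node.devices[kind] = [self.free[kind].pop() for _ in range(count)]
        return node

    def release(self, node: LogicalServiceNode) -> None:
        """Return the node's devices to the pools when the use case finishes."""
        for kind, devs in node.devices.items():
            self.free.setdefault(kind, []).extend(devs)
        node.devices.clear()

# Example: compose an LSN for a hypothetical CPS inference use case.
pools = ResourcePools({"cpu": ["cpu-0", "cpu-1"], "gpu": ["gpu-0"], "dpu": ["dpu-0"]})
lsn = pools.compose("cps-inference", {"cpu": 1, "gpu": 1, "dpu": 1})
```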

Collaboration with the Open Programmable Infrastructure project

Forum activity showed that building network, security, or storage functions requires a fully customizable open infrastructure model spanning software stacks and DPU/IPU-like hardware devices. The Open Programmable Infrastructure (OPI) project aims to investigate and broaden the notion of programmable infrastructure through open, standards-based, community-driven ecosystems for next-generation architectures and frameworks built on DPU/IPU-like technologies.

The founding members, including Red Hat, formed OPI as a Linux Foundation project this summer. Both communities can benefit from continued cooperation between the IOWN Global Forum and the OPI project toward their shared objectives, with Red Hat and other ecosystem partners serving as catalysts for technological advancement and the exploration of new use cases.

 


Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com
