Description: Edge computing requires multitasking workloads at the edge compute site to reduce communication latency, power consumption, and physical footprint. While some workloads on customer-premises Internet of Things (IoT) devices can leverage GPU functions for video processing, deeper analytics require an open and scalable network platform for accelerated AI workloads at the service provider edge, with further analysis performed on a centralized data center platform. In this session, Lanner will partner with Tensor Network to discuss how NVIDIA AI can be structured in a networked approach where AI workloads are distributed across the edge network. We will start with NVIDIA AI-accelerated customer premises equipment, move through the aggregated network edge, and finish at the hyper-converged platform deployed in the centralized data center.

By leveraging our 30 years of expertise in IT network computing, we provide true white-box solutions that meet most of the specifications customers are looking for, with Wi-Fi and LTE certifications that allow shipment to most major countries worldwide.

© 2021 Lanner Electronics Inc. All Rights Reserved.