Description: Edge computing requires multitasking workloads at the edge compute site in order to reduce communication latency, power consumption, and real-estate footprint. While some workloads on customer-premises Internet of Things (IoT) devices can leverage GPU functions for video processing, deeper analytics require an open, scalable network platform for accelerated AI workloads at the service provider edge, with further analysis performed on a centralized data center platform. In this session, Lanner will partner with Tensor Network to discuss how NVIDIA AI can be structured in a networked approach, distributing AI workloads across the edge network. We will start from NVIDIA AI-accelerated customer premises equipment, move through the aggregated network edge, and finish at the hyper-converged platform deployed in the centralized data center.