The rapid growth of AI-based products and services is pushing hardware requirements to the very edge of the network. For edge AI workloads, efficient, high-throughput inference depends on a well-designed compute platform, and advanced AI applications face fundamental deep learning inference challenges in latency, reliability, multi-precision neural network support, and solution delivery. NGC software runs on a wide variety of edge-to-cloud GPU servers, and Lanner’s edge AI appliance, the LEC-2290E, optimized for the NVIDIA® T4, has passed an extensive suite of tests validating its ability to deliver high-volume, low-latency inference using NVIDIA GPUs and NGC software components such as TensorRT, TensorRT Inference Server, DeepStream, the CUDA toolkit, and various NGC-supported deep learning frameworks.
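For context, applications typically consume these NGC components through the inference server's client API. The sketch below shows, under stated assumptions, how a client might submit a request to Triton Inference Server (the successor to TensorRT Inference Server) running on such an appliance; the server address, model name (`resnet50`), tensor names (`input`, `output`), and input shape are illustrative placeholders, not part of the validated configuration.

```python
# Minimal sketch: send one inference request to a Triton Inference Server
# instance over HTTP. Assumes the server is reachable on localhost:8000
# and serves a hypothetical FP32 image-classification model.
import numpy as np
import tritonclient.http as httpclient

# Connect to the (assumed) server endpoint on the edge appliance.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single-image batch; shape and tensor names are placeholders.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input", batch.shape, "FP32")
infer_input.set_data_from_numpy(batch)

# Run inference and read back the output tensor as a NumPy array.
response = client.infer(model_name="resnet50", inputs=[infer_input])
scores = response.as_numpy("output")
print(scores.shape)
```

In a deployment like the one described above, the GPU-side optimization (TensorRT engine building, precision selection) happens on the server, so client code of this kind stays unchanged as models are re-optimized for the T4.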

By leveraging 30 years of expertise in IT network computing, we provide true white-box solutions that meet most of the specifications customers are looking for, with Wi-Fi and LTE certifications that allow shipment to many major countries worldwide.