Lanner Electronics
Lanner Joins NVIDIA GTC 21 to Showcase NGC-ready Edge AI Platforms for Intelligent Networking, Manufacturing and Transportation
Join Lanner at NVIDIA’s GTC for a transformative global event that brings together brilliant, creative minds looking to ignite ideas, build new skills, and forge new connections to take on our biggest challenges.
It all comes together online April 12 - 16, kicking off with NVIDIA CEO and Founder Jensen Huang’s keynote.
At GTC, Lanner will discuss how AI can be structured in a networked approach where AI workloads are distributed across the edge network. We will start from the NVIDIA AI-accelerated customer premises equipment, move through the aggregated network edge, and finish at the hyper-converged platform deployed in the centralized data center.
At Lanner’s GTC sessions you can explore the following topics:
AI-Powered Hyper-Converged MEC Server Enables Intelligent Transportation Services
Innovative fleet services generate massive volumes of data, which has pushed fleet management companies to make their data centers more agile by integrating compute, storage, and networking into a single hyper-converged infrastructure that consolidates all virtualization components through software. Because it is software-defined, the hyper-converged infrastructure leverages existing storage hardware while using a virtual controller to manage the physical devices. Lanner's hyper-converged MEC server seamlessly integrates high-performance computing, massive storage, and networking functions into one single appliance. Powered by the NVIDIA T4 Tensor Core GPU, the MEC server consolidates taxi management tasks such as emergency call services, video surveillance systems, and location-based services. The high storage density of the FX-3420 allows it to record all driving and service data for customer analysis and demand forecasting.
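As a rough illustration of the kind of demand forecasting that recorded service data makes possible, the minimal Python sketch below aggregates trip records into an average pickup count per hour of day. The CSV export path, the pickup_time column name, and the naive hourly-average model are illustrative assumptions, not part of Lanner's product.

```python
# Minimal demand-forecasting sketch over recorded trip data.
# File name, column name, and model are hypothetical placeholders.
import csv
from collections import defaultdict
from datetime import datetime

def hourly_demand(csv_path):
    """Average pickups per hour of day across the recorded history."""
    pickups = defaultdict(int)  # (date, hour) -> pickup count
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["pickup_time"])  # hypothetical column
            pickups[(ts.date(), ts.hour)] += 1
    per_hour = defaultdict(list)
    for (_, hour), count in pickups.items():
        per_hour[hour].append(count)
    return {hour: sum(v) / len(v) for hour, v in sorted(per_hour.items())}

if __name__ == "__main__":
    forecast = hourly_demand("trip_records.csv")  # hypothetical export from storage
    for hour, expected in forecast.items():
        print(f"{hour:02d}:00  expected pickups ~ {expected:.1f}")
```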
Building Efficient and Intelligent Networks using Network Edge AI Platform
Edge computing requires multitasking workloads at the edge compute site in order to reduce communication latency, power consumption, and real estate. While some workloads on customer-premises IoT devices can leverage GPU functions for video processing, further analytics requires an open and scalable network platform for accelerated AI workloads at the service provider edge, and even deeper analysis at a centralized data center platform. In this session, Lanner will partner with Tensor Network to discuss how NVIDIA AI can be structured in a networked approach where AI workloads are distributed across the edge network, starting from the NVIDIA AI-accelerated customer premises equipment, moving through the aggregated network edge, and ending at the hyper-converged platform deployed in the centralized data center.
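To make this distributed, networked approach more concrete, here is a minimal Python sketch of how a customer-premises device might prefilter detections locally and forward only the relevant events to the GPU-accelerated edge platform for heavier analytics. The endpoint URL, payload format, and confidence threshold are hypothetical placeholders, not a documented Lanner or NVIDIA API.

```python
# Sketch of tiered edge inference: cheap first-pass filtering on the CPE,
# heavier analytics on the MEC server. Endpoint and fields are hypothetical.
import json
import urllib.request

EDGE_ANALYTICS_URL = "http://edge-mec.example.local:8000/v1/analyze"  # hypothetical

def prefilter_on_cpe(frame_scores, threshold=0.6):
    """Keep only frames whose local detection score clears the threshold."""
    return [s for s in frame_scores if s["score"] >= threshold]

def forward_to_edge(events):
    """POST the reduced event set to the GPU-accelerated edge server."""
    payload = json.dumps({"events": events}).encode("utf-8")
    req = urllib.request.Request(
        EDGE_ANALYTICS_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Hypothetical per-frame detection scores produced on the CPE.
    local_scores = [{"frame": 1, "score": 0.42}, {"frame": 2, "score": 0.87}]
    kept = prefilter_on_cpe(local_scores)
    print(f"Forwarding {len(kept)} of {len(local_scores)} events to the edge")
    # forward_to_edge(kept)  # requires the hypothetical endpoint to exist
```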
Edge AI Inference and NGC-Ready Server: A Hardware Perspective
The explosion of AI-based products and services, and the accelerating deployment of powerful AI solutions in competitive markets, has pushed hardware requirements down to the very edge of the network. For edge AI workloads, efficient, high-throughput inference depends on a well-curated compute platform. Advanced AI applications now face fundamental deep learning inference challenges in latency, reliability, multi-precision neural network support, and solution delivery. Designed and built in-house by Lanner for secure remote operation and accelerated workloads with the NVIDIA T4 Tensor Core GPU, the LEC-2290E is validated and edge-ready out of the box for streamlined NGC deployments. NVIDIA GPU Cloud (NGC) fast-tracks edge AI solutions with its comprehensive catalog of GPU-optimized containerized software for edge-to-core solutions.
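As one hedged example of what a streamlined NGC deployment can look like from the software side, the sketch below checks whether a Triton Inference Server container, pulled from the NGC catalog and running on a T4-equipped edge server such as the LEC-2290E, is ready to serve requests over its standard HTTP health endpoint. The host name is a placeholder, and the example assumes the container is already running with its default HTTP port exposed.

```python
# Readiness check against a running Triton Inference Server (an NGC container).
# Host name is a hypothetical placeholder for the edge appliance.
import urllib.request
import urllib.error

TRITON_URL = "http://lec-2290e.example.local:8000"  # hypothetical edge host

def triton_ready(base_url, timeout=5):
    """Return True if the inference server reports itself ready to serve."""
    try:
        with urllib.request.urlopen(f"{base_url}/v2/health/ready", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Triton ready:", triton_ready(TRITON_URL))
```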
Don’t miss out on this amazing event. Registration is FREE and gives you access to all the live sessions, interactive panels, demos, research posters, and more.
Event details
Date: April 12 — April 16
GTC21 Registration Link: https://www.nvidia.com/en-us/gtc/?ncid=ref-spo-95198&sfdcid=undefined#cid=gtcs21_ref-spo_en-us