Nvidia hosted its fall GTC (GPU Technology Conference) in early November, during which the company shared updates on its progress in AI software, data centers, automotive applications, and healthcare. In particular, Nvidia’s foray into virtual worlds and digital twins, both of which are closely tied to the metaverse, garnered significant public attention. By leveraging diverse simulation tools that reflect real-life conditions, Nvidia has extended the application of virtual worlds from the local scale to the planetary scale, reflecting the metaverse’s pioneering qualities and developmental progress.
Riding the ongoing metaverse craze, Nvidia also released its Omniverse Avatar technology platform as well as its Omniverse Replicator, which the company describes as a “synthetic data-generation engine.” Both of these releases are built on Nvidia Omniverse, a platform that specializes in virtual collaboration. Whereas the Omniverse Avatar platform enables the creation of interactive virtual characters by combining voice AI, machine vision, and NLP (natural language processing), the Omniverse Replicator constructs more realistic, lifelike virtual worlds by generating synthetic data, such as velocity, depth, and weather conditions, for training DNNs (deep neural networks).
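The core idea behind a synthetic data-generation engine can be sketched in a few lines: randomize the parameters of a simulated scene (the article mentions velocity, depth, and weather) so that a network sees far more variation than real-world captures could provide, while the simulator supplies ground-truth labels for free. The function names and parameter ranges below are illustrative assumptions, not Omniverse Replicator’s actual API.

```python
import random

def generate_synthetic_sample(rng: random.Random) -> dict:
    """Produce one randomized scene description plus its 'free' labels.

    Hypothetical sketch of domain randomization: every parameter is drawn
    from a distribution instead of captured from the real world.
    """
    scene = {
        "object_velocity_mps": rng.uniform(0.0, 30.0),   # randomized motion
        "camera_depth_m": rng.uniform(0.5, 100.0),        # randomized distance
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
    }
    # A real engine renders the scene and emits pixel-perfect annotations;
    # here we simply attach the ground truth the randomizer already knows.
    labels = {"depth_map_mean_m": scene["camera_depth_m"]}
    return {"scene": scene, "labels": labels}

rng = random.Random(42)  # fixed seed so the dataset is reproducible
dataset = [generate_synthetic_sample(rng) for _ in range(1000)]
print(len(dataset), "samples generated")
```

In practice, the value of this approach is that labels such as depth or velocity, which are expensive or impossible to annotate on real footage, come directly from the simulator’s internal state.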
Digital twin-based virtual factories are starting to show the first hints of the metaverse
The metaverse value chain primarily revolves around a commonly seen infrastructural backbone formed by telecommunications and cloud/edge computing. The virtual space built on top of this infrastructure comprises HMI (human-machine interface), decentralization, application creation, and user experiences. More specifically, HMI produces an AI-empowered immersive experience by combining multiple interactive technologies with an AR/VR base layer. At the moment, companies such as Nvidia, Meta (formerly known as Facebook), Microsoft (including Xbox), and Vive are heavily invested in HMI development. Application creation, on the other hand, refers to mechanisms that make the metaverse more lively, reliable, diverse, and attractive; examples include graphical tools and cryptocurrency technologies. Representative companies focusing on this field include Roblox, IBM, Google AI, Epic, and Unity.
Beyond the Omniverse Avatar and Replicator, Nvidia’s GTC presentation also covered CloudXR, Showroom, and other Omniverse-based tools for optimizing immersive experiences, as well as the Modulus neural network framework, which accelerates the build-out of digital twins. These releases, in turn, demonstrate Nvidia’s competency and leadership in creating AI-driven software tools for the metaverse value chain. In terms of real-life use cases, digital twins currently account for most of Nvidia’s applications. For instance, BMW and Nvidia have partnered to construct a digital twin-based factory via the Omniverse platform capable of connecting ERP (enterprise resource planning), shipment volume simulation, remote-controlled robots, production line simulation, and more. This partnership is indicative of the metaverse’s promising early-stage growth.
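A toy model hints at why linking production line simulation with planning data (such as ERP and shipment volumes) is useful: even a crude simulation of per-station cycle times reveals the bottleneck that caps how many units can ship. The station names and cycle times below are invented for illustration and do not describe BMW’s actual line.

```python
# Invented cycle times (seconds per unit) for a hypothetical serial line.
CYCLE_TIMES_S = {
    "stamping": 45,
    "body_assembly": 60,
    "paint": 90,
    "final_assembly": 75,
}

def max_output_per_shift(cycle_times_s: dict, shift_seconds: int = 8 * 3600) -> tuple:
    """A serial line cannot run faster than its slowest station.

    Returns the bottleneck station and the maximum units per shift,
    ignoring buffers, downtime, and changeovers for simplicity.
    """
    bottleneck = max(cycle_times_s, key=cycle_times_s.get)
    units = shift_seconds // cycle_times_s[bottleneck]
    return bottleneck, units

station, units = max_output_per_shift(CYCLE_TIMES_S)
print(f"bottleneck: {station}, max units per 8-hour shift: {units}")
```

A full digital twin replaces this arithmetic with physics-accurate simulation, but the planning question it answers, where the constraint sits and how many units can realistically ship, is the same.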
Nvidia is extending its simulation application from factories to planets
While smart city development has remained one of the main use cases of simulation in recent years, Nvidia has extended its simulation applications beyond use cases previously limited to individual offices or factory facilities. For instance, BIM (building information modeling) specialist Bentley Systems has teamed up with Nvidia to apply digital twins to public property management and maintenance. Ericsson, on the other hand, is utilizing Nvidia’s technology to construct a digital replica of an entire city in order to check 5G signal coverage, optimize base station placement, and improve antenna designs. During GTC, Nvidia also unveiled the Earth-2 system, a supercomputer that generates a digital twin of planet Earth for weather forecasting.
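The kind of question a city-scale replica answers can be sketched with the textbook free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. Real 5G planning must additionally model buildings, reflections, and antenna patterns, which is exactly why a detailed digital replica is valuable; the transmit power, frequency, and signal threshold below are illustrative assumptions.

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def is_covered(threshold_dbm: float, tx_dbm: float,
               distance_km: float, freq_mhz: float) -> bool:
    """A cell is covered if received power stays above the threshold."""
    return tx_dbm - free_space_path_loss_db(distance_km, freq_mhz) >= threshold_dbm

# One assumed base station at the grid origin: 3.5 GHz mid-band 5G,
# 43 dBm transmit power, and a deliberately strict -60 dBm quality threshold.
TX_DBM, FREQ_MHZ, THRESHOLD_DBM = 43.0, 3500.0, -60.0

covered = 0
for x in range(1, 21):            # 20 x 20 grid of city cells,
    for y in range(1, 21):        # spaced 100 m apart
        d_km = math.hypot(x, y) * 0.1
        if is_covered(THRESHOLD_DBM, TX_DBM, d_km, FREQ_MHZ):
            covered += 1
print(f"{covered}/400 grid cells covered")
```

Sweeping candidate base station positions over such a grid and picking the one that maximizes covered cells is the brute-force version of the placement optimization the article describes.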
As a matter of fact, most of the products and services announced by Nvidia during GTC represent either partial or entry-level applications of the metaverse. However, as the post-pandemic new normal continues to drive up demand for contactless applications and digital transformation, strengthening CPS (cyber-physical systems) will remain one of the most significant market trends. As real-world environments become increasingly complex owing to interactions among a growing number of tools and use cases, Nvidia will aim to create a comprehensive framework for metaverse development through products and services based on more intelligent, comprehensive, and instantaneous virtual worlds. Hence, TrendForce believes that Nvidia will need to address certain major challenges going forward, including lowering its tools’ usage barriers, strengthening its ecosystem, and attracting new users.