Microsoft announced in early November that it will release a preview of Mesh for Microsoft Teams (henceforth referred to simply as “Mesh”) in 1H22 as a chat and collaboration platform for the metaverse. By providing a virtual meeting space in which Teams users can conduct meetings, chat, collaborate, and share documents, Mesh is set to become an entrance to the metaverse.
Community interactions will serve as a starting point for metaverse development
Microsoft first unveiled Microsoft Mesh during its Ignite event in March 2021. The platform supports applications including the Mesh app for HoloLens and AltspaceVR, with additional Microsoft Teams services to be released in the future. By announcing ahead of time that the preview version of Mesh will be released in 2022, Microsoft hopes to leverage the recent surge of interest in the metaverse to increase customer engagement with Mesh’s new functionalities. Hence, the company is positioning Mesh as an entrance to the metaverse by first attracting users through functions such as teleconferencing, collaboration, and chat. Microsoft will then gradually expand the number of applications and services in its virtual environment, eventually constructing a complete virtual world.
Judging from the current progress of development, TrendForce believes that social communities, teleconferencing, and virtual meetings will become the AR/VR applications most attractive to consumers. That is also why companies currently developing AR/VR solutions regard these applications as the starting point of metaverse development. The growing importance of these applications can be attributed to factors on both the demand side and the supply side. On the demand side, the COVID-19 pandemic has not only brought about significant growth in teleconferencing and remote interaction, but has also gradually changed how people interact in internet-based communities: interactions are shifting from texts, images, and videos to virtual avatars. As a result, the consumer market is expected to show relatively high acceptance of AR/VR-driven community interactions and teleconferences. On the supply side, service providers that operate social media and teleconferencing platforms differ drastically from typical hardware brands in terms of product strategy, since these providers generally aim to first build a massive user base rather than derive profits from a single product. As such, these providers are comparatively more willing to invest massive resources in expanding their market presence during the initial phase, even if doing so incurs financial losses.
R&D and release of device hardware will become the most significant challenge for platform service providers
For Microsoft, Mesh represents a starting point for metaverse development, but also one that requires investment in additional areas, since the metaverse demands a virtual world that is immersive and lifelike. Apart from its existing competencies in cloud services and OS software, the company still needs to achieve a sense of realism in the virtual avatars and interactions it creates, and these creations need to reflect changes made by the user in real time. For instance, the mouths and facial expressions of virtual avatars need to adapt instantly as their users speak, a process that involves not only software adjustments but also the integration of sensors and other hardware devices. As long as hardware brands require that their individual products remain profitable, Microsoft will find it difficult to hand over the responsibility for hardware-related R&D and product releases to those brands. Unless Microsoft is willing to provide sufficiently high subsidies and absorb the resulting financial losses, it will inevitably have to release its own consumer AR/VR devices – for the same reason that Meta (formerly Facebook) acquired Oculus and ByteDance acquired Pico. On the other hand, crossing over to the hardware market means entering an industry that has yet to mature and that requires investment in multiple technologies. Platform service providers will therefore need to devote more resources to hardware development, and this remains one of the challenges Microsoft faces in entering the metaverse.
(Image credit: Pixabay)
The growth of the metaverse will drive an increasing number of companies to participate in the build-out of the virtual world, with use cases such as social communities, gaming/entertainment, content creation, virtual economies, and industrial applications all becoming important focal points in the coming years, according to TrendForce’s latest investigations. Apart from increases in both semiconductor computing power and the coverage of low-latency, high-speed networks, the metaverse’s development will also depend on the adoption of AR/VR devices by end users. TrendForce expects global AR/VR device shipments for 2022 to reach 12.02 million units, a 26.4% YoY increase, with Oculus and Microsoft taking the leading positions in the consumer and commercial markets, respectively.
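As a quick sanity check on the forecast above, the 2022 figure and the YoY growth rate together imply a 2021 shipment base. The short sketch below derives that implied base; note that the 2021 figure itself is not stated in the text, so the result is an inference from the two published numbers, not a reported statistic.

```python
# Derive the implied 2021 AR/VR shipment base from the 2022 forecast
# (12.02 million units) and the stated 26.4% YoY growth rate.
forecast_2022_m = 12.02   # million units (from the article)
yoy_growth = 0.264        # 26.4% YoY increase

implied_2021_m = forecast_2022_m / (1 + yoy_growth)
print(f"Implied 2021 shipments: {implied_2021_m:.2f} million units")
# -> roughly 9.51 million units
```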
TrendForce further indicates that the success of AR/VR devices in the consumer and commercial markets will be determined by their retail prices and degree of system integration, respectively; these two factors also underpin the leading companies’ continued competitive advantages. However, gross and net profit considerations have made it difficult both to price AR/VR devices competitively and to increase shipment volumes.
Even so, the growing popularity of the metaverse will drive more and more hardware brands to enter the AR/VR market and push online service platform providers to either directly or indirectly propel the growth of the hardware market in 2022. Regarding the consumer market, AR/VR device suppliers may look to expand their user base and increase their market penetration via low-priced yet high-spec devices, while compensating for their reduced hardware profitability through software sales. Oculus, for instance, has adopted such a strategy to maintain its advantage in the market, thereby raising the market share of the Oculus Quest products to a forecasted 66% next year.
Regarding the commercial market, applications ranging from remote interactions and virtual collaboration to digital twins have been growing; hence, enterprises have become increasingly willing to adopt AR/VR devices. Compared to the consumer market, which is mainly driven by low-priced, high-spec products, the commercial market consists of enterprises that are more willing to choose high-priced, high-performance products, although such products must be paired with a full system integration solution or customized services. Possessing substantial competency in the industrial ecosystem, Microsoft enjoys a relatively large competitive advantage in the commercial market, as the company’s HoloLens 2 became one of the few commercial AR devices with annual shipments exceeding 200,000 units this year.
It should also be pointed out that, given the rapid advancements in high-speed 5G networks, video-based remote assistance applications enabled by low-priced AR glasses and 5G smartphones’ computing and networking functions will become yet another commercial AR/VR use case. TrendForce believes that these applications can serve as a low-cost, easily deployable early trial that will not only raise enterprises’ willingness to adopt more AR/VR commercial applications going forward, but also accelerate the development of commercial services related to the metaverse.
According to TrendForce’s latest report on the server industry, emerging applications in recent years have not only accelerated the pace of AI and HPC development, but the complexity of models built by machine learning applications and of increasingly sophisticated inference calculations has also grown correspondingly, resulting in more data to be processed. Confronted with an ever-growing volume of data and the constraints of existing hardware, users must make tradeoffs among performance, memory capacity, latency, and cost. HBM (High Bandwidth Memory) and CXL (Compute Express Link) have emerged in response to this conundrum. In terms of functionality, HBM is a new type of DRAM that addresses more diverse and complex computational needs via its high I/O speed, whereas CXL is an interconnect standard that allows different processors, or xPUs, to more easily share the same memory resources.
HBM breaks through bandwidth limitations of traditional DRAM solutions through vertical stacking of DRAM dies
Memory suppliers developed HBM to overcome the bandwidth constraints posed by traditional memory solutions. Architecturally, HBM consists of a base logic die with DRAM dies stacked vertically on top of it. The 3D-stacked DRAM dies are interconnected with through-silicon vias (TSVs) and microbumps, enabling HBM’s high-bandwidth design. Mainstream HBM stacks contain four or eight DRAM die layers, referred to as “4-hi” and “8-hi”, respectively. Notably, the latest HBM product currently in mass production is HBM2e. This generation contains four or eight layers of 16Gb DRAM dies, resulting in a capacity of 8GB or 16GB per HBM stack, respectively, with a bandwidth of 410-460GB/s. Samples of the next generation of HBM products, named HBM3, have already been submitted to the relevant organizations for validation, and these products will likely enter mass production in 2022.
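The per-stack capacity figures above follow directly from the die density and the stack height. A minimal sketch of that arithmetic, using only the numbers stated in the text (16Gb dies in 4-hi or 8-hi stacks):

```python
# Per-stack HBM2e capacity from die density and stack height, as
# described above: 16Gb DRAM dies stacked 4-hi or 8-hi.
def stack_capacity_gb(num_dies: int, die_gbit: int = 16) -> float:
    """Capacity in gigabytes of an HBM stack of `num_dies` DRAM dies."""
    return num_dies * die_gbit / 8  # 8 bits per byte

print(stack_capacity_gb(4))  # 4-hi -> 8.0 GB per stack
print(stack_capacity_gb(8))  # 8-hi -> 16.0 GB per stack
```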
TrendForce’s investigations indicate that HBM comprises less than 1% of total DRAM bit demand for 2021 primarily because of two reasons. First, the vast majority of consumer applications have yet to adopt HBM due to cost considerations. Second, the server industry allocates less than 1% of its hardware to AI applications; more specifically, servers that are equipped with AI accelerators account for less than 1% of all servers currently in use, not to mention the fact that most AI accelerators still use GDDR5(x) and GDDR6 memories, as opposed to HBM, to support their data processing needs.
Although HBM currently remains in the developmental phase, as applications become increasingly reliant on AI (more precise AI must be supported by more complex models), computing hardware will require the integration of HBM to run these applications effectively. In particular, FPGAs and ASICs are the two hardware categories most closely tied to AI development, with Intel’s Stratix and Agilex-M as well as Xilinx’s Versal HBM being examples of FPGAs with onboard HBM. Regarding ASICs, on the other hand, most CSPs are gradually adopting self-designed chips, such as Google’s TPU, Tencent-backed Enflame’s DTU, and Baidu’s Kunlun – all of which are equipped with HBM – for AI deployments. In addition, Intel will also release a high-end version of its Sapphire Rapids server CPU equipped with HBM by the end of 2022. Taking these developments into account, TrendForce believes that an increasing number of HBM applications will emerge going forward, owing to HBM’s critical role in overcoming hardware bottlenecks in AI development.
A new interconnect standard born out of demand from high-speed computing, CXL will be more effective in integrating the resources of the whole system
Built on PCIe Gen5, CXL is an interconnect standard that provides high-speed, low-latency connections between the CPU and accelerators such as GPUs and FPGAs. It enables memory virtualization so that different devices can share the same memory pool, raising the performance of the whole computer system while reducing its cost. Hence, CXL can effectively handle the heavy workloads associated with AI and HPC applications.
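The pooling idea described above can be illustrated with a toy model: several devices draw allocations from one shared capacity instead of each carrying fixed private memory, so capacity freed by one device becomes available to another. This is a conceptual sketch only; the class and method names are invented for illustration and do not correspond to any actual CXL software interface.

```python
# Conceptual toy model of CXL-style memory pooling: multiple devices
# (xPUs) share one memory pool rather than holding fixed private DRAM.
# Illustrative only -- not an actual CXL API.
class MemoryPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # device name -> GB currently allocated

    def used(self) -> int:
        return sum(self.allocations.values())

    def allocate(self, device: str, size_gb: int) -> bool:
        """Grant `size_gb` to `device` if the pool has room."""
        if self.used() + size_gb > self.capacity_gb:
            return False  # pool exhausted; request denied
        self.allocations[device] = self.allocations.get(device, 0) + size_gb
        return True

    def release(self, device: str) -> None:
        """Return a device's memory to the shared pool."""
        self.allocations.pop(device, None)

pool = MemoryPool(capacity_gb=512)
assert pool.allocate("cpu", 128)
assert pool.allocate("gpu", 256)       # GPU draws from the same pool
assert not pool.allocate("fpga", 256)  # would exceed the 512 GB total
pool.release("gpu")                    # freed memory returns to the pool
assert pool.allocate("fpga", 256)      # now the request succeeds
```

The point of the sketch is the contrast with fixed per-device memory: without pooling, the FPGA's request would fail permanently even while the GPU's memory sat idle.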
CXL is just one of several interconnect technologies that feature memory sharing; other examples on the market include NVLink from NVIDIA and Gen-Z from AMD and Xilinx. Their existence indicates that major ICT vendors are increasingly attentive to the integration of resources within a computer system. TrendForce currently believes that CXL will come out on top in this competition, mainly because it is introduced and promoted by Intel, which holds an enormous advantage in CPU market share. With Intel’s support on the processor side, CXL advocates and the hardware providers that back the standard will be able to organize themselves into a supply chain for related solutions. Major ICT companies that have joined the CXL Consortium include AMD, ARM, NVIDIA, Google, Microsoft, Facebook (Meta), Alibaba, and Dell. All in all, CXL appears to be the most favored among memory-sharing protocols.
The consolidation of memory resources between the CPU and other devices can reduce communication latency and boost the computing performance needed for AI and HPC applications. For this reason, Intel will provide CXL support in its next-generation server CPU, Sapphire Rapids. Likewise, memory suppliers have incorporated CXL support into their respective product roadmaps. Samsung has announced that it will launch CXL-supported DDR5 DRAM modules that further expand server memory capacity to meet the enormous resource demands of AI computing. CXL support may also be extended to NAND Flash solutions in the future, benefiting the development of both types of memory products.
Synergy between HBM and CXL will contribute significantly to AI development; their visibility will increase across different applications starting in 2023
TrendForce believes that the market penetration rate of CXL will rise going forward as this interface standard is built into more and more CPUs. Also, the combination of HBM and CXL will be increasingly visible in the future hardware designs of AI servers. HBM will contribute to a further ramp-up of data processing speed by increasing the memory bandwidth of the CPU or the accelerator, while CXL will enable high-speed interconnections between the CPU and other devices. Working together, HBM and CXL will raise computing power and thereby expedite the development of AI applications.
The latest advances in memory pooling and sharing will help overcome the current hardware bottlenecks in the designs of different AI models and continue the trend of more sophisticated architectures. TrendForce anticipates that the adoption rate of CXL-supported Sapphire Rapids processors will reach a certain level, and memory suppliers will also have put their HBM3 products and their CXL-supported DRAM and SSD products into mass production. Hence, examples of HBM-CXL synergy in different applications will become increasingly visible from 2023 onward.
For more information on reports and market data from TrendForce’s Department of Semiconductor Research, please click here, or email Ms. Latte Chung from the Sales Department at email@example.com