Press Releases
According to TrendForce research, corporate demand for digital transformation, including artificial intelligence and high-performance computing, has accelerated in recent years, leading to increasing adoption of cloud computing. In order to improve service flexibility, the world’s major cloud service providers have gradually introduced ARM-based servers. The penetration rate of ARM architecture in data center servers is expected to reach 22% by 2025.
In the past few years, ARM-based processors have matured in the mobile and Internet of Things fields, but progress in the server field has been relatively slow. However, companies have diversified their cloud workloads in recent years, and the market has begun to pay attention to the benefits ARM-based processors can provide to data centers. TrendForce believes that ARM-based processors have three major advantages. First, they can support diverse and rapidly changing workloads while offering better scalability and cost-effectiveness. Second, they provide a higher degree of customization for different niche markets along with a more flexible ecosystem. Third, their physical footprint is relatively small, which meets the needs of today’s micro data centers.
Influenced by geopolitics and the strengthening of data sovereignty in various countries, major cloud service providers and telecom operators are actively developing micro data centers, which will further drive the penetration of ARM-based processors. At the same time, among the cloud service providers currently adopting ARM-based processors, AWS’s Graviton has the largest market scale and began gradually encroaching on the market in 2021. TrendForce also observed that ARM-based processors accounted for 15% of AWS’s overall server deployment in 2021 and will exceed 20% in 2022. This is forcing other major cloud service providers to keep up by initiating their own projects at various foundries. If testing is successful, these projects are expected to enter mass introduction in 2025.
In addition, according to the Neoverse platform plan previously released by ARM, its platform roadmap will also be one of the key drivers of penetration. This product line targets ultra-large-scale data centers and edge computing infrastructure. However, it is worth mentioning that, since x86 remains mainstream in the market and ARM-based server CPU suppliers at this stage only maintain small-batch production orders focused primarily on ultra-large-scale data centers, the introduction of ARM-based servers into enterprise data centers will be slow going. Thus, TrendForce believes it will still be difficult for ARM-based servers to compete with x86-based servers before 2025.
Press Releases
According to TrendForce research, vigorous stocking for various terminal applications caused a wafer shortage in 2021, leaving the global IC industry severely undersupplied. This, coupled with spiking chip prices, boosted the 2021 revenue of the global top ten IC design companies to US$127.4 billion, a 48% YoY increase.
TrendForce further indicates three major changes from the 2020 ranking. First, NVIDIA surpassed Broadcom to take the second position. Second, Taiwanese companies Novatek and Realtek rose to sixth and eighth place, respectively. Third, Himax replaced Dialog, originally ranked tenth, after Dialog was acquired by IDM giant Renesas.
Qualcomm continues its reign as number one in the world, primarily due to 51% and 63% YoY growth in sales of mobile phone SoCs (System on Chip) and IoT chips, respectively. Diversified development in its RF and automotive chip businesses was also key to a 51% increase in revenue. NVIDIA pursued the integration of software and hardware, demonstrating its ambition to create a “comprehensive computing platform.” Driven by annual growth in gaming graphics card and data center revenue of 64% and 59%, respectively, NVIDIA successfully climbed to second place. Broadcom benefited from the stable sales performance of network chips, broadband communication chips, and storage and bridging chips, with revenue growing 18% YoY. AMD’s computing and graphics revenue grew by 45% YoY due to strong sales of Ryzen CPUs and Radeon GPUs and a rising average selling price. Coupled with accelerating demand from cloud companies, the annual revenue of AMD’s enterprise, embedded, and semi-custom divisions increased by 113%, driving annual growth of total revenue to 68%.
In terms of Taiwanese firms, MediaTek’s strategy of focusing on mobile phone SoCs has produced remarkable results. Benefiting from an increase in 5G penetration, the sales performance of MediaTek’s mobile phone product portfolio surged by 93%, and the company has committed to increasing the proportion of high-end products in its portfolio, resulting in 61% annual revenue growth. Novatek’s two major product lines, SoCs and display driver ICs, have both grown significantly. Due to improved product specifications, increased shipments, and beneficial pricing gains, revenue grew by 79% YoY, the highest among the top ten. Realtek was driven by strong demand for Netcom and commercial notebook products, while the performance of its audio and Bluetooth chips remained quite stable, resulting in annual revenue growth of 43%. Himax joined the top ten ranking for the first time in 2021. Due to significant annual revenue growth in large-sized and small/medium-sized driver ICs of 65% and 87%, respectively, and the successful introduction of its driver ICs into automotive panels, total revenue exceeded US$1.5 billion, up 74% YoY.
Looking forward to 2022, after AMD completes its acquisition of Xilinx, other players will fill out the rankings. In the broader picture, intensifying demand for high-specification products such as high-performance computing, Netcom, high-speed transmission, server, automotive, and industrial applications will create good business opportunities for IC design companies and drive overall revenue growth. However, terminal system manufacturers still face the correction of component mismatch issues. In addition, growing foundry costs, intensifying geopolitical conflicts, and rising inflation will all be detrimental to global economic growth and may impact an already weakened consumer electronics market. These are the challenges IC design companies face in 2022. How to maintain product sales momentum within existing production capacity, strengthen R&D efficacy, and upgrade chip specifications will become the primary focus of development in 2022.
Press Releases
The semiconductor market in 3Q21 was red hot, with total revenue of the global top 10 IC design (fabless) companies reaching US$33.7 billion, up 45% YoY, according to TrendForce’s latest investigations. In addition to the Taiwanese companies MediaTek, Novatek, and Realtek already on the list, Himax comes in at number ten, bringing the total number of Taiwanese companies in the top 10 to four.
Qualcomm has been buoyed by continuing robust demand for 5G mobile phones from major mobile phone manufacturers, with further revenue growth from its processor and radio frequency front end (RFFE) departments. Qualcomm’s IoT department benefited from strong demand in the consumer electronics, edge networking, and industrial sectors, posting revenue growth of 66% YoY, the highest among Qualcomm’s departments. In turn, this drove Qualcomm’s total 3Q21 revenue to US$7.7 billion, a 56% YoY increase, ranking first in the world.
Second-ranked NVIDIA is still benefiting from gaming graphics card and data center revenue, as annual revenue growth for these two primary product departments reached 53% and 48%, respectively. In addition, although professional visualization solutions accounted for only 8% of total revenue, enduringly strong demand from miners and customers actively deploying the RTX series of high-performance graphics cards drove this department’s revenue up 148% YoY, with overall revenue increasing by 55% to US$6.6 billion.
Third-ranked Broadcom’s main revenue streams came from its network chip, broadband communication chip, and storage and bridge chip businesses. Driven by post-COVID hybrid working models, companies are accelerating migration to the cloud, increasing demand for Broadcom chips and driving revenue growth to US$5.4 billion, up 17% YoY. AMD’s Ryzen, Radeon, and EPYC series of products in the fields of gaming, data centers, and servers performed well, driving total revenue to US$4.3 billion, 54% growth YoY, and a fifth place ranking overall.
In terms of Taiwanese companies, MediaTek continues to expand its global 5G rollout and, benefiting from optimization of its product portfolio composition, product line specification enhancements, increased sales volume, higher pricing, and other factors, revenue of MediaTek’s mobile phone product line increased 72% YoY. Annual revenue of its other product lines also posted double-digit growth, with total revenue in 3Q21 reaching US$4.7 billion, up 43% YoY, for a fourth place ranking. Novatek continues to focus on its two primary product lines of system-on-chip and panel driver chips. The proportion of OLED panel driver chips in its shipments has increased, product ASPs have risen, and shipments have proceeded smoothly, with 3Q21 revenue reaching US$1.4 billion, up 84% YoY. In addition, Realtek’s revenue surpassed Xilinx’s to take the eighth position due to higher-priced Netcom chips in 3Q21. Himax also saw significant growth in large-size driver chips across its three main product lines of TVs, monitors, and notebooks. Revenue from large-size driver chips increased 111% YoY, driving total revenue past the US$400 million mark, a 75% increase, and enough to squeeze onto this year’s ranking.
Overall, 3Q21 revenue for major IC design (fabless) companies generally reached historic highs. Rankings for the top 7 companies remained the same as in 2Q21, with changes coming in ranks 8 to 10. Looking forward to 4Q21, TrendForce believes Taiwanese IC design (fabless) companies will generally lean conservative. In addition to the electronics industry moving into the traditional off-season, slowing demand for consumer applications and customer-end material supply issues that are reducing procurement will make continued revenue growth a challenge. Beyond consumer electronics, global industry leaders are focused on the positive development of server and data center products to maintain the expected revenue growth trend.
Insights
Nvidia hosted its fall GTC (GPU Technology Conference) in early November, during which the company shared details regarding the progress that it had made on products and services such as AI software, data centers, automotive applications, and healthcare. In particular, Nvidia’s foray into virtual worlds and digital twins, both of which are closely tied to the metaverse, garnered significant attention from the public. By leveraging diverse simulation tools that reflect real-life circumstances, Nvidia has extended the application of virtual worlds from the local scale to the planetary scale, thereby reflecting the metaverse’s pioneering qualities and developmental progress.
Along with the ongoing metaverse craze, Nvidia also released its Omniverse Avatar technology platform as well as its Omniverse Replicator, which the company describes as a “synthetic data-generation engine.” Both of these releases are based on Nvidia Omniverse, a platform that specializes in virtual collaboration. Whereas the Omniverse Avatar platform enables the creation of interactive virtual characters through synergies among voice AI technology, machine vision, and NLP (natural language processing), the Omniverse Replicator generates synthetic data such as velocity, depth, and weather conditions for training DNNs (deep neural networks), which in turn helps construct more realistic, lifelike virtual worlds.
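To make the synthetic-data workflow above concrete, the sketch below is a generic, deliberately simplified Python example: a rule-based “simulator” emits labeled (depth, velocity) samples and a small classifier is trained purely on that synthetic set. It does not use the Omniverse Replicator API; the feature names, labeling rule, and scikit-learn model are illustrative assumptions, with a simple classifier standing in for a DNN.

```python
# Generic toy sketch of training on synthetic data (not the Omniverse Replicator API).
# The features, units, and labeling rule below are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Simulator": synthetic (depth, velocity) readings with a rule-based label,
# e.g. "this scenario requires braking".
n = 5000
depth = rng.uniform(0.5, 50.0, n)        # metres
velocity = rng.uniform(0.0, 30.0, n)     # m/s
y = ((depth < 10) & (velocity > 5)).astype(int)

X = np.column_stack([depth, velocity])
model = LogisticRegression(max_iter=1000).fit(X, y)   # trained only on synthetic data

# Scores for two unseen scenarios: a close, fast obstacle vs. a distant, slow one.
print(model.predict_proba([[5.0, 12.0], [40.0, 3.0]])[:, 1])
```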
Digital twin-based virtual factories are starting to show the first hints of the metaverse
The metaverse value chain primarily revolves around commonly seen infrastructural backbones formed by telecommunications and cloud/edge computing. The virtual space that is then built on top of this infrastructure comprises HMI (human machine interface), decentralization, application creation, and user experiences. More specifically, HMI produces an AI-empowered immersive experience by combining multiple interactive technologies with an AR/VR base layer. At the moment, companies such as Nvidia, Meta (formerly known as Facebook), Microsoft (including Xbox), and Vive are heavily invested in HMI development. Application creation, on the other hand, refers to mechanisms that make the metaverse more lively, reliable, diverse, and attractive. Some examples include graphical tools and cryptocurrency technologies. Representative groups focusing on this field include Roblox, IBM, Google AI, Epic, and Unity.
Apart from the Omniverse Avatar and Replicator, Nvidia also released CloudXR, Showroom, and other Omniverse-based tools for optimizing immersive experiences during GTC. In addition, Nvidia released Modulus, a framework for building neural network models that accelerates the build-out of digital twins. These releases, in turn, demonstrate Nvidia’s competency and leadership in creating AI-driven software tools for the metaverse value chain. With regards to real-life use cases, digital twins currently represent most of Nvidia’s applications. For instance, BMW and Nvidia have partnered to construct a digital twin-based factory via the Omniverse platform capable of connecting ERP (enterprise resource planning), shipment volume simulation, remote-controlled robots, production line simulation, etc. This partnership is indicative of promising early-stage growth of the metaverse.
Nvidia is extending its simulation application from factories to planets
While smart city development has remained one of the main use cases of simulation in recent years, Nvidia has further extended its simulation applications beyond use cases previously limited to individual offices or factory facilities. For instance, BIM (building information modeling) specialist Bentley Systems has teamed up with Nvidia to apply digital twins to public property management and maintenance. Ericsson, on the other hand, is utilizing Nvidia’s technology to construct a digital replica of an entire city for the purposes of checking 5G signal coverage, optimizing base station placement, and improving antenna designs. During GTC, Nvidia also unveiled the Earth-2 system, a supercomputer that generates a digital twin of planet Earth for weather forecasting.
As a matter of fact, most products and services announced by Nvidia during GTC represent either a partial or entry-level application of the metaverse. However, as the post-pandemic new normal continues to drive up the demand for contactless and digital transformation applications, strengthening CPS (cyber physical systems) will remain one of the most significant trends in the market. As real-world environments become increasingly complex due to interactions among an increasing number of tools and use cases, Nvidia will aim to create a comprehensive framework for metaverse development through products/services based on more intelligent, comprehensive, and instant virtual worlds. Hence, TrendForce believes that Nvidia will need to address certain major challenges going forward, including lowering its tools’ usage barriers, strengthening its ecosystem, and attracting new users.
Press Releases
According to TrendForce’s latest report on the server industry, emerging applications in recent years have not only accelerated the pace of AI and HPC development, but the models built for machine learning and inference have also grown correspondingly more complex, involving increasingly sophisticated calculations and more data to process. Confronted with this ever-growing volume of data and the constraints of existing hardware, users must make tradeoffs among performance, memory capacity, latency, and cost. HBM (High Bandwidth Memory) and CXL (Compute Express Link) have emerged in response to this conundrum. In terms of functionality, HBM is a new type of DRAM that addresses more diverse and complex computational needs via its high I/O speeds, whereas CXL is an interconnect standard that allows different processors, or xPUs, to more easily share the same memory resources.
HBM breaks through bandwidth limitations of traditional DRAM solutions through vertical stacking of DRAM dies
Memory suppliers developed HBM in order to break free of the bandwidth constraints posed by traditional memory solutions. In terms of memory architecture, HBM consists of a base logic die with DRAM dies vertically stacked on top of it. The 3D-stacked DRAM dies are interconnected with through-silicon vias (TSVs) and microbumps, enabling HBM’s high-bandwidth design. Mainstream HBM memory stacks contain four or eight DRAM die layers, referred to as “4-hi” or “8-hi”, respectively. Notably, the latest HBM product currently in mass production is HBM2e. This generation of HBM stacks four or eight 16Gb DRAM dies, resulting in a memory capacity of 8GB or 16GB per HBM stack, respectively, with a bandwidth of 410-460GB/s. Samples of the next generation of HBM products, named HBM3, have already been submitted to relevant organizations for validation, and these products will likely enter mass production in 2022.
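As a quick sanity check on the figures above, the short Python sketch below reproduces the per-stack capacity and bandwidth arithmetic. The 1024-bit stack interface and the 3.2-3.6Gbps per-pin data rates used here are assumptions added for illustration; the die density, layer counts, and the resulting 8GB/16GB and 410-460GB/s figures come from the text.

```python
# Minimal sketch of HBM2e per-stack arithmetic.
# Assumptions for illustration: a 1024-bit interface per stack and
# per-pin data rates of 3.2-3.6 Gbps (typical HBM2e figures).

def stack_capacity_gb(die_density_gbit: int, layers: int) -> float:
    """Capacity of one HBM stack in GB (8 bits per byte)."""
    return die_density_gbit * layers / 8

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

for layers in (4, 8):                        # "4-hi" and "8-hi" stacks of 16Gb dies
    print(f"{layers}-hi x 16Gb -> {stack_capacity_gb(16, layers):.0f} GB per stack")

for rate in (3.2, 3.6):                      # assumed per-pin data rates
    print(f"{rate} Gbps/pin x 1024 bits -> {stack_bandwidth_gbs(rate):.0f} GB/s")
```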
TrendForce’s investigations indicate that HBM comprises less than 1% of total DRAM bit demand for 2021 primarily because of two reasons. First, the vast majority of consumer applications have yet to adopt HBM due to cost considerations. Second, the server industry allocates less than 1% of its hardware to AI applications; more specifically, servers that are equipped with AI accelerators account for less than 1% of all servers currently in use, not to mention the fact that most AI accelerators still use GDDR5(x) and GDDR6 memories, as opposed to HBM, to support their data processing needs.
Although HBM currently remains in the developmental phase, as applications become increasingly reliant on AI usage (more precise AI needs to be supported by more complex models), computing hardware will require the integration of HBM to operate these applications effectively. In particular, FPGAs and ASICs represent the two hardware categories most closely related to AI development, with Intel’s Stratix and Agilex-M as well as Xilinx’s Versal HBM being examples of FPGAs with onboard HBM. Regarding ASICs, on the other hand, most CSPs are gradually adopting their own self-designed ASICs, such as Google’s TPU, Tencent’s Enflame DTU, and Baidu’s Kunlun – all of which are equipped with HBM – for AI deployments. In addition, Intel will also release a high-end version of its Sapphire Rapids server CPU equipped with HBM by the end of 2022. Taking these developments into account, TrendForce believes that an increasing number of HBM applications will emerge going forward due to HBM’s critical role in overcoming hardware-related bottlenecks in AI development.
A new memory standard born out of demand for high-speed computing, CXL will be more effective in integrating the resources of the whole system
Built on PCIe Gen5, CXL is an interconnect standard that provides high-speed, low-latency connections between the CPU and accelerators such as GPUs and FPGAs. It enables memory virtualization so that different devices can share the same memory pool, thereby raising the performance of the whole computer system while reducing its cost. Hence, CXL can effectively handle the heavy workloads related to AI and HPC applications.
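To illustrate the memory-pooling idea in the simplest possible terms, the toy Python sketch below compares provisioning each device with its own worst-case memory against sizing a single shared pool for demand that actually coincides. It is purely conceptual: the device list, demand figures, and sizing rule are invented for illustration and are not part of the CXL protocol, which implements pooling in hardware and firmware.

```python
# Toy illustration (not a CXL API): memory pooling lets capacity be sized for
# coincident demand instead of the sum of every device's worst case.
# All devices and figures below are hypothetical example data.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    peak_demand_gb: int      # worst-case memory this device ever needs
    typical_demand_gb: int   # memory it needs most of the time

devices = [
    Device("CPU", 512, 256),
    Device("GPU", 128, 64),
    Device("FPGA", 64, 16),
]

# Without pooling: every device carries dedicated memory for its own worst case.
dedicated_total = sum(d.peak_demand_gb for d in devices)

# With a shared pool: capacity only needs to cover demand that occurs at the
# same time. Here we assume (for illustration) that at most one device peaks
# while the others sit at their typical demand.
pooled_total = max(
    d.peak_demand_gb + sum(o.typical_demand_gb for o in devices if o is not d)
    for d in devices
)

print(f"Dedicated per-device provisioning: {dedicated_total} GB")    # 704 GB
print(f"Shared pool sized for coincident demand: {pooled_total} GB")  # 592 GB
```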
CXL is just one of several interconnection technologies that feature memory sharing. Other examples already in the market include NVLink from NVIDIA and Gen-Z from AMD and Xilinx. Their existence is an indication that the major ICT vendors are increasingly attentive to the integration of various resources within a computer system. TrendForce currently believes that CXL will come out on top in the competition mainly because it was introduced and is being promoted by Intel, which has an enormous advantage with respect to CPU market share. With Intel’s support in the area of processors, CXL advocates and hardware providers that back the standard will be effective in organizing themselves into a supply chain for the related solutions. The major ICT companies that have in turn joined the CXL Consortium include AMD, ARM, NVIDIA, Google, Microsoft, Facebook (Meta), Alibaba, and Dell. All in all, CXL appears to be the most favored among memory interconnect protocols.
The consolidation of memory resources among the CPU and other devices can reduce communication latency and boost the computing performance needed for AI and HPC applications. For this reason, Intel will provide CXL support for its next-generation server CPU Sapphire Rapids. Likewise, memory suppliers have also incorporated CXL support into their respective product roadmaps. Samsung has announced that it will be launching CXL-supported DDR5 DRAM modules that will further expand server memory capacity so as to meet the enormous resource demand of AI computing. There is also a chance that CXL support will be extended to NAND Flash solutions in the future, thus benefiting the development of both types of memory products.
Synergy between HBM and CXL will contribute significantly to AI development; their visibility will increase across different applications starting in 2023
TrendForce believes that the market penetration rate of CXL will rise going forward as this interface standard is built into more and more CPUs. Also, the combination of HBM and CXL will be increasingly visible in the future hardware designs of AI servers. In the case of HBM, it will contribute to a further ramp-up of data processing speed by increasing the memory bandwidth of the CPU or the accelerator. As for CXL, it will enable high-speed interconnections among the CPU and other devices. By working together, HBM and CXL will raise computing power and thereby expedite the development of AI applications.
The latest advances in memory pooling and sharing will help overcome the current hardware bottlenecks in the designs of different AI models and continue the trend of more sophisticated architectures. TrendForce anticipates that the adoption rate of CXL-supported Sapphire Rapids processors will reach a certain level, and memory suppliers will also have put their HBM3 products and their CXL-supported DRAM and SSD products into mass production. Hence, examples of HBM-CXL synergy in different applications will become increasingly visible from 2023 onward.