Unlocking the Potential of AI Chips: ChatGPT Leads the Way

February 13, 2023
ChatGPT has recently surged in popularity, and the AI chips that supply its computing power have become a key infrastructure investment for manufacturers.

AI applications like ChatGPT require extensive model training, and their ability to intelligently process vast amounts of data rests on powerful computing capability. Take the GPT-3 model as an example: its capacity to store knowledge comes from its 175 billion parameters, and training it required roughly 3,640 petaflop/s-days of compute.
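The training-compute figure above can be sanity-checked with the common ~6·N·D approximation (N = parameters, D = training tokens). This is a rough sketch, not an official calculation; the 300-billion-token figure is the one reported in the GPT-3 paper.

```python
# Back-of-the-envelope estimate of GPT-3's training compute using the
# standard ~6 * N * D FLOPs approximation (N = parameters, D = tokens).

N = 175e9           # parameters (175 billion)
D = 300e9           # training tokens reported for GPT-3
flops = 6 * N * D   # ~3.15e23 floating-point operations

# Convert to petaflop/s-days: 1 PF/s-day = 1e15 FLOP/s * 86,400 s
pf_days = flops / (1e15 * 86400)
print(f"{pf_days:.0f} petaflop/s-days")  # ~3646, close to the cited ~3,640
```

The small gap between ~3,646 and the cited ~3,640 reflects that 6·N·D is only an approximation of the true operation count.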

Is Nvidia Welcoming a Market Opportunity?

According to available data, the growth of computing power used for AI training once tracked Moore's Law, roughly doubling every 20 months. The emergence of deep learning accelerated this expansion, with training compute doubling roughly every 6 months. With the arrival of large-scale models, however, the training compute required has jumped to 10 to 100 times that of previous models.
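The gap between those two doubling rates compounds quickly. A minimal sketch, using the article's own figures (20 months vs. 6 months):

```python
# Compare total compute growth under the two doubling rates mentioned
# above: Moore's-law-like (every 20 months) vs. the deep-learning era
# (every 6 months).

def growth_factor(months_elapsed, doubling_months):
    """Total multiplicative growth after the given number of months."""
    return 2 ** (months_elapsed / doubling_months)

months = 5 * 12  # five years
moore = growth_factor(months, 20)  # 2^3  = 8x over 5 years
dl = growth_factor(months, 6)      # 2^10 = 1024x over 5 years
print(f"Over 5 years: {moore:.0f}x vs {dl:.0f}x")
```

Over the same five years, a 6-month doubling cadence yields over a hundred times more growth than a 20-month one, which is why large-model training budgets escalate so fast.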

It's no wonder that, before building GPT-3, OpenAI had Microsoft spend $1 billion to construct one of the top five supercomputers in the world. Even so, ChatGPT announced on February 7th that it was suspending its service due to network capacity constraints, and the service had already gone down many times before.

Nvidia GPU

ChatGPT mainly involves AI natural language processing technologies, and its underlying compute is based primarily on high-performance GPUs. One AI executive said that Nvidia, which dominates the market for chips optimized for large-model training, is the biggest winner from ChatGPT.

From a chip-technology perspective, Nvidia's CUDA architecture was originally built for gaming GPUs, which are far better suited to large-scale parallel computing than CPUs. Around CUDA, Nvidia has accumulated a mature developer ecosystem. At the start of this wave of AI development, there were no specialized AI accelerator chips on the market; developers found that Nvidia's GPUs could be applied to AI training and inference, so they largely adopted them.
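The reason GPUs suit this workload is that most deep-learning math decomposes into thousands of independent per-element tasks. The pure-Python sketch below only models that decomposition with threads; real CUDA code would launch one GPU thread per element.

```python
# Illustration of data parallelism: an elementwise operation splits into
# independent per-index tasks, which a GPU runs on many cores at once.
# This is a conceptual sketch, not actual CUDA code.

from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y, out):
    """One 'thread' of work: out[i] = a * x[i] + y[i]."""
    out[i] = a * x[i] + y[i]

n = 8
a = 2.0
x = list(range(n))   # [0, 1, ..., 7]
y = [1.0] * n
out = [0.0] * n

# Each index is independent, so all n tasks can run concurrently,
# just as a GPU assigns one hardware thread per element.
with ThreadPoolExecutor() as pool:
    for i in range(n):
        pool.submit(saxpy_kernel, i, a, x, y, out)

print(out)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

A CPU with a handful of cores must iterate through such work in large chunks, while a GPU with thousands of cores can process nearly all elements simultaneously; that throughput gap is what CUDA exposes to developers.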

In addition, Nvidia's products have advantages in versatility and compute density. Because algorithm models have grown so large, demand for system-level multi-chip interconnects and high-bandwidth networked storage is increasing exponentially. Nvidia anticipated this early and, through acquisitions and in-house R&D, has assembled a complete and mature solution.

ChatGPT Took the World by Storm, and Demand for AI Chips Is Growing

The world's major technology companies have already announced their progress on ChatGPT-like products. Whatever they spend, and whatever the prospects of the competing products, it is clear that the companies supplying the key underlying chips will benefit from ChatGPT's popularity.

Demand for AI chips is growing amid the ChatGPT boom sweeping the world, and AIGC (AI-generated content) will create a huge market for computing power.

Today's mainstream CPU and GPU chips are the product of more than 30 years of cultivation by the US government and many European and American technology companies. The field already has a mature landscape, from instruction sets and chip-design software to the lithography machines needed for manufacturing. Beyond the visible technical patents and industry standards, decades of accumulated know-how have formed a huge industrial system.

Demand for chips is growing

Continued Research into AI Computing Power

Artificial intelligence has been developing for many years, and its commercialization has accelerated recently; research into AI's underlying computing power began even earlier. Commercial giants and leading research institutions, such as China's Baidu, Huawei, and Alibaba, have invested continuously in training large models like ChatGPT, with cumulative spending on data, hardware, and talent exceeding 10 billion. There is clearly still a long way to go, but large-model applications are on the eve of an industry boom and hold great potential.

In the chip field, although Nvidia holds the first-mover advantage, other manufacturers are catching up, and companies with strong AI capabilities are steadily improving their positions in the AI ecosystem.