When NVIDIA (NVDA) reports its second quarter results on August 27, investors will zero in on the company's data center business. After all, the chip giant derives the bulk of its revenue from sales of its high-powered AI processors.
But the data center segment includes more than just chip sales. It also reflects one of NVIDIA's most important, though often overlooked, offerings: its networking technologies.
NVIDIA's networking products, made up of its NVLink, InfiniBand, and Ethernet solutions, allow its chips to communicate with each other, let servers talk to one another across huge data centers, and ultimately ensure that end users can connect to AI applications.
"The most important part of building a supercomputer is the infrastructure. The key is how you connect those computing engines together to form that larger unit of computing," explained Gilad Shainer, senior vice president of networking at NVIDIA.
NVIDIA CEO Jensen Huang visits the 9th edition of the Viva Technology show at Parc des Expositions de la Porte de Versailles on June 11, 2025, in Paris. (Chesnot/Getty Images) · Chesnot via Getty Images
Networking also means big money. NVIDIA's networking sales accounted for $12.9 billion of its $115.1 billion in data center revenue in the previous fiscal year. That may not seem impressive next to the $102.1 billion its chips brought in, but it tops the $11.3 billion generated by gaming, NVIDIA's second-largest segment, over the same year.
In the first quarter, networking made up $4.9 billion of NVIDIA's $39.1 billion in data center revenue. And it should continue to grow as customers keep building out their AI capacity, whether at research universities or in massive data centers.
"It's one of the most undervalued parts of NVIDIA's business relative to its size," Gene Munster, managing partner at Deepwater Asset Management, told Yahoo Finance. "Essentially, networking doesn't get attention because it's 11% of revenue. But it's growing like a rocket ship."
To keep up with the AI explosion, the company has to develop three different types of networks, says Kevin Deierling, NVIDIA's senior vice president of networking. The first is its NVLink technology, which connects GPUs within a server or across several servers in a tall, cabinet-like server rack, allowing them to communicate with one another and boost overall performance.
Then there's InfiniBand, which connects multiple server nodes across a data center to form what is essentially one massive AI computer. Finally, there's the front-end network for storage and system management, which runs on Ethernet connections.
NVIDIA CEO Jensen Huang presents the Grace Blackwell NVLink72 as he delivers the keynote address at the Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 6, 2025. (Patrick T. Fallon/AFP via Getty Images) · Patrick T. Fallon via Getty Images
"All three of those networks are needed to build AI at a massive scale, or even at a moderate enterprise scale," Deierling explained.
But the purpose of all these different connections isn't just to help chips and servers communicate. They're also designed to let them do so as quickly as possible. If you're trying to run a string of servers as a single computing unit, they need to talk to each other in a split second.
A GPU left waiting for data slows down the entire operation, holding up other processes and hurting the overall efficiency of the data center.
"[Nvidia is a] very different business without networking," Munster explained. "[The results] wouldn't be happening if it weren't for their networking."
And as companies continue to build larger AI models and autonomous and semi-autonomous AI agents that can perform tasks for consumers, they'll need their chips working in concert with one another.
That's especially true because inference, the act of running AI models, requires increasingly powerful data center systems.
The AI industry's thinking around inference has broadly transformed. Early in the AI explosion, the assumption was that training AI models would require extraordinarily powerful AI computers, while actually running them would take considerably less horsepower.
That led to consternation on Wall Street earlier this year, when DeepSeek said it had trained its AI models on less powerful NVIDIA chips. The thinking at the time was that if companies could train and run their AI models on lower-end chips, there would be no need for NVIDIA's expensive large-scale systems.
That narrative quickly flipped, however, when the chip company pointed out that running those same AI models on powerful AI computers allows them to reason through more information than they could on less advanced systems.
"I think there's still a misconception that inference is trivial and easy," Deierling said.
"It turns out it's starting to look more and more like training as we move to [an] agentic workflow. So all of these networks are important. Having them tightly coupled to the CPU, the GPU, and the DPU [data processing unit], all of that is vitally important to delivering a good experience."
Still, NVIDIA's competitors are circling. AMD is looking to grab more market share from the company, and cloud giants such as Amazon, Google, and Microsoft continue to develop their own chips.
Industry groups also have their own competing networking technologies, including UALink, which is meant to go head-to-head with NVLink, explained Forrester analyst Alvin Nguyen.
So far, though, NVIDIA continues to lead the pack. And as long as tech giants, researchers, and enterprises keep vying for NVIDIA's chips, the company's networking business is all but guaranteed to keep growing.