
Recently ousted Intel CEO Pat Gelsinger has once again said that NVIDIA's Jensen Huang was lucky, but this time he went into more depth to justify the claim.
GPUs have taken center stage in artificial intelligence, and NVIDIA is now one of the most valuable companies in the world, while Intel is in crisis. But it wasn't always this way. 15-20 years ago, Intel processors dominated computing because they could handle all the mainstream workloads. During that period, Intel missed its opportunity with Larrabee, a project that tried to build a GPU on the x86 architecture, while NVIDIA bet on GPUs and won.
"The CPU was king of the mountain [in the mid-2000s], and I applaud Jensen for his persistence in just saying, 'No, I'm not trying to build one of those, I'm trying to do the workload starting with graphics.' You know, that became a broader view. And then he got lucky with artificial intelligence, and one day when I was having a discussion with him, he said, 'No, I really got lucky with the AI workload because it requires this type of architecture,'" Gelsinger said.
One of the reasons Larrabee was canceled as a GPU in 2009 was its inability to compete with AMD and NVIDIA products of the time. To some extent this was a consequence of Intel's desire to make Larrabee as programmable as possible, which meant omitting key fixed-function GPU blocks such as raster operation units (ROPs). That hurt performance and increased the complexity of software development.
"I had a project that was well known in the industry called Larrabee that tried to combine CPU programmability with a bandwidth-oriented [GPU] architecture, and I think if Intel had stayed on that path, you know, the future might have been different. I have a lot of respect for Jensen, [because] he just stayed true to performance computing."
Unlike AMD and NVIDIA GPUs, which use their own instruction set architectures (ISAs), Intel's Larrabee used the x86 ISA with dedicated vector extensions. This was an advantage for general-purpose parallel computing workloads but a disadvantage for graphics applications.
"Today we're thinking about the training workload, okay, but you'll have to deliver something much more optimized for inference. You know that GPUs are too expensive; I'm saying they're 10,000 times more expensive than you need [at this stage]," Gelsinger added.
As a result, Intel Larrabee was reintroduced as the Xeon Phi processor for supercomputers in 2010. However, it did not gain popularity, as traditional GPU architectures had by then gained general-purpose computing capabilities through the CUDA framework as well as the OpenCL/Vulkan and DirectCompute APIs. After Xeon Phi failed to meet expectations, Intel abandoned the project. Who knows: perhaps at some point NVIDIA will lose its advantage because of the excessive price of its solutions, just as Intel once lost because of the technical shortcomings of its own.
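To give a sense of what "general-purpose computing capabilities through CUDA" means in practice, here is a minimal sketch (not from the article, and deliberately simplified): a CUDA program that adds two arrays of numbers on the GPU, the kind of non-graphics workload NVIDIA's hardware learned to run alongside rendering. The array size and launch parameters are arbitrary choices for illustration.

    // Minimal general-purpose (non-graphics) CUDA kernel: element-wise array addition.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per array element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                          // 1M elements (arbitrary size)
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));       // unified memory visible to CPU and GPU
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
        add<<<(n + 255) / 256, 256>>>(a, b, c, n);      // enough 256-thread blocks to cover n
        cudaDeviceSynchronize();                        // wait for the GPU to finish
        printf("c[0] = %f\n", c[0]);                    // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

It was exactly this style of programming, rather than graphics APIs, that opened NVIDIA's GPUs to scientific computing and, later, AI workloads.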
Sources: Tom's Hardware, Wccftech