How fast are Intel dedicated graphics cards?

The enthusiast graphics card market is dominated today by a duopoly of AMD and NVIDIA, but Intel does not want to be left behind and is keen to enter that market in full force. Its bet is the Intel Xe architecture, which so far we have only seen in the DG1-based Intel Lakefield, but the company plans to enter the dedicated graphics card market with its DG2 architecture. What do the latest rumors say about it?

Intel is not synonymous with graphics power: its last two attempts to enter as a serious competitor ended in resounding failure. First with the Intel i740, which appeared in the late 90s, and later with Larrabee, which evolved into the Xeon Phi only to fall completely into oblivion. But they say the third time's a charm, and Intel has a huge strategic interest, especially as a defense against AMD, in having competitive graphics hardware.

It seems that now, with the hiring of Raja Koduri, Intel has a real chance of entering the dedicated graphics market. In this article I will bring you up to date with the latest rumors about the Intel Xe HPG (High Performance Gaming) GPUs. To be perfectly clear, we are not talking about Intel's server-side graphics cards; we are focusing on gamers. This is going to be merely speculative, but it is based on the information we have now and on comparisons with NVIDIA's and AMD's latest graphics cards.

Intel GPU


How fast are Intel dedicated graphics cards?

Intel's first dedicated graphics card is coming in late 2020, and it is aimed at laptops: the Iris Xe MAX, which is already included in new models from laptop manufacturers such as Acer and Asus. Intended for content creation and light gaming, these laptops will perform slightly better than those equipped with Intel's Xe integrated graphics. For perspective, the Intel Xe G7 iGPU is comparable to NVIDIA's MX350, although that is an unfair comparison, as the MX350 is a dedicated graphics card.

Enter the Iris Xe MAX, based on the same architecture as the Xe G7 iGPU but with the advantages of being a dedicated GPU: dedicated VRAM (4 GB in the case of the Asus laptop), higher clocks, and a higher TGP. The Iris Xe MAX is expected to perform better than an MX350 but close to or below an MX450. These NVIDIA GPUs are based on the Pascal and Turing architectures respectively, the newer of the two being an entire architecture behind the latest NVIDIA Ampere graphics cards. The green team could launch an Ampere GPU for thin-and-light laptops not long after Intel, making the blue team's efforts almost pointless if they do not step up their game.

Extrapolating the current performance of the Intel Xe iGPU and the Iris Xe MAX, we can expect the dedicated desktop graphics to reach performance close to NVIDIA's Turing generation, but keep in mind that Intel seems to target the budget and mid-range sectors only. According to leaks, the Intel Xe HPG has been cut from 960 to 512 EUs, which is 4096 "stream processors" or the equivalent of 64 Compute Units, a number on par with AMD's Navi 21 XL GPU or NVIDIA's RTX 3070. However, as said before, Intel's first dedicated GPU architecture (not counting previous failures) is expected to land closer to the older NVIDIA Turing generation in terms of performance.

Taking everything into consideration, this is a great start for Intel in the dedicated GPU market. We should not expect the new player (even if it is a giant) to come in blasting and sweeping the floor with the likes of NVIDIA and AMD. Instead, we should focus on the benefit of having more competition, and therefore better prices and performance. Nevertheless, we expect Intel to step up its game in the future with more than gradual improvement, because, as we all know, gradual improvement on the CPU side made that market rather stagnant.



Intel DG2, the architecture of the Xe-HPG

Intel has not yet made an official presentation of its DG2 architecture, which will be focused on the enthusiast gamer market. Its objective would be to go head-to-head against NVIDIA's RTX 3000 and AMD's RX 6000, and the GPU would share elements with its rivals such as hardware-accelerated ray tracing and Variable Rate Shading.

Apart from that we know from the driver leaks that the entire Intel Xe HP line, based on DG2, could have integrated neural processors in the style of NVIDIA's Tensor Cores.

Beyond that, it would be an evolution of the Intel DG1 that we have already seen in Intel Lakefield, but on a larger scale: a much wider GPU with a greater number of processing units.


Intel Xe HPG leaks and rumor corrections

Recently, new rumors have appeared about the GPU with which Intel wants to stand up to AMD and NVIDIA in the higher-end graphics card segment.

Firstly, there is talk about the manufacturing process used, which would be TSMC's N6, meaning Intel would not manufacture its own GPU. TSMC's 6 nm process is an EUV process conceived as an evolution of the 7 nm process. This means that designs for N7 can be carried over to N6 and gain roughly an additional 20% in transistors per mm².
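As a back-of-the-envelope check of that density claim, here is a minimal sketch; the 20% gain is TSMC's headline figure, while the 90 MTr/mm² starting density is a hypothetical example value, not an official N7 number:

```python
# Scale a hypothetical N7 logic density by the claimed ~20% N6 gain.
N6_DENSITY_GAIN = 0.20  # TSMC's headline figure for N6 vs N7

def n6_density(n7_mtr_per_mm2: float) -> float:
    """Return the equivalent N6 density for a given N7 density (MTr/mm^2)."""
    return n7_mtr_per_mm2 * (1 + N6_DENSITY_GAIN)

print(n6_density(90.0))  # hypothetical 90 MTr/mm^2 on N7 -> 108.0 on N6
```

The same multiplier applies to any ported design, which is why an N7 layout reused on N6 shrinks "for free" without a redesign.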

Another rumor is that the Intel Xe HPG graphics card would carry about 16 GB of GDDR6, although this seems very unlikely, as not even the RTX 3080, a better-performing GPU than anything Intel could build, has that much VRAM.


Monolithic chip instead of chiplets

The decision to use a monolithic chip would have caused a cut in the planned EU count for the Intel Xe HPG from 960 to 512, which is 4096 "stream processors" or the equivalent of 64 Compute Units. Without taking into account the efficiency differences between architectures, that would make it an equivalent of AMD's Navi 21 XL GPU, so it would compete against AMD's RX 6800 and NVIDIA's RTX 3070, all with a power consumption between 150 and 200 W.
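The EU-to-CU arithmetic above can be made explicit; a minimal sketch, assuming the commonly cited figures of 8 ALUs per Intel Xe EU and 64 stream processors per AMD Compute Unit:

```python
# Rough shader-count equivalences, using the figures cited in the leaks:
# each Intel Xe EU packs 8 ALUs ("stream processors"), and AMD groups
# 64 stream processors into one Compute Unit (CU).
ALUS_PER_EU = 8
SPS_PER_CU = 64

def eus_to_stream_processors(eus: int) -> int:
    """Convert an Xe EU count into an equivalent stream-processor count."""
    return eus * ALUS_PER_EU

def stream_processors_to_cus(sps: int) -> int:
    """Convert a stream-processor count into the equivalent AMD CU count."""
    return sps // SPS_PER_CU

sps = eus_to_stream_processors(512)   # 512 EUs -> 4096 stream processors
cus = stream_processors_to_cus(sps)   # 4096 / 64 -> 64 Compute Units
print(sps, cus)                       # -> 4096 64
```

Note that these are raw unit counts only; as the article stresses, per-unit efficiency differs between architectures, so equal counts do not imply equal performance.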

As for the launch price of these cards, we will not know until Intel confirms it, but it is supposed to be between $400 and $500.


Why would Intel have canceled the design with multiple tiles or chiplets?

Not long ago, Raja Koduri showed us prototypes of Intel Xe GPUs with 1, 2, and 4 tiles or chiplets, and speculation began that these could be the chips with which Intel would compete against NVIDIA and AMD.

There are two reasons why, for a dedicated graphics card aimed at the enthusiast gamer market, Intel would have opted for a monolithic design, that is, a GPU made up of a single chip:

  • The first is that dividing a single GPU into several chiplets means the bandwidth between the different parts has to be immense. The problem with the processors' external interfaces is that they consume a lot of power, making it necessary to resort to exotic solutions like interposers and TSV wiring that make the product extremely expensive.
  • The second possibility involves placing two or more complete graphics chips on the same interposer, each of them independent. But consumer software only uses one GPU and ignores the others, so a dual or quad GPU would be a waste that no game would take advantage of; it is better for Intel to make a monolithic chip.

But there is a reason why Intel had originally decided to go for a tiles/chiplets configuration, which is none other than that its own 10 nm process performs poorly once chips exceed a certain size. This is why its 11th-generation processors are going to come out on its 14 nm process, and it is why Intel has decided to manufacture its GPU at TSMC and make a monolithic chip for the Intel Xe HPG.

On the other hand, this does not affect Intel's plans for the rest of the Intel Xe HP ranges, where Intel has already shown the idea of manufacturing GPUs using the second approach, that is, several GPUs on the same interposer. The monolithic design would therefore only be for the Intel Xe HPG, if the latest rumors are confirmed.