HWiNFO64 (version 8.24) does not correctly identify the Intel integrated GPU on the Intel Core 5 120U CPU

HWiNFO64 (version 8.24) does not correctly identify the Intel integrated GPU on the Intel Core 5 120U CPU.
It identifies the integrated GPU as an Intel Iris Xe, even though the Intel ARK specifications for that processor indicate "Intel Graphics".
Any reason for that?
(Note that the Intel Support Utility indicates "Intel Graphics".)

 
I was wondering how HWiNFO identifies the GPU model.
Intel publishes in ARK a unique ID for each integrated GPU.
Does HWiNFO use that ID, or does it obtain the model some other way?

The published GPU Device ID for the Core 5 120U (and also for the Core 5 220U) is 0xA7AC, while it is 0xA7A1 for the Core i5-1335U, which effectively offers an Iris Xe GPU.
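To illustrate the ID-based identification being asked about, here is a minimal sketch of a device-ID-to-brand lookup. The two device IDs come from this thread; the table layout, function name, and fallback label are my own assumptions, not HWiNFO's actual internals.

```python
# Hypothetical iGPU branding lookup keyed on the PCI device ID.
# IDs taken from this thread; everything else is an assumption.
IGPU_BRANDS = {
    0xA7A1: "Intel Iris Xe Graphics",  # Core i5-1335U
    0xA7AC: "Intel Graphics",          # Core 5 120U / Core 5 220U
}

def brand_for_device_id(device_id: int) -> str:
    # Fall back to a generic label for IDs not in the table
    return IGPU_BRANDS.get(device_id, "Intel Graphics (unknown ID)")
```

If the tool relied purely on such a table, 0xA7AC would resolve to "Intel Graphics", not "Iris Xe", which is why the report above is surprising.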

Also, how does HWiNFO determine whether the GPU is in Iris Xe mode (when the system has dual-channel memory offering a 128-bit access path) vs. in Intel Graphics mode (when the system does not have dual-channel memory), for example on the Core i5-1335U (this should not be applicable to the 120U)?
(Same consideration for ARC mode on newer processors.)
 
Primarily it's based on the ID, but there are also other criteria (like the ones you already mentioned, or the number of EUs).
HWiNFO can detect the number of memory channels.
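The reply above describes combining the device ID with the memory-channel check. A hedged sketch of how such a combined rule might look; the set name, function name, and exact rule are assumptions, not HWiNFO's actual code:

```python
# Hypothetical combined branding rule: the device ID decides whether
# the part is Iris-Xe-capable, and the memory configuration decides
# which brand applies, per Intel's 128-bit dual-channel guidance.
IRIS_XE_CAPABLE_IDS = {0xA7A1}  # e.g. Core i5-1335U (ID from this thread)

def igpu_brand(device_id: int, has_128bit_dual_channel: bool) -> str:
    if device_id in IRIS_XE_CAPABLE_IDS and has_128bit_dual_channel:
        return "Intel Iris Xe Graphics"
    return "Intel UHD Graphics"
```

Under this sketch, the same silicon reports a different brand depending purely on how the memory slots are populated.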
 
OK, but on recent computers with DDR5, the DIMMs are designed as 2 channels × 32 bits, which (in my understanding) does not meet Intel's criterion of 2 channels × 64 bits, i.e. 128-bit memory access:
"To use the Intel® Iris® Xe brand, the system must be populated with 128-bit (dual channel) memory. Otherwise, use the Intel® UHD brand."

So a DDR5 system with only 1 DIMM is identified as 2-channel memory, but those are 2 × 32 bits and do not meet the criterion.
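The ambiguity can be made concrete with a small sketch. A DDR4 DIMM exposes one 64-bit channel, while a DDR5 DIMM exposes two independent 32-bit subchannels, so channel count and total bus width diverge. Function names are hypothetical:

```python
# Sketch of the DDR5 subchannel ambiguity: one DDR5 DIMM reports
# two channels yet still provides only a 64-bit total data path.
def total_bus_width_bits(ddr_version: int, dimm_count: int) -> int:
    # Per DIMM: DDR4 = one 64-bit channel; DDR5 = 2 x 32-bit subchannels.
    # Either way, each DIMM contributes 64 bits of total width.
    return dimm_count * 64

def reported_channels(ddr_version: int, dimm_count: int) -> int:
    # A channel-counting tool sees 2 subchannels per DDR5 DIMM
    subchannels_per_dimm = 2 if ddr_version >= 5 else 1
    return dimm_count * subchannels_per_dimm
```

So a naive "is it dual channel?" check passes on a single DDR5 DIMM even though the 128-bit criterion is not met; the tool would need total bus width, not channel count.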
For example, on a system with 1 DIMM (the CPU is a 155H):
For the 155H, the GPU mode is ARC; essentially the same criterion, but expressed a little differently:
"H-series Intel® Core™ Ultra processor-powered systems with at least 16GB of system memory in a dual-channel configuration"
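The ARC criterion quoted above can be sketched the same way; the function name and the shape of the inputs are assumptions, with the thresholds taken directly from the quote:

```python
# Hedged sketch of the Arc branding rule quoted above:
# H-series + >= 16 GB + dual-channel configuration => "Arc".
def arc_branding(memory_gb: int, dual_channel: bool, h_series: bool) -> str:
    if h_series and dual_channel and memory_gb >= 16:
        return "Intel Arc Graphics"
    return "Intel Graphics"
```

Note that "dual-channel configuration" here inherits the same DDR5 subchannel ambiguity discussed above.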

So we understand that HWiNFO cannot determine the GPU mode with 100% accuracy, because it has to assume things and cannot query the current mode from the system!
 
Some (awkward) marketing ideas are hard to meet. Technically the GPU is the same.
 
Exactly; we believe the GPU is the same and has the same functionality, just that with a 128-bit (dual-channel) path it can access memory faster, in one 128-bit step... similar to how, with dual-channel 2 × 64-bit memory, different cores can access different memory locations at the same time in 64-bit accesses (or, with DDR5, multiple cores can access 4 different memory locations at the same time in 32-bit accesses).

So, as you indicate, some (awkward) marketing ideas that just introduce a lot of confusion for the market and users!

Some internal testing seems to indicate a 15% to 25% GPU performance increase with a 128-bit channel vs. a single 64-bit channel (on a dual-channel 2 × 32-bit DDR5 DIMM configuration with a Core Ultra 7 155H CPU). The increase could be smaller on a system with a single-channel 1 × 64-bit DDR4 DIMM, depending on whether the performance tests use 32-bit or 64-bit words; I did not test it!

Thanks for the replies!
 