News Technologies 07-16-2025

Intel introduces a quality meter for scaling and frame generators in games

Intel has released a software utility that evaluates picture quality in games that use upscaling and frame generation.

A new set of tools built around an artificial intelligence model should help objectively evaluate AI-generated images. As you might guess, this primarily applies to the various FSR/DLSS/XeSS modes and frame generators, of which there are already quite a few varieties. The new computer graphics quality metric is called the Computer Graphics Visual Quality Metric (CGVQM) and is already available for download from GitHub.

It’s not uncommon for modern games to render images at non-native resolutions. And when these technologies are used to increase the frame rate, a whole bunch of problems can arise that degrade image quality. Halos, flickering, noise, annoying specks, and many other related «effects» are already well known to modern gamers. Until now, they could only be assessed by eye.

An example of graphic distortion

To create a better evaluation tool, Intel took a two-pronged approach to the problem, described in the paper «CGVQM+D: Computer Graphics Video Quality Metric and Dataset».

First, the researchers created a new dataset called the Computer Graphics Visual Quality Dataset (CGVQD), which contains various types of effects that degrade the image. Among them are distortions caused by ray tracing, neural supersampling, frame interpolation, and several other lesser-known ones.

Second, the engineers taught the AI to distinguish and rate these «troubles». The neural network can evaluate the final rendered image in real time and compare it against a reference.
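As a toy illustration of such a full-reference comparison (this is not Intel's code; the function and pixel values below are invented), a metric takes a distorted frame and its ground-truth reference and reports a distance. CGVQM does this on learned spatio-temporal features rather than raw pixels:

```python
# Toy full-reference comparison: mean absolute error between a distorted
# frame and its ground-truth reference. A real metric like CGVQM compares
# learned spatio-temporal features instead of raw pixel values.

def frame_error(reference, distorted):
    """Mean absolute per-pixel difference between two equal-size frames."""
    assert len(reference) == len(distorted)
    total = sum(abs(r - d) for r, d in zip(reference, distorted))
    return total / len(reference)

ref  = [0.2, 0.5, 0.9, 0.4]   # hypothetical grayscale pixel values
dist = [0.2, 0.6, 0.7, 0.4]   # same frame after upscaling artifacts
print(frame_error(ref, dist))  # small value = close to the reference
```

The key point is that both the clean and the distorted video are needed: the metric scores the distorted render relative to a ground-truth reference rather than in isolation.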

The next step was to compare the AI’s assessments with those of real people. Test participants rated the distortions selected by Intel on a scale from «imperceptible» to «very annoying», giving the neural network authentic data from real observers to rely on. The model itself is a three-dimensional convolutional neural network (3D-CNN) built on the ResNet architecture with its residual connections.
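The fit between a metric and such human ratings is commonly checked with a rank correlation: a good metric should order the distortions the same way the observers did. A minimal pure-Python sketch (the rating labels mirror the scale mentioned above; the numeric mapping and the model scores are invented for illustration):

```python
# Hypothetical sketch: how well do a metric's scores track human ratings?
# Labels and scores below are illustrative, not from Intel's study.

RATING_SCALE = {  # human labels mapped to numbers
    "imperceptible": 0,
    "perceptible": 1,
    "slightly annoying": 2,
    "annoying": 3,
    "very annoying": 4,
}

def ranks(values):
    """1-based ranks of the values, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank over the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

human = [RATING_SCALE[l] for l in
         ["imperceptible", "annoying", "very annoying", "perceptible"]]
model = [0.1, 0.7, 0.9, 0.3]  # hypothetical metric outputs
print(round(spearman(human, model), 3))  # → 1.0: same ordering as humans
```

A correlation near 1 means the metric ranks distortions the same way people do, which is exactly the property a perceptual quality metric is trained for.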

The paper argues that the CGVQM model outperforms almost any other comparable image quality assessment tool, at least on its own dataset. The researchers also show that the model detects distortions well even on material it was not trained on.

The paper leaves some open questions for improving the utility. For example, it would be interesting to know how it would perform with a transformer model (the architecture behind ChatGPT and other modern services) instead of 3D CNNs. Here the researchers shrug their shoulders: their computing resources are not yet sufficient for this.

Source: Tom’s Hardware

