Intel XeSS Upscaler Plugin Is Now Available in Unreal Engine!

Author

Sreyasha

Date

Nov 19, 2022

After announcing Unreal Engine support in March, Intel has released XeSS for Unreal Engine 4 and 5 as a plugin. This makes it easy for developers to add XeSS to their Unreal Engine projects without manually integrating the XeSS SDK.

The plugin also makes Unreal Engine compatible with all four major upscaling technologies from Nvidia, AMD, and Intel: DLSS, NIS, FSR, and XeSS. Unreal Engine's own built-in upscaling solutions remain supported as well.

The XeSS plugin supports Unreal Engine versions 4.26, 4.27, and 5.0. For the time being it is available exclusively on GitHub, but we expect it to arrive on the Unreal Engine Marketplace soon, like AMD's FSR and Nvidia's DLSS.
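For readers who want to try it, the sketch below shows one way to confirm at startup that the plugin is installed and enabled, using Unreal's standard IPluginManager API. The plugin name "XeSS" is an assumption; check the .uplugin file shipped in Intel's GitHub release for the exact name.

```cpp
// Minimal sketch: confirm the XeSS plugin is present and enabled.
// The plugin name "XeSS" is an assumption; verify it against the
// .uplugin file in Intel's GitHub release.
#include "Interfaces/IPluginManager.h"

bool IsXeSSPluginEnabled()
{
    TSharedPtr<IPlugin> Plugin = IPluginManager::Get().FindPlugin(TEXT("XeSS"));
    return Plugin.IsValid() && Plugin->IsEnabled();
}
```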

Intel’s plugin replaces Unreal Engine’s Temporal Anti-Aliasing (TAA) with XeSS, applying the upscale after the rasterization and lighting stages of the rendering pipeline are complete, at the beginning of the post-processing stage. XeSS upscales only the necessary parts of the frame, leaving other elements of the game, like the HUD, rendered at native resolution for better image quality.
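In practice, Unreal upscalers of this kind are usually toggled through console variables. Below is a minimal sketch assuming the plugin exposes variables named r.XeSS.Enabled and r.XeSS.Quality; both names are assumptions here, and the plugin's documentation on GitHub lists the actual ones.

```cpp
// Minimal sketch: switch from TAA to XeSS at runtime via console
// variables. The names r.XeSS.Enabled and r.XeSS.Quality are
// assumptions; consult the plugin's README for the real ones.
#include "HAL/IConsoleManager.h"

void EnableXeSS()
{
    if (IConsoleVariable* Enabled =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.XeSS.Enabled")))
    {
        Enabled->Set(1); // hand the upscaling pass over to XeSS
    }
    if (IConsoleVariable* Quality =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.XeSS.Quality")))
    {
        Quality->Set(2); // hypothetical quality tier, e.g. "Balanced"
    }
}
```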

Xe Super Sampling (XeSS) is Intel’s take on temporal resolution upscaling, competing with AMD’s FidelityFX Super Resolution (FSR) and Nvidia’s Deep Learning Super Sampling (DLSS). XeSS aligns most closely with DLSS: it is an AI-based upscaler that uses trained neural networks to upscale images. XeSS offers two modes of operation for different GPU types.

These two modes are a “higher-level” version that runs on Intel’s XMX AI cores, found exclusively in its Arc Alchemist GPUs, and a “lower-level” mode built on the DP4a instruction that can run on other GPUs, including those from Nvidia and AMD.

Not much is known about how the two modes differ in quality and performance. We also do not know whether the DP4a mode uses a different trained network than the main version that runs on Intel’s XMX cores. A different network does not automatically mean better or worse image quality, but we would not be surprised if the DP4a version makes some performance and visual sacrifices.

DP4a executes INT8 dot-product operations on a GPU’s general-purpose shader cores, which handle this work more slowly than Intel’s XMX cores. XMX units are dedicated hardware for INT8 matrix operations and can process them much faster as a result.
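For the curious, DP4a itself is just a four-way INT8 dot product with a 32-bit accumulate: each 32-bit register holds four packed signed 8-bit values, the lanes are multiplied pairwise, and the sum is added to an accumulator. Here is a scalar C++ sketch of that semantic, mirroring CUDA's __dp4a intrinsic:

```cpp
#include <cstdint>

// Scalar emulation of the DP4a instruction: a four-way INT8 dot
// product accumulated into a 32-bit integer. Hardware does all four
// multiply-adds in a single instruction on shader cores; XMX units go
// further and process whole INT8 matrix tiles at once.
int32_t Dp4a(uint32_t A, uint32_t B, int32_t Acc)
{
    for (int Lane = 0; Lane < 4; ++Lane)
    {
        // Extract each packed signed 8-bit lane from the 32-bit words.
        const int8_t ALane = static_cast<int8_t>(A >> (8 * Lane));
        const int8_t BLane = static_cast<int8_t>(B >> (8 * Lane));
        Acc += static_cast<int32_t>(ALane) * static_cast<int32_t>(BLane);
    }
    return Acc;
}
```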