Hi All,

I need some help understanding FSR and how to use it.

Until recently, my only piece of gaming hardware was the Steam Deck. On this, the native (OS-level) FSR is easy to understand: drop the in-game resolution to something less than the display’s native 1280×800 and enable FSR, which then upscales the image. This makes sense to me.

Recently, I got myself a dedicated gaming PC as well (running a 6700 XT). I’ve been playing around with the FSR option using Hellblade: Senua’s Sacrifice as a benchmark, running in DX11 mode without ray tracing. I’m using a 1080p display.

From AMD’s control software, a ‘Radeon Super Resolution’ (RSR) mode can be enabled, which I understand is basically the same FSR that runs at the OS level on the Steam Deck. It does nothing if the in-game resolution is the same as the display’s native resolution, but as soon as the in-game resolution is lowered, it applies spatial upscaling. So I drop my in-game resolution to 720p, enable RSR, and I can see the upscaling at work. This also makes sense.

Where I get confused is how in-game FSR fits into the picture. Hellblade has native (in-game) FSR implemented. When running at 1080p with no FSR and all settings maxed, I typically see close to 100% GPU utilization. When I enable FSR in-game, still at 1080p, GPU utilization drops to 75-80% with almost no visual impact (slight sharpening, it seems, but I wouldn’t notice it without side-by-side screenshots). Framerates are of course more stable with the lower utilization.

So I don’t quite understand how this works. Does the game automatically render at a lower resolution (without my having to adjust the in-game resolution) and then upscale? Why is it not necessary to change the in-game resolution here? Do all games implement native FSR in this manner?

Also, should the two be used mutually exclusively? I tried enabling both (enabling RSR in AMD’s control software, dropping the in-game resolution to 720p, and enabling in-game FSR). It worked, but it certainly looked strange; I’m not sure how to describe it, almost like a watery effect. I’m assuming upscaling was applied twice in this instance?

Anyway, some insight would be much appreciated!

  • ET3D@alien.topB

    First of all, it’s worth noting that FSR has several versions, namely 1, 2, and 3. FSR 1 (what the Steam Deck, RSR, and Hellblade offer) simply upscales each frame spatially. FSR 2 takes motion data from the game and reconstructs detail from multiple frames, so it has significantly higher quality. FSR 3 adds frame interpolation (frame generation) on top.

    When it comes to the game vs. the driver when using FSR 1, the main difference is that the game will render the UI natively over an upscaled image, while the driver will upscale the entire frame, UI included. Therefore the UI will look better with the game’s implementation.

    So it’s better to enable FSR in the game rather than in the driver, and fall back to the driver only if the game doesn’t implement FSR. Most games implement FSR 2, which is significantly better than what the driver can do, but even with FSR 1, the UI difference makes the game implementation preferable.
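
    A minimal sketch of the ordering difference described above (all function names here are hypothetical placeholders, not a real game or driver API):

    ```python
    # Toy model of why in-game FSR 1 keeps the UI sharper than driver RSR:
    # the order of upscaling vs. UI drawing differs. "Frames" are just dicts
    # recording what happened at which resolution.

    def render_scene(res):
        return {"ops": [f"render 3D scene @ {res[0]}x{res[1]}"]}

    def fsr1_upscale(frame, target):
        frame["ops"].append(f"FSR 1 spatial upscale -> {target[0]}x{target[1]}")
        return frame

    def draw_ui(frame, res):
        frame["ops"].append(f"draw UI @ {res[0]}x{res[1]}")
        return frame

    def in_game_fsr1(render_res, display_res):
        # Game upscales only the 3D scene, then composites the UI at native res.
        frame = fsr1_upscale(render_scene(render_res), display_res)
        return draw_ui(frame, display_res)

    def driver_rsr(render_res, display_res):
        # Game outputs a complete low-res frame (UI included); the driver upscales it all.
        frame = draw_ui(render_scene(render_res), render_res)
        return fsr1_upscale(frame, display_res)

    print(in_game_fsr1((1280, 720), (1920, 1080))["ops"])
    print(driver_rsr((1280, 720), (1920, 1080))["ops"])
    ```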

  • RealThanny@alien.topB

    Don’t use RSR and FSR at the same time.

    If FSR is available, use that instead of RSR, as it upscales only the rendered frame, leaving the UI elements at native resolution.

  • Lkr721993@alien.topB

    If a game has native FSR options, it is generally recommended to use those and disable super resolution. The game will handle the render resolution automatically based on the Quality/Balanced/etc. presets.

    Think of super resolution as FSR for games that don’t have FSR. There are no FSR options in the graphics menu for these games, so you have to trigger it by lowering the resolution in the settings, as you have done.

  • Cave_TP@alien.topB

    They’ve already given you answers on how FSR works, so I’ll try to help with the utilization question.

    My guess is that, since you’re pretty much lowering the resolution, your GPU would be able to render more frames but the CPU can’t keep up.

    In a situation like this, the better option would be to either run at native resolution or turn on VSR, set the resolution to 4K, and let the game render at 4K with FSR on Performance; that way you emulate Nvidia’s DLAA, which uses the upscaler as anti-aliasing.

    All this assuming the framerate is not limited by something like Vsync.
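
    For what it’s worth, a rough back-of-the-envelope check of that VSR idea (assuming a 1080p display and FSR Performance’s 50% per-axis scale):

    ```python
    # Rough arithmetic behind the "VSR + 4K + FSR Performance" suggestion.
    display_res = (1920, 1080)       # native monitor resolution
    vsr_target = (3840, 2160)        # game set to 4K via Virtual Super Resolution
    fsr_performance_scale = 0.5      # FSR Performance renders at 50% per axis

    internal_res = (int(vsr_target[0] * fsr_performance_scale),
                    int(vsr_target[1] * fsr_performance_scale))

    print(f"Internal render resolution: {internal_res[0]}x{internal_res[1]}")
    print(f"Matches native display:     {internal_res == display_res}")
    # The GPU still shades roughly a native-res frame, FSR reconstructs it to 4K,
    # and VSR scales that back down to 1080p, so the upscaler effectively acts
    # as anti-aliasing (similar in spirit to Nvidia's DLAA).
    ```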

  • Cryio@alien.topB

    You’re playing Hellblade, so here are some tips:

    If a UE4 game can be played in DX12, play it in DX12. With the fps unlocked, Hellblade will be 100% GPU-bound at all times in DX12, while in DX11 it will become CPU-bound, which is bad.

    As for upscaling, you can mod in FSR2 instead. It will look better than driver RSR or in-game FSR1 and runs just as well.

  • Afinda@alien.topB

    Alright, long story short:

    FSR2.x works differently from FSR1: it requires game-engine-level information (motion vectors and such) to not only upscale to the target resolution but also reconstruct detail and apply sharpening, whereas FSR1 doesn’t and is just a somewhat better spatial upscaling algorithm.

    To use FSR 2.x, the game needs to support it natively so it can provide that engine-level information.

    Typically FSR 2.x, like DLSS, is split into different quality levels. Each renders the original frame at a lower resolution than the target, which is then scaled back up to the target resolution and sharpened:

    • Quality - 67% (1280 x 720 -> 1920 x 1080)
    • Balanced - 59% (1129 x 635 -> 1920 x 1080)
    • Performance - 50% (960 x 540 -> 1920 x 1080)
    • Ultra Performance - 33% (640 x 360 -> 1920 x 1080)

    Source (AMD)
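
    As a quick sanity check of those numbers, the render resolutions follow directly from the per-axis scaling factors AMD documents for FSR 2 (a small sketch, nothing game-specific):

    ```python
    # Render resolution = target resolution divided by the FSR 2 scaling factor.
    target = (1920, 1080)
    presets = {
        "Quality":           1.5,   # ~67% per axis
        "Balanced":          1.7,   # ~59% per axis
        "Performance":       2.0,   # 50% per axis
        "Ultra Performance": 3.0,   # ~33% per axis
    }

    for name, factor in presets.items():
        w, h = round(target[0] / factor), round(target[1] / factor)
        print(f"{name:18s} -> {w}x{h}")   # matches the list above
    ```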

    So why do you see lower utilization, then?

    There are two things at play here:

    1. Upscaling and reconstruction are cheaper than rendering a native frame, but still a bit more expensive than simply rendering at the lower resolution and stopping there (cheaper/more expensive in terms of calculation time spent).
    2. With less work needed per frame, the GPU can push more frames, which means more draw calls have to come from the CPU; this can shift the bottleneck to the CPU if it can’t keep up with the now less strained GPU.

    Bonus: if you lock your framerate, the target FPS is reached, GPU utilization is low, and yet there’s no stutter: your GPU can easily handle what’s being thrown at it and doesn’t need to go the extra mile to keep up.
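
    A toy calculation illustrating both points, with made-up frame times (the real numbers depend entirely on the game and hardware):

    ```python
    # The slower of CPU and GPU (or the fps cap) dictates the frame time;
    # the GPU is only busy for its own share of that frame time.
    def gpu_utilization(cpu_ms, gpu_ms, fps_cap=None):
        frame_time = max(cpu_ms, gpu_ms)
        if fps_cap:
            frame_time = max(frame_time, 1000 / fps_cap)
        return gpu_ms / frame_time

    # Hypothetical numbers: the CPU needs 10 ms per frame either way.
    print(f"Native 1080p:  {gpu_utilization(10, 13):.0%} GPU busy")  # GPU-bound
    print(f"FSR enabled:   {gpu_utilization(10, 8):.0%} GPU busy")   # CPU-bound
    print(f"Capped 60 fps: {gpu_utilization(10, 8, fps_cap=60):.0%} GPU busy")
    ```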

  • gpolk@alien.topB

    You set FSR on, and then a level of it, usually called something like Balanced, Performance, Quality, etc. Sometimes it’s a % slider for the render scale. The game then does what you were seeing with your previous setup: it renders internally at a lower resolution, and within the production of each frame it is upscaled to 1080p and sharpened. You set the in-game resolution to your display resolution, and the FSR quality setting determines what the render resolution is. Sometimes there will also be a separate setting for sharpening.

    I wouldn’t use any other upscaling if you’re already using in-game FSR. The in-game version is also the best to use, as it’s implemented within the game and causes fewer issues with things like HUD elements.