Don’t use RSR and FSR at the same time.
If FSR is available, use that instead of RSR, as it upscales only the rendered frame, leaving the UI elements at native resolution.
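To illustrate why that matters, here's a minimal sketch (every helper name is a hypothetical stand-in, not a real graphics API) of where the upscale happens relative to UI drawing in each case:

```python
# Hypothetical stand-ins just to show the ordering; not a real rendering API.
def render_scene(res):  return f"3D frame @ {res}"
def draw_ui(res):       return f"UI @ {res}"
def upscale(image, res): return f"upscaled({image}) -> {res}"
def composite(frame, ui): return f"{frame} + {ui}"

RENDER_RES, NATIVE_RES = "1440p", "4K"

# FSR (in-game): the 3D frame is upscaled *before* the UI is drawn on top,
# so the UI is rendered at native resolution and stays sharp.
fsr_frame = composite(upscale(render_scene(RENDER_RES), NATIVE_RES),
                      draw_ui(NATIVE_RES))

# RSR (driver-level): the driver upscales the finished frame, UI and all,
# so the UI gets scaled up from the lower render resolution.
rsr_frame = upscale(composite(render_scene(RENDER_RES), draw_ui(RENDER_RES)),
                    NATIVE_RES)

print(fsr_frame)
print(rsr_frame)
```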
X3D has no issues with cooling. I wish that myth would die. The extra cache is over the existing L3 cache, not the cores. The cores have exactly the same amount of inert material above them as with the non-X3D chips.
The chips are lower power solely because of the voltage cap, and any additional binning that said cap allows.
The target market for those chips buys almost exclusively from large OEM system builders, so that’s where all the supply went.
And only four boards so far. What’s actually stranger to me is that no WRX90 boards are known just yet. Everyone knew TR Pro was coming out, but nobody knew if TR was going to be a thing anymore. Yet the TRX50 boards are coming out first (first one available seems to be the ASRock on 11/30).
It would probably have been better for AMD to push the release back a few weeks, so that boards, processors, and decent RAM were all readily available.
That’s kind of my point. Ampere was out, with known specs. RDNA 2 specs were leaked before AMD announced the cards. People who got this leaked information compared number A to number B without understanding that number A was manipulated by dishonest marketing. So they drew the wrong conclusions about performance, saying AMD would be lucky to match the 3070.
Which made it pretty amusing when every one of the first three RDNA 2 cards that AMD released was faster than the 3070, from the 6800 to the 6900 XT.
It wasn’t about bus width, it was about nVidia’s fictitious CUDA core counts with Ampere.
At the last minute, the 4352 CUDA cores of the 3080 (the same count as the 2080 Ti) were changed to 8704 “CUDA cores”, because each INT32 ALU was replaced with a dual-function INT32/FP32 ALU. People who didn’t understand that (i.e. basically everyone who didn’t call out nVidia’s dishonesty in marketing those figures) thought, from the leaks, that it’d be 8704 shaders against 4608 shaders. It wasn’t. It was more like ~5200-5400 effective shaders, depending on resolution, against 4608, with the latter running at a substantially higher clock speed.
Ironically, the reverse happened with RDNA 3, as the leaked values were misleading: they said 12,288 ALUs for Navi 31, without mentioning that it was really 6144 FP32 ALUs plus 6144 INT32/FP32 ALUs that could only be partially used. So people thought it was 12,288 for Navi 31 versus 16,384 for the 4090, with those numbers meaning the same thing they did with 5120 for the 6900 XT versus 10,496 for the 3090. But they didn’t mean the same thing at all. It was ~7400 effective shaders for Navi 31 versus ~10,240 effective shaders for the 4090, with no real clock speed advantage.
As it turns out, though, the 4090 scales pretty poorly, so it’s not as far ahead of the 7900 XTX as it should be based on raw compute.
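One way to read those “effective shader” figures is as the dedicated FP32 ALUs counted in full, plus whatever fraction of the dual-function/dual-issue ALUs actually ends up doing FP32 work. The utilization factors below are assumptions chosen to land near the rough estimates above, not measured values:

```python
# Toy model of "effective" FP32 shader counts for GPUs whose marketed core
# count includes dual-function (INT32/FP32) or dual-issue ALUs.
# The utilization figures are assumptions, not measurements.

def effective_fp32(dedicated_fp32, dual_function, dual_utilization):
    """Dedicated ALUs count fully; dual-function ALUs only count for the
    fraction of cycles they can actually spend on FP32 work."""
    return dedicated_fp32 + dual_function * dual_utilization

# RTX 3080: 8704 marketed "CUDA cores" = 4352 FP32 + 4352 INT32/FP32
print(effective_fp32(4352, 4352, 0.22))   # ~5310, inside the ~5200-5400 range

# RTX 4090: 16384 marketed cores = 8192 FP32 + 8192 INT32/FP32
print(effective_fp32(8192, 8192, 0.25))   # 10240

# Navi 31 (7900 XTX): 12288 leaked ALUs = 6144 FP32 + 6144 dual-issue
print(effective_fp32(6144, 6144, 0.20))   # ~7370, roughly the ~7400 cited
```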
Why on earth do you think it was about just one strategy?
It allows them to increase their addressable market, develop new products to reach even more markets, and diversify their product portfolio further, making it easier to be a one-stop shop for large customers.
You don’t make a purchase that big for just one reason.
The purpose is to see how the processors aged over the five years since their release. Testing games that span those five years, including titles released this year (which you neglected to list, given your obvious contrarian agenda), is the obvious way to do that.
It was, though I don’t know how long the original was up. It appeared in my RSS feed, but the video was removed by the time I tried to watch it. Either it’s the same video, and the original release was a mistake for timing reasons, or they had to make an edit to remove some kind of mistake or encoding SNAFU.
Get better cables and stop jumping to conclusions about what’s causing your problems.
In Windows, access to the GPU by multiple processes is scheduled, much like access to the CPU is. Without HAGS, all of that scheduling is performed by the CPU. With HAGS, some of it can, in particular situations, be offloaded to the GPU.
It has nothing whatsoever to do with how work on the GPU is scheduled across the compute resources.
It will have no measurable impact on gaming performance, and it’s not supposed to, if done correctly. It’s only going to potentially affect performance when multiple applications are trying to use the GPU at the same time to a significant degree.
So if you’re trying to play a game at the same time as you’re using your GPU to render something, then HAGS might slightly reduce the overhead of both tasks sharing the GPU. You still won’t actually notice a difference.
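If you want to confirm whether HAGS is actually on, the commonly cited place to look is the HwSchMode registry value under the GraphicsDrivers key (assumption: 2 = on, 1 = off, absent = OS/driver default). A small Python sketch:

```python
import winreg  # Windows-only standard library module

# Assumption: HAGS state is reflected by the HwSchMode DWORD here,
# with 2 = enabled, 1 = disabled, and no value = OS/driver default.
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

def hags_state():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "HwSchMode")
    except FileNotFoundError:
        return "not set (OS/driver default)"
    return {1: "off", 2: "on"}.get(value, f"unknown ({value})")

if __name__ == "__main__":
    print("HAGS:", hags_state())
```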