I don’t really understand the tone. The guy seems to understand roughly how the rendering pipeline works and the pros and cons of each solution. In walking through that, he more or less answers his own question of why AMD decided to go with this approach, so I don’t understand why he sounds so dumbfounded by it.
I am not here to defend AMD, but working in the industry I can say there are plenty of reasons why AMD did what they did: competitive, managerial, technical, budgetary and, of course, developer relations.
For one, it takes a lot less time to address this by developing a library that hooks into every game possible and makes it faster, without ever needing to talk to the developer. Is it a hack? Of course it is, and they know it. But it may have been necessary for AMD to give a quick and cheap answer to players in a competitive market. Their answer does come with lots of caveats, but it probably achieves 80% of the quality for 20% of the effort compared to Nvidia. Enabling it by default is bad, I’ll grant, especially because it can break games. However, not all games are played competitively online or have anti-cheat software, so I also don’t understand the exclusive focus on CS and the like. Again, it should not have been the default, and it should come with an ALL-CAPS disclaimer that the feature can result in bans. Better yet, since AMD knows which games those are, they could have made it impossible to enable the DLL for them. On that point, I agree.
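To make “hooks into every game” concrete: once a helper DLL is sitting inside the game’s process, the usual trick is to patch the swap chain’s vtable so every Present() call goes through the vendor’s code first. Below is a minimal sketch of that pattern for D3D11/DXGI. The vtable slot is the documented DXGI layout, but the names and the placeholder where the real work would happen are my own illustration, not AMD’s actual implementation:

```cpp
#include <windows.h>
#include <dxgi.h>

// Signature of IDXGISwapChain::Present.
using PresentFn = HRESULT(STDMETHODCALLTYPE *)(IDXGISwapChain *, UINT, UINT);
static PresentFn g_original_present = nullptr;

static HRESULT STDMETHODCALLTYPE hooked_present(IDXGISwapChain *swap_chain,
                                                UINT sync_interval, UINT flags)
{
    // <-- a driver-side feature could grab the back buffer here,
    //     interpolate an extra frame, adjust latency, and so on.
    return g_original_present(swap_chain, sync_interval, flags);
}

static void hook_present(IDXGISwapChain *swap_chain)
{
    // In practice an injector creates a throwaway device + swap chain just to
    // read this vtable. Present sits at slot 8: IUnknown (0-2), IDXGIObject
    // (3-6), IDXGIDeviceSubObject (7), then Present (8).
    void **vtable = *reinterpret_cast<void ***>(swap_chain);
    DWORD old_protect;
    VirtualProtect(&vtable[8], sizeof(void *), PAGE_EXECUTE_READWRITE,
                   &old_protect);
    g_original_present = reinterpret_cast<PresentFn>(vtable[8]);
    vtable[8] = reinterpret_cast<void *>(&hooked_present);
    VirtualProtect(&vtable[8], sizeof(void *), old_protect, &old_protect);
}
```

Incidentally, this is exactly the pattern — a foreign DLL rewriting the game’s function tables — that anti-cheat software flags as hostile, which is why the ban risk exists in the first place.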
Back to the question. Gamers in general, and people doing these reviews, often downplay the magnitude of the work involved in creating a stable foundation. Let’s say all of a sudden a company has to engage with 100 game development studios about a “potentially new SDK prototype”. It’s not simply “hey, we developed this, use it”. It takes time to first build something that’s barely usable, to understand and get feedback on how it can integrate into developers’ workflows, and to ask developers to take on another dependency and another level of testing, which incurs costs both for AMD/Nvidia and for the developers. While all of that is happening, your boss is knocking on your door asking “why are we losing to the competitor?”. That applies as much to mid-level management as to game developers.
AMD using the described method (injecting DLLs into the games’ processes) is as terribly hacky as it is good as a short/mid-term plan. Of course, that doesn’t rule out talking with developers about a proper long-term solution in parallel.
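For completeness, getting such a DLL into a running game in the first place is a textbook pattern on Windows: allocate the DLL’s path inside the target process, then start a remote thread at LoadLibraryA. The sketch below is that generic technique under my own assumptions (the PID and DLL path are placeholders); an actual GPU vendor operates from far more privileged driver machinery, so treat this as the shape of the idea, not AMD’s code:

```cpp
#include <windows.h>
#include <cstring>

static bool inject_dll(DWORD pid, const char *dll_path)
{
    // Open the target with enough rights to allocate memory and start threads.
    HANDLE proc = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION |
                              PROCESS_VM_WRITE | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc) return false;

    // Copy the DLL's path into the target's address space.
    SIZE_T len = std::strlen(dll_path) + 1;
    void *remote = VirtualAllocEx(proc, nullptr, len,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!remote || !WriteProcessMemory(proc, remote, dll_path, len, nullptr)) {
        CloseHandle(proc);
        return false;
    }

    // kernel32.dll is mapped at the same base in every process, so
    // LoadLibraryA's address in *this* process is valid in the target too.
    auto entry = reinterpret_cast<LPTHREAD_START_ROUTINE>(
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA"));

    // The remote thread runs LoadLibraryA(dll_path); the DLL's DllMain then
    // executes inside the game and can install hooks like the Present one
    // sketched earlier.
    HANDLE thread =
        CreateRemoteThread(proc, nullptr, 0, entry, remote, 0, nullptr);
    if (!thread) { CloseHandle(proc); return false; }
    WaitForSingleObject(thread, INFINITE);

    CloseHandle(thread);
    CloseHandle(proc);
    return true;
}
```

The point of the sketch is that nothing in it requires a single line of cooperation from the game developer, which is exactly why it’s the fast, cheap option.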
At the end of the day I find it quite positive that we have at least two companies with two different strategies for the same problem. If anything, it means we learn from it and have more variety to choose from.