And this is why I think Valve was very shrewd when it came to deciding what hardware to use. Not only is AMD better supported, but it feels like every update just keeps improving everything.
Doesn’t matter if it’s actually better on paper (I don’t know if it is or isn’t), because it feels like the value only improves.
They wouldn't use Nvidia because, driver issues aside, Nvidia doesn't have an x86 license, nor does it do semi-custom designs for clients.
Valve's only other option was basically Intel, which at the time didn't put enough emphasis on iGPU performance to give Valve a decent value/performance ratio.
Intel graphics have improved by leaps and bounds, but they're still problematic and less well supported than AMD's.
I imagine part of it (beyond general stuff like Intel trailing AMD in efficiency on both the CPU and GPU side, and needing a far larger die for the same performance, meaning a more expensive chip) is that Valve really didn't want Intel graphics issues being reported in reviews and forums as Proton/Linux issues.
On top of that, Intel straight up doesn’t have a custom semiconductor division. AMD does (predominantly for Xbox/PS, but they’re not the only ones).
Intel would either have had to set up an entirely new working group for Valve (expensive! Something Valve would've wanted to avoid, considering they had no idea whether the Deck would be a hit or not) or Valve would've had to go with an off-the-shelf Intel CPU.
It mostly improved after Tiger Lake, but when the Steam Deck's design was taping out, Intel was still far behind and realistically wasn't an option. It will be one down the line, given the AI boom has essentially made the iGPU a very important piece of hardware, but it wasn't when the original Deck was still on paper.
Unless Intel was going to give Valve a really good deal on Tiger Lake CPUs back in 2020, it wasn't going to happen.
The “AI boom” means that Intel is going to take die space from the GPU and give it to an NPU. That's how you get Windows 11®️ Copilot™️ certified.
Isn’t the Tegra X1 on the Switch modified for Nintendo?
No, because the Tegra X1 was a processor originally designed for the Nvidia Shield TV and Jetson developer boards. Companies like Nintendo (for the Switch) and Google (for the Pixel C tablet) used the Tegra X1 as an off-the-shelf chip, which is why all of the listed devices are susceptible to the RCM exploit: they're the same chip.
Semi-custom means key functionality gets added to the chip, beyond the stock OEM designs, that fundamentally makes it different. E.g., Valve got a Zen 2 + RDNA 2 iGPU instead of the off-the-shelf Zen 3 + RDNA 2 option. Sony, for example, has a memory accelerator on the PS5 that gives it faster data streaming than standard designs, and supposedly a custom compute block on the PS5 Pro for better resolution scaling and ray tracing than standard AMD designs.
Nvidia not doing semi-custom is the main reason Apple stopped using Nvidia in its iMac lines after the GTX 670 era in favor of AMD, and, for example, why Nvidia is very strict about the form factor its GPUs ship in (e.g., there's a reason smaller eGPUs barely exist for Nvidia GPUs, while the AMD option is more common despite consumers buying fewer of them).
Thanks for the info and examples!
“Better supported” is an understatement. AMD on Linux requires no handling of drivers whatsoever, so far as the user is concerned.
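For anyone curious what "no handling of drivers" looks like in practice: the amdgpu driver ships in the mainline kernel and Mesa provides the userspace side, so there's nothing to install. Here's a rough Python sketch (my own, just poking standard procfs/sysfs paths; the card0 index is an assumption and may differ on multi-GPU systems) that confirms the in-tree driver is what's running:

```python
#!/usr/bin/env python3
"""Quick sanity check that the in-kernel amdgpu driver is active.

Reads standard Linux procfs/sysfs interfaces only; nothing here is
Steam Deck specific. "card0" is an assumption and may differ on
multi-GPU systems.
"""
from pathlib import Path

def amdgpu_loaded() -> bool:
    # /proc/modules lists every loaded kernel module, one per line;
    # the first field is the module name.
    return any(line.split()[0] == "amdgpu"
               for line in Path("/proc/modules").read_text().splitlines()
               if line.strip())

def gpu_vendor(card: str = "card0") -> str:
    # The DRM subsystem exposes the GPU's PCI vendor ID in sysfs.
    vendor = Path(f"/sys/class/drm/{card}/device/vendor").read_text().strip()
    return {"0x1002": "AMD", "0x10de": "Nvidia", "0x8086": "Intel"}.get(vendor, vendor)

if __name__ == "__main__":
    print(f"amdgpu module loaded: {amdgpu_loaded()}")
    print(f"GPU vendor: {gpu_vendor()}")
```

On a stock AMD Linux install that prints True/AMD with zero driver setup; on Nvidia you'd typically have had to install the proprietary module yourself first.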