A video for any doubters that Linux gaming is better than Windows, in which it DESTROYS Windows by 25% in AC Odyssey. To put it in perspective, a 25% improvement is like getting a new GPU. You can save $600 and instead use something like openSUSE Tumbleweed for free.
DISCLAIMER: I don’t really care to make Linux look better, but I did a video some days ago and EVERYONE (on Reddit) told me Linux gaming CANNOT be faster or smoother. This is the proof that it’s both, and more videos will be coming.
I’m sorry, but if you see a 25% difference in a benchmark, that means your methodology is somehow flawed. A few percentage points in either direction would be believable, but a difference this large would be so comical if true that extra wariness is needed.
There are a few things that look a bit off to me, but most importantly, your OBS settings seem wildly different between systems. It’s a bit hard to make out, but it looks like you’re doing CPU-based encoding on Linux and GPU-based encoding on Windows.
It was an easy 25%-plus gain for me. Apex Legends, Win 10: 1080p upscaled to 1440p, avg 93 FPS
vs. Apex Legends, Pop!_OS: 1440p, avg 121 FPS
That’s a lot better than 25% when you factor in the resolution difference.
But yeah, Windows is a massive resource hog.
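For anyone wanting to check the math on the numbers above, here's a quick sketch. The FPS figures are the ones quoted; the pixel counts assume a standard 1920x1080 internal render upscaled to 2560x1440 on Windows versus 2560x1440 on Linux:

```python
# FPS figures quoted above
win_fps = 93     # Windows 10, 1080p upscaled to 1440p
linux_fps = 121  # Pop!_OS, 1440p

fps_gain = (linux_fps - win_fps) / win_fps * 100
print(f"Raw FPS gain: {fps_gain:.1f}%")  # ~30.1%, already above 25%

# The Linux run also renders more pixels per frame (assumed resolutions):
pixels_1080 = 1920 * 1080
pixels_1440 = 2560 * 1440
print(f"Pixel ratio 1440p/1080p: {pixels_1440 / pixels_1080:.2f}x")  # ~1.78x
```

So the raw gain is about 30%, and the Linux run is doing it while rendering roughly 1.78x the pixels per frame, which is why it's "a lot better than 25%" once resolution is factored in.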
I am not doing CPU encoding on either system, but there is indeed a difference.
Linux is GStreamer VAAPI H.265 and Windows is GPU encoding H.264. In fact, Windows should have had an easier time encoding; I didn’t realize that until now. Also, as I have commented on the video, the game is on a 980 Pro on Windows and on an HDD on Linux, so Linux could be even faster. I will rectify that by getting an SSD to put all my games on in the future.
Beyond that, the methodology is not flawed, if you can even believe that. Everything is in the video for comments exactly like this one.
I see. As I said, it was a bit hard to make out in the video.
Granted, I don’t know too much about AMD’s video-encoding solutions, but from a cursory glance at the internet, it seems their H.264 encoder is quite bad compared to their H.265 one. Given that the game is GPU-bottlenecked and your CPU isn’t stressed at all anyway, I’d recommend recording these tests using the CPU to eliminate more variables.
Well, yeah. As much as I’d like to believe it, these differences are way too big for me to do that, even with everything you’ve shown in the video. Occam’s Razor suggests it’s much more likely that the benchmark/setup is simply flawed in some way than that multiple teams of OS, hardware, and game developers all missed a gigantic 25% performance improvement left on the table that’s somehow more or less “accidentally” fixed just by using Linux/Proton/DXVK.
Not saying you’re wrong, but it’d take a good chunk more evidence for me to believe it.
More videos like this will be coming. I can’t send you my PC so you can check its innards yourself, sorry.
I just edited my comment right as you posted, so I’ll put it as a separate comment now:
It would also be interesting to see this game running through DXVK on Windows. That way, the calls made to the GPU should be virtually identical, eliminating possible problems with DX11 in the AMD driver.