
  • Bazzite. Here’s why:

    • Optimised for gaming (gaming-optimised kernel, common tweaks pre-applied, and all the common gaming apps like Steam, MangoHud etc pre-installed)
    • All necessary drivers pre-installed (game controllers, RGB, and even proprietary nVidia)
    • A Steam Deck-like gaming experience, if you want (the Deck variant boots directly into Steam)
    • Immutable and atomic (image-based OS updates either apply fully or not at all, so there’s no chance of a broken state)
    • Easy rollbacks (just select the previous image in the GRUB menu)

    But since you said:

    how to squeeze the best performance out of this

    and if you’re really serious about squeezing out the best performance, then check out the Arch-based CachyOS. Unlike most other Linux distros, Cachy ships optimised x86-64-v3 and v4 packages in its repos, which means apps can make use of newer CPU instructions such as AVX2 and AVX-512. Most other distros, on the other hand, still target the baseline x86-64-v1 for compatibility reasons, which unfortunately means you’d be missing out on all the optimised CPU instructions introduced over the past 16 years.

    You can read more about microarchitecture levels (aka MARCH) here: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels
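
    If you’re curious which level your own CPU supports, here’s a rough Python sketch (not exhaustive - the official levels check a few more flags, see the Wikipedia table above):

    ```python
    # Rough sketch: infer the highest x86-64 microarchitecture level from
    # /proc/cpuinfo flags on Linux. Not exhaustive - the spec checks a few
    # more bits; see the Wikipedia table for the full list.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break

    V2 = {"cx16", "lahf_lm", "popcnt", "sse4_1", "sse4_2", "ssse3"}        # x86-64-v2
    V3 = V2 | {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe"}      # x86-64-v3
    V4 = V3 | {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"}  # x86-64-v4

    level = "v1 (baseline)"
    for name, req in (("v2", V2), ("v3", V3), ("v4", V4)):
        if req <= flags:
            level = name
    print("Highest supported x86-64 level:", level)
    ```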

    In addition to the MARCH, Cachy’s packages have other optimisations such as LTO/PGO builds and an optimised kernel with the BORE and Rusty schedulers, which are better for gaming, plus several performance-oriented tweaks which you’d otherwise have to apply manually on Arch (such as the makepkg.conf tweaks sketched below, pacman.conf tweaks etc).
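
    For context, the manual version of that MARCH tweak on vanilla Arch is editing /etc/makepkg.conf along these lines (just a sketch with illustrative values - Cachy ships its own tuned defaults, so you don’t have to):

    ```
    # /etc/makepkg.conf (vanilla Arch, tuned by hand - Cachy pre-applies this sort of thing)
    CFLAGS="-march=x86-64-v3 -O2 -pipe -fno-plt"   # or -march=native to tune for this box only
    CXXFLAGS="$CFLAGS"
    LTOFLAGS="-flto=auto"                          # link-time optimisation
    MAKEFLAGS="-j$(nproc)"                         # parallel builds
    ```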

    Finally, Cachy are always on the bleeding edge when it comes to gaming/driver/kernel/performance related stuff, so you’ll get all the good stuff even before Bazzite or other optimised distros. For instance, Cachy was the first distro to include the new nVidia driver which has explicit sync support for better Wayland compatibility, and they’re always on top of major Arch developments and provide detailed announcements which are relevant to gamers and performance freaks.

    E.g., here’s their recent nVidia announcement:

    Hi @here,

    as you may have noticed, we have rolled out the new NVIDIA driver, which includes the explicit sync protocol and tearing for Vulkan. We prioritized moving this forward to finally resolve the Wayland situation. Additionally, Arch has pushed CUDA to 12.5, which is NOT compatible with the current 550 driver (it needs the 555 driver).

    The beta driver is not perfect, but so far we are applying some fixes to avoid issues and to work around performance problems by disabling the GSP firmware load. This is handled via the “cachyos-settings” package.

    Anyways, since some people may have problems with this driver, here are short instructions to manually downgrade and block the driver:

    […]

    If you are facing issues with the new NVIDIA driver, reproduce the issue, then run “sudo nvidia-bug-report.sh” and report it on their forum: https://forums.developer.nvidia.com/c/gpu-graphics/linux/148

    We are also now shipping a precompiled nvidia-open module. This will also be installed by default for users with supported cards, as soon as NVIDIA releases the 560 drivers.

    The CachyOS Team

    So as you can see, they’re pretty on to it with this sorta stuff.

    Now, the Bazzite team also keep on top of this stuff like the Cachy guys do, but because they’re based on Fedora, they can’t be as bleeding-edge or as optimised as Arch. So it’s up to you: if you prefer stability, primarily gaming-focused optimisations, and something that “just works”, then get Bazzite; if you want an ultra-optimised distro that squeezes the most performance out of your box, and don’t mind occasionally diving into the terminal and getting your hands dirty, then get CachyOS.

    cc: @01189998819991197253@infosec.pub



  • Before y’all get excited, the press release doesn’t actually mention the term “open source” anywhere.

    Winamp will open up its code for the player used on Windows, enabling the entire community to participate in its development. This is an invitation to global collaboration, where developers worldwide can contribute their expertise, ideas, and passion to help this iconic software evolve.

    This, to me, reads like it’s going to be a “source available” model, perhaps with contributions accepted under some sort of Contributor License Agreement (CLA). So, best to hold off on any celebrations until we see the actual license.




  • You cannot go back after trying it

    I did! Used to have a Samsung 49" ultrawide. After using it for a couple of years, I sold it and got a 16:10 32" QHD, which I found worked better for me (+ one or two laptop screens for chat / random stuff when I’m doing serious work).

    The biggest issue I had with the ultrawide was that most of the games I played weren’t optimised for it - things like the mini-map might sit at the far end of the screen, or worse, with older games you’d have to put up with black bars or play in windowed mode.




  • I’m not sure who this Chris Titus is, but I can’t believe there’s no mention of Bazzite in that infographic, which is surprising because it’s arguably the best distro for gaming right now (and a pretty decent newbie-friendly distro too). It’s also surprising that there’s no mention of CachyOS, which is overall the best-performing easy-to-install Linux distro right now (although since it’s based on Arch, I wouldn’t recommend it for newbies).

    So if I were you, I wouldn’t put too much faith in their video when they’ve left out these two (and several other cool distros such as Bluefin, SecureBlue, AntiX etc).

    In saying that, nVidia on Linux sucks in general, so I second @ulkesk@beehaw.org’s suggestion and recommend getting an AMD card instead - it’s so much nicer and hassle-free, not having to deal with any proprietary driver BS, and having a smooth Wayland experience.



  • Since you’re on Linux, it’s just a matter of installing the right packages from your distro’s package manager. There are lots of articles on the web - just google your app + “ROCm”. The main thing you’ve got to keep in mind is version dependencies: ROCm 6.0/6.1 came out recently, and some programs may not have been updated for it yet. So if your distro packages the most recent version, your app might not support it yet.

    This is why many ML apps also come as a Docker image with specific versions of libraries bundled with them - so that could be an easier option for you, instead of manually hunting around for various package dependencies.

    Also, chances are that your app may not even know/care about ROCm if it just uses a library like PyTorch / TensorFlow etc. So just check its requirements first.
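
    If you want a quick sanity check that your PyTorch build is actually a ROCm one and can see your GPU, something like this works (on ROCm builds, PyTorch reuses the torch.cuda API):

    ```python
    # Minimal check: is this PyTorch built against ROCm, and is the GPU visible?
    import torch

    print("PyTorch:", torch.__version__)
    print("HIP/ROCm version:", torch.version.hip)    # None on CUDA/CPU-only builds
    print("GPU visible:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
    ```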

    As for AMD vs nVidia in general, there are mainly a few areas where AMD has lagged behind: ray tracing, compute, and supersampling.

    • For ray tracing, there have been improvements in performance with the RDNA3 cards, but AMD still lags behind by a generation. For instance, the latest 7900 XTX’s ray-tracing performance is equivalent to the 3080’s.

    • Compute is catching up, as I mentioned earlier, and in some cases the performance may even match nVidia’s. This is very application/library specific though, so you’ll need to look it up.

    • Supersampling is a bit of a weird one. AMD has FSR, and it does a good job in general. In some cases it may even perform better, since it uses much simpler calculations, as opposed to nVidia’s deep-learning technique. FSR can in fact be used with any card, as long as the game supports it. And therein lies the catch: only around a third of the games out there support it, and even fewer support the latest FSR 3. There are mods which can enable FSR in other games (check Nexus Mods) that you might be able to use. In any case, FSR/DLSS isn’t a critical thing unless you’re gaming on a 4K+ monitor.

    You can check out Tom’s Hardware’s GPU Hierarchy for the exact numbers - scroll down about halfway to read about the ray tracing and FSR situation.

    So yes, AMD does lag behind nVidia, but whether this impacts you really depends on your needs and use cases. If you’re a Linux user though, getting an AMD is a no-brainer - it just works so much better, as in, no proprietary driver headaches, no update woes, excellent Wayland support etc.



  • And this is one of the reasons why I don’t like 'em. They’re way too overengineered, IMO. Which is weird because so many mk enthusiasts prefer minimal setups. In my case for instance, I just have a braided Type-C cable running straight from my board to the back of my desk. Just a simple, straight line. Easy to connect/disconnect/clean/maintain/replace. Minimal. I personally don’t see why/how an aviator cable could improve either the aesthetics or the functionality. In fact, I can only think of downsides.


  • It’s not “optimistic”, it’s actually happening. Don’t forget that GPU compute is a pretty vast field, and not every field/application has a hard-coded dependency on CUDA/nVidia.

    For instance, both TensorFlow and PyTorch work fine with ROCm 6.0+ now, and this enables a lot of ML tasks such as running LLMs like Llama 2. Stable Diffusion also works fine - I tested 2.1 a while back and performance was great on my Arch + 7800 XT setup. There are plenty more such examples where AMD is already a viable option. And don’t forget ZLUDA, which continues to be improved.

    I mean, look at the ZLUDA benchmarks from Feb - not bad at all.

    And ZLUDA has had many improvements since then, so this will only get better.

    Of course, whether all this makes an actual dent in nVidia’s compute market share is a completely different story (thanks to enterprise $$$ + existing hardware that’s already out there), but the point is that, at least for many people and projects, ROCm is already a viable alternative to CUDA in many scenarios. And this will only improve with time. Just within the last 6 months, for instance, there have been vast improvements in both ROCm itself (like the 6.0 release) and in compatibility with major projects (like PyTorch). 6.1 was released only a few weeks ago with improved SD performance, a new video decode component (rocDecode), much faster matrix calculations with the new EigenSolver etc. It’s a very exciting space to be in, to be honest.

    So you’d have to be blind not to notice the rapid changes that are happening. And yes, right now it’s still very, very early days for AMD - they’ve got a lot of catching up to do, and there’s a lot of scope for improvement. But it’s happening for sure; AMD and the community aren’t sitting idle.





  • Considering that predicting the next word from context is the one thing LLMs are really good at, I just don’t understand how none of these developments have found their way into predictive keyboards.

    The problem is that LLMs require a considerable amount of computing power to run, unlike the simple Markov-chain predictions that keyboards use (toy sketch below). You could use a cloud-based service like ChatGPT or something, but most people wouldn’t want their keyboards sending all their keystrokes to a remote server… and even if they didn’t know or care, the response time wouldn’t be good enough for real-time predictions.
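
    To make the contrast concrete, here’s a toy sketch of the kind of Markov-style prediction keyboards do today (hypothetical mini-corpus, obviously) - it’s basically a word-frequency table lookup, which is why it runs instantly on a phone:

    ```python
    # Toy next-word predictor: count which word most often follows each word.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ran off the mat".split()

    model = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        model[prev][nxt] += 1

    def predict(word, k=3):
        """Top-k most likely next words after `word`."""
        return [w for w, _ in model[word].most_common(k)]

    print(predict("the"))  # -> ['cat', 'mat']
    ```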

    Now, smartphone SoC makers like Qualcomm have started adding NPUs (neural processing units) to their latest chips (such as the SD8 Gen 3, featured in the most recent flagship phones), but it’s going to take a while before devices with NPUs become commonplace, and a while more for developers to start making/updating apps that can make use of them.

    But yeah, the good news is that it is coming - it’s only a matter of “when”. I suspect it won’t be long before the likes of SwiftKey start to take advantage of this.