[–] simple@lemm.ee 129 points 7 months ago

An article about Nvidia in the Linux community? Surely all the comments will be productive and discuss the topic at hand.

Clueless

[–] RandomLegend@lemmy.dbzer0.com 81 points 7 months ago (5 children)

Too little, too late.

Already sold my 3070 and went for a 7900 XT because I got fed up with Nvidia being lazy.

[–] morrowind@lemmy.ml 25 points 7 months ago (1 children)

Good. This is the better overall solution.

[–] RandomLegend@lemmy.dbzer0.com 11 points 7 months ago (1 children)

Well, I dearly miss CUDA since I can't get ZLUDA to work properly with Stable Diffusion, and FSR is still leagues behind DLSS... but yeah, overall I'm very happy.
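
(For reference: on a ROCm build of PyTorch, the AMD card shows up under the usual `cuda` device name, so a bare-bones Stable Diffusion run via the `diffusers` library looks roughly like this. Untested sketch; the checkpoint name is just an example.)

```python
# Minimal Stable Diffusion sketch for an AMD GPU. Assumes a ROCm build of
# PyTorch, which exposes the GPU through the "cuda" device name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example SD 1.x checkpoint
    torch_dtype=torch.float16,         # halves VRAM usage
).to("cuda")                           # on ROCm this is the AMD card

image = pipe("a lighthouse at dusk, oil painting").images[0]
image.save("out.png")
```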

[–] possiblylinux127@lemmy.zip 1 points 7 months ago (1 children)

You can run Ollama with AMD acceleration.
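
(Side note, hedged: once the ROCm build of Ollama is running, it serves the same local HTTP API as the CUDA build, so a quick smoke test is backend-agnostic. The model name below is just an example.)

```python
# Smoke-test a local Ollama server (ROCm or CUDA build, same API).
# Assumes Ollama is running on its default port with a model pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                # example model name
        "prompt": "Say hi in one word.",
        "stream": False,                  # single JSON reply, not chunks
    },
    timeout=120,
)
print(resp.json()["response"])
```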

[–] RandomLegend@lemmy.dbzer0.com 10 points 7 months ago (1 children)
  1. Yes, I know, but CUDA is faster.
  2. Ollama is for LLMs; Stable Diffusion is for images.
[–] possiblylinux127@lemmy.zip 3 points 7 months ago (1 children)

I'm aware. I just wanted to point out that AMD isn't totally useless for AI.

[–] RandomLegend@lemmy.dbzer0.com 2 points 7 months ago

Oh, it definitely isn't.

Everything I need runs, and I finally don't run out of VRAM so easily 😅

[–] CrabAndBroom@lemmy.ml 14 points 7 months ago

I had to upgrade my laptop about two years ago and decided to go full AMD, and it's been awesome. I've been running Wayland as a daily driver the whole time, and I don't even really notice it anymore.

[–] 30p87@feddit.de 9 points 7 months ago (1 children)

Even now, choosing between a free 4090 and a free 7900 XTX would be easy.

[–] RandomLegend@lemmy.dbzer0.com 10 points 7 months ago (2 children)

It totally depends on your use case.

Nvidia runs 100% rock solid on X11.

If you're someone who really uses CUDA and the rest of their stack and doesn't care about Wayland, Nvidia is the choice you have to make. Simple as that.

If you don't need those things, or you're willing to sacrifice the time to tinker with AMD's subpar alternatives, AMD is the way to go.

Because let's face it: AMD didn't care about machine learning, and they're only now beginning to dabble in it. That lost them a huge number of people who work with these things as their day job. Those people can't tell their bosses and/or clients that they can't work for a week or two while they figure out how to get an alternative running from a vendor that's only just starting to take that field seriously.

[–] 30p87@feddit.de 6 points 7 months ago

Luckily, the only place I'm going to use ML is my workstation server, which will have its Quadro M2000 replaced/complemented by my GTX 1070 once I have an AMD GPU in my main PC. On the main PC I mostly care about running games in 4K on Wayland, with high settings but without much ray tracing.

[–] kata1yst@sh.itjust.works -1 points 7 months ago (1 children)

These hypothetical people should use Google Colab or similar services for ML/AI, since it's far cheaper than owning a 4090 or an A100.

[–] RandomLegend@lemmy.dbzer0.com 4 points 7 months ago (1 children)

These absolutely not hypothetical people should absolutely NOT be using Google Colab.

Keep your data to yourself; don't run shit in the cloud that can be run offline.

[–] kata1yst@sh.itjust.works -1 points 7 months ago

Exactly what data are you worried about giving to Colab?

[–] Andrenikous@lemm.ee 7 points 7 months ago

If the day comes that I want to upgrade my 3080, I'll switch to an AMD solution, but until then I'll take any improvement I can get from Nvidia.

[–] Bulletdust@lemmy.ml -3 points 7 months ago

I don't believe Nvidia were the ones being lazy in this regard; they submitted the merge request for explicit sync quite some time ago now. The Wayland devs essentially took their sweet time merging the code.

[–] Catsrules@lemmy.ml 36 points 7 months ago (3 children)

Explicit Sync sounds like some kind of porn syncing program.

[–] PseudoSpock@lemmy.dbzer0.com 7 points 7 months ago

That's why it's better.

[–] eveninghere@beehaw.org 4 points 7 months ago

It's like Apple syncs videos with explicit lyrics in Apple Music when you play a song.

[–] merthyr1831@lemmy.world 1 points 7 months ago

And now Nvidia users can use it on Wayland! 🦀

[–] pmk@lemmy.sdf.org 29 points 7 months ago (2 children)

I will never buy anything with Nvidia again.

[–] cbarrick@lemmy.world 46 points 7 months ago (2 children)

Unfortunately, those of us doing scientific compute don't have a real alternative.

ROCm just isn't as widely supported as CUDA, and neither is Vulkan for GPGPU use cases.

AMD dropped the ball on GPGPU, and Nvidia is eating their lunch. Linux desktop users be damned.

[–] TropicalDingdong@lemmy.world 10 points 7 months ago (1 children)

Yep, yep, and yep.

And they've been eating their lunch for so long at this point that I've given up on that changing.

The new world is built on CUDA, and that's just the way it is. I don't really want an Nvidia card; Radeon seems far better for price to performance. Except I can justify an Nvidia card for work.

I can't justify a Radeon for work.

[–] cbarrick@lemmy.world 11 points 7 months ago (2 children)

Long term, I expect Vulkan to be the replacement for CUDA. ROCm isn't going anywhere...

We just need fundamental Vulkan libraries to be developed that can replace the CUDA equivalents.

  • cuFFT -> vkFFT (this definitely exists)
  • cuBLAS -> vkBLAS (is anyone working on this?)
  • cuDNN -> vkDNN (this definitely doesn't exist)

At that point, adding Vulkan support to XLA (Jax and TensorFlow) or ATen (PyTorch) wouldn't be that difficult.
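
(In the meantime, the practical portability story mostly rests on ROCm's HIP layer impersonating CUDA; for example, a ROCm build of PyTorch reuses the `cuda` device name, so the sketch below runs unchanged on both vendors. Hedged example, not a claim about any Vulkan backend.)

```python
# Device-agnostic PyTorch sketch: ROCm builds reuse the "cuda" device name,
# so the same code runs on Nvidia (CUDA) and AMD (ROCm) GPUs.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
y = x @ x  # dispatched to cuBLAS on Nvidia, rocBLAS (via HIP) on AMD
print(device, y.norm().item())
```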

[–] DarkenLM@kbin.social 18 points 7 months ago

wouldn't be that difficult.

The number of times I've said that, only to be quickly proven wrong by the fundamental forces of existence, is the reason that's going to be written on my tombstone.

[–] TropicalDingdong@lemmy.world 3 points 7 months ago (1 children)

I think it's just path stickiness at this point. CUDA works, and then you can ignore its existence and do the thing you actually care about. ML in the pre-CUDA days was painful; CUDA makes it not painful. Asking people to return to painful...

Good luck...

[–] cbarrick@lemmy.world 8 points 7 months ago* (last edited 7 months ago) (1 children)

Yeah, but I want both GPU compute and Wayland for my desktop.

[–] ManniSturgis@lemmy.zip 2 points 7 months ago

Hybrid graphics. Works for me.
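
(For anyone curious what that looks like in practice: with Nvidia's PRIME render offload, the desktop stays on the integrated/AMD GPU and individual apps get pushed to the Nvidia card via two documented environment variables. Rough launcher sketch:)

```python
# Launch one app on the Nvidia GPU while the desktop runs on the other GPU.
# Uses Nvidia's documented PRIME render offload environment variables.
import os
import subprocess

env = dict(os.environ)
env["__NV_PRIME_RENDER_OFFLOAD"] = "1"       # enable render offload
env["__GLX_VENDOR_LIBRARY_NAME"] = "nvidia"  # select the Nvidia GLX vendor

subprocess.run(["glxinfo", "-B"], env=env)   # should now report the Nvidia GPU
```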

[–] urbanxs@lemmy.ml 4 points 7 months ago (1 children)

I find it eerily odd how AMD seems to almost intentionally stay out of Nvidia's way when it comes to CUDA and a couple of other things. I don't wish to speculate, but considering how AI is having a blowout while AMD is basically not even trying, it feels as if the Nvidia CEO being cousins with AMD's CEO has something to do with it. Maybe I'm reading too much into it, but there's something going on. Why would AMD leave so much money on the table?

Bubbles tend to pop sometimes.

[–] EccTM@lemmy.ml 16 points 7 months ago (1 children)

That's great.

I'd still like my Nvidia card to work, so I'm happy about this, and when AMD on Linux eventually starts swapping over to explicit sync, I'll be happy for those users then too.

[–] possiblylinux127@lemmy.zip -1 points 7 months ago (1 children)

AMD on Linux doesn't need explicit sync.

[–] DumbAceDragon@sh.itjust.works 3 points 7 months ago

Cool. It should still use it, though, if for nothing else than the parallelization improvements it allows.

If we stuck with the "it works fine, so I'm not moving away from it" approach, we'd all still be on X11. Nvidia sucks and they should be more of a team player, but I think they were right to push for explicit sync over implicit. We should've been doing this from the beginning on Wayland.

[–] umbrella@lemmy.ml 25 points 7 months ago

Hey look, the yearly "Nvidia is finally fixing Wayland support" post!

[–] Bulletdust@lemmy.ml 10 points 7 months ago* (last edited 7 months ago)

Now all they need is a complete nvidia-settings application under Wayland that allows Coolbits to be set, and I may be able to use Wayland. For some reason, my RTX 2070S boosts far higher than its already-overclocked-from-factory boost clocks, resulting in random crashing; I have to use GWE to limit the boost clocks to OEM specs to prevent the crashes.

Strangely enough, this was never a problem under Windows.
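
(If GWE ever stops working under Wayland, the same clamp can apparently be done with plain nvidia-smi. Hypothetical sketch; the clock value is a placeholder you'd replace with your card's actual factory boost.)

```python
# Hypothetical workaround: pin GPU clocks to a factory range with nvidia-smi
# instead of GWE. Needs root; supported on roughly Volta-class cards and newer.
import subprocess

FACTORY_MAX_MHZ = 1905  # placeholder: use your card's real boost clock

subprocess.run(
    ["nvidia-smi", "--lock-gpu-clocks", f"0,{FACTORY_MAX_MHZ}"],
    check=True,
)
# Undo later with: nvidia-smi --reset-gpu-clocks
```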

[–] Socsa@sh.itjust.works 9 points 7 months ago

Will this make my terminal faster?

[–] lemmyvore@feddit.nl 3 points 7 months ago (1 children)

It will not, though. Explicit sync is not a magic solution; it's just another way of syncing GPU work. Unlike implicit sync, it needs to be implemented by every part of the graphics stack. Nvidia implementing it alone won't solve compositors not having it, graphics libraries not having it, apps not supporting it, and so on and so forth. It's a step in the right direction, but it won't fix everything overnight like some people think.

Also, it's silly that this piece frames it as "Wayland and Nvidia", because (1) Wayland itself doesn't implement sync of any kind (they probably meant "the Wayland stack"), and (2) Nvidia is not the only driver that needs to implement explicit sync.
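
(To make the "every layer must cooperate" point concrete, here's a toy Python model, emphatically not real Wayland/DRM code: with explicit sync, the producer hands a fence over alongside the buffer, and a consumer that doesn't understand fences can display a half-rendered frame, which is the classic Nvidia glitch.)

```python
# Toy model of explicit sync (not real Wayland/DRM code).
import threading

class Fence:
    """Stand-in for a DRM syncobj timeline point."""
    def __init__(self):
        self._done = threading.Event()

    def signal(self):
        # "GPU finished writing the buffer"
        self._done.set()

    def wait(self, timeout=None):
        return self._done.wait(timeout)

def client_render(buffer, fence):
    # ... submit GPU work targeting `buffer` ...
    fence.signal()  # explicitly announce completion

def compositor_present(buffer, fence):
    # A compositor that skips this wait can show a half-rendered buffer
    # (the classic Nvidia-on-Wayland glitch).
    fence.wait()
    # ... sample `buffer` and put it on screen ...

buf, fence = object(), Fence()
threading.Thread(target=client_render, args=(buf, fence)).start()
compositor_present(buf, fence)
```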

[–] visor841@lemmy.world 67 points 7 months ago (1 children)

will not solve issues with compositors not having it

Many compositors already have patches for explicit sync, which should get merged fairly quickly.

graphical libraries not having it

Both Vulkan and OpenGL have support for explicit sync.

apps not supporting it

Apps don't need to support it; they just need to use Vulkan or OpenGL, which will handle it.

Wayland doesn't implement sync of any kind, they probably meant to say "the Wayland stack"

Wayland has a protocol specifically for explicit sync; it's as much a part of Wayland as pretty much anything else that's part of Wayland.

Nvidia is not the only driver that needs to implement explicit sync.

Mesa has already merged explicit sync support.

[–] Hadriscus@lemm.ee 23 points 7 months ago

So basically every single statement was incorrect? lol