this post was submitted on 02 Apr 2024
52 points (90.6% liked)

Asklemmy

For anyone who knows.

Basically, it seems to me like the technology in mobile GPUs is crazier than in desktop/laptop GPUs. Desktop GPUs can obviously do things better graphically, but not by enough to justify being 100x bigger than a mobile GPU. And top-end mobile GPUs actually perform quite admirably when it comes to graphics and power.

So, considering that, why are desktop GPUs so huge and power hungry in comparison to mobile GPUs?

top 21 comments
[–] Corngood@lemmy.ml 55 points 7 months ago

They are actually not that much bigger or different from mobile or game console GPUs; they just have a lot of cooling bolted to them. The cooling allows them to sacrifice efficiency, to be more power-hungry and more powerful.

[–] Dudewitbow@lemmy.zip 41 points 7 months ago* (last edited 7 months ago) (1 children)

Because it's based on a curve. Laptops have maybe 85% of the performance of their desktop counterparts, because that last 15% of performance is not power efficient.

You are also disregarding one MAJOR factor when comparing desktop and laptop GPUs: noise.

Laptop GPUs, especially high-end ones, can sound like jet engines. Large desktop GPUs are large to minimize the noise they make.

E.g. the 7700S in my Framework 16 can sound like a jet engine, while its desktop equivalent, the 7600, is ridiculously power efficient and barely makes a noise because of its heatsink-to-die size ratio.
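
To put rough numbers on that curve: a common rule of thumb is that dynamic power scales with voltage squared times frequency, and voltage has to rise to sustain higher clocks, so power grows roughly with the cube of the clock. A toy sketch — the cube exponent and the percentages are illustrative assumptions, not measured figures:

```python
# Toy model: performance scales ~linearly with clock, but dynamic power
# scales ~ V^2 * f, and voltage must rise with frequency, so power grows
# roughly with the cube of the clock (a rule of thumb, not a measurement).
def relative_power(perf_fraction: float) -> float:
    """Fraction of peak power needed for a given fraction of peak performance."""
    return perf_fraction ** 3

for perf in (0.70, 0.85, 1.00):
    print(f"{perf:.0%} of peak performance -> {relative_power(perf):.0%} of peak power")

# 85% of the performance needs only ~61% of the power; the last 15% of
# performance costs the remaining ~39% -- the inefficient tail laptops skip.
```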

[–] gramathy@lemmy.ml 10 points 7 months ago* (last edited 7 months ago) (1 children)

Also, laptop GPUs tend to have less, or "worse", memory for a variety of reasons (lower-resolution screens mean less need for VRAM or processing power, lower-power GDDR, lower RAM clocks, etc.). That 85% number holds for more than just straight rendering throughput.

[–] Dudewitbow@lemmy.zip 2 points 7 months ago (1 children)

I wouldn't necessarily say that; there are times OEMs double the RAM capacity compared to the typical value on laptops. It's just less common today than it used to be, because of the Nvidia tax.

Take, for example, Maxwell from over a decade ago: desktop 750 Tis were usually 2GB VRAM cards, or even 1GB. On mobile, the 860M/960M (the laptop equivalents) often had 4GB VRAM variants. Laptop RAM, though, will be clocked more conservatively.

[–] d3Xt3r@lemmy.nz -1 points 7 months ago (1 children)

Also, AMD APUs use your main RAM, and some systems even allow you to change the allocation - so you could allocate, say, 16GB for VRAM if you've got 32GB of RAM. There are also tools you can run to change the allocation, in case your BIOS doesn't have the option.

This means you can even run LLMs that require a large amount of VRAM, which is crazy if you think about it.
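
For the curious: on Linux, the amdgpu driver exposes the current split through sysfs. A minimal sketch for reading it — assuming a single AMD GPU/APU exposed as card0; the paths are the stock amdgpu sysfs files:

```python
# Minimal sketch: read how much memory the amdgpu driver sees as dedicated
# VRAM vs. GTT (system RAM usable by the GPU) on Linux.
# Assumes a single AMD APU/GPU exposed as card0.
from pathlib import Path

base = Path("/sys/class/drm/card0/device")

def read_bytes(name: str) -> int:
    """Read one of amdgpu's mem_info_* counters (values are in bytes)."""
    return int((base / name).read_text().strip())

vram = read_bytes("mem_info_vram_total")
gtt = read_bytes("mem_info_gtt_total")
print(f"Dedicated VRAM carve-out: {vram / 2**30:.1f} GiB")
print(f"GTT (shared system RAM):  {gtt / 2**30:.1f} GiB")
```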

[–] Blaster_M@lemmy.world 1 points 7 months ago* (last edited 7 months ago)

Problem is, system RAM doesn't have anywhere near the bandwidth that dedicated VRAM does. You can run an AI model, but the performance will be ~10x worse due to the bandwidth limits.
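
That factor is easy to ballpark from published peak figures — assuming dual-channel DDR5-5600 system RAM versus an RTX 4090's GDDR6X here; other configurations will differ:

```python
# Ballpark peak memory bandwidth, which is what usually bounds LLM
# inference speed. Both figures are published theoretical peaks.
ddr5_dual_channel = 5600e6 * 8 * 2  # 5600 MT/s * 8 bytes/channel * 2 channels = 89.6 GB/s
rtx_4090_gddr6x = 1008e9            # 21 Gbps * 384-bit bus / 8 = ~1008 GB/s

print(f"System RAM : {ddr5_dual_channel / 1e9:.1f} GB/s")
print(f"4090 VRAM  : {rtx_4090_gddr6x / 1e9:.0f} GB/s")
print(f"Ratio      : ~{rtx_4090_gddr6x / ddr5_dual_channel:.0f}x")
# ~11x -- in line with the "10x worse" estimate above.
```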

[–] teawrecks@sopuli.xyz 23 points 7 months ago* (last edited 7 months ago) (1 children)

Comparing actual physical chip size, a desktop GPU isn't 100x bigger than a mobile GPU, more in the range of 10x. What you're used to seeing is the large PCB to handle more I/O, plus the heat sink, fans, and plastic shroud. The heat sink is needed because, at the end of the day, a desktop GPU might be pulling 300W+ of power and that energy has to go somewhere. A phone GPU on the other hand is likely to max out somewhere around 5W of power, and a standard laptop might be around 15-30W, neither of which need nearly the surface area to dissipate the heat.

why are desktop GPUs so huge and power hungry in comparison to mobile GPUs?

Put simply, they're doing more calculations per unit of time. According to Wikipedia, an Adreno 750 (a high-end phone GPU) pushes ~5 TFLOPS (FP32), while an RTX 4090 can push 82.58 TFLOPS (FP32). That's 82.58 / 5 = 16.5x more operations per second: 16x the performance for 10x the chip size and ~100x the power. (Estimating cost is kinda difficult, but a 4090 is $1600 MSRP, while according to this article the cost of a Snapdragon 8 Gen 3, which has the Adreno as part of its SoC, is ~$200, so the graphics portion is probably worth at least half of that. That makes the cost also ~16x, which means relatively similar FLOPS per dollar, before accounting for power usage.)
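
The same back-of-envelope as a quick script — the figures are the ones quoted above; the $100 "graphics share" of the SoC price and the 450W draw are rough assumptions, not spec-sheet values:

```python
# Reproduce the back-of-envelope comparison from the comment above.
adreno_750_tflops, rtx_4090_tflops = 5.0, 82.58     # FP32, per the comment
adreno_750_watts, rtx_4090_watts = 5.0, 450.0       # rough power draw (assumed)
adreno_750_cost, rtx_4090_cost = 100.0, 1600.0      # ~half the $200 SoC vs. MSRP

print(f"Performance ratio: {rtx_4090_tflops / adreno_750_tflops:.1f}x")  # ~16.5x
print(f"Power ratio:       {rtx_4090_watts / adreno_750_watts:.0f}x")    # ~90x, same ballpark as "~100x"
print(f"TFLOPS per dollar: phone {adreno_750_tflops / adreno_750_cost:.3f}, "
      f"desktop {rtx_4090_tflops / rtx_4090_cost:.3f}")                  # nearly identical
```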

If your question is "how does 100x the power justify 16x the performance?", think of it like a 90hp economy car vs a 1000hp sports car. If you are ok with accelerating 0 to 60 over the course of a minute, you can do that very efficiently and minimize your gas usage. But if you need to go 0 to 60 in <3s, there's only one way that's going to happen, and that's absolutely DUMPING energy into that engine as fast as possible. It's going to generate a lot of wasted heat, it's going to get awful gas mileage, but it will go as fast as mechanically possible (with the engine technology we currently have). And that's what a 4090 is doing. It might not be the best performance per watt, but if you need the performance it's simply your only option.

If your question is actually "why do mobile games look so good relative to the best looking high-end AAA games?", that's called good art direction. With proper optimizations and shortcuts that make assumptions about time of day, camera angles, distance to objects, resolution, etc., you can render a pretty decent looking scene these days. But where it usually falls apart is dynamic lighting, because that requires more calculations per pixel. Notice you won't see many moving light sources, shadow casting, transitions between times of day, or advanced materials in mobile games. What you do see was carefully and deliberately placed where you are most likely to notice it, and shortcuts were taken in ways that you hopefully won't ever question.
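
A toy way to see why dynamic lights are the first thing cut: in a classic forward renderer, shading cost grows roughly with pixels times lights. With made-up but plausible resolutions and light counts:

```python
# Toy cost model for classic forward rendering: every dynamic light adds
# a per-pixel shading pass, so cost ~ pixels * lights (illustrative only).
def shading_cost(width: int, height: int, dynamic_lights: int) -> int:
    return width * height * dynamic_lights

mobile = shading_cost(1280, 720, 1)     # one carefully placed key light
desktop = shading_cost(2560, 1440, 8)   # several moving, shadow-casting lights
print(f"The desktop scene shades ~{desktop / mobile:.0f}x more light-pixels")
```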

Since the dawn of computer rendering, all of gaming, from low power to high, has been about taking shortcuts to make as good looking a scene as you can with the hardware you've got. And we've gotten pretty good at doing that, to the point that it's relatively difficult these days for the untrained eye to spot the difference.

[–] jet@hackertalks.com 3 points 7 months ago

Really well written and succinctly explained!

[–] Cypher@lemmy.world 14 points 7 months ago (3 children)

It's a little amusing how many respondents thought mobile GPUs meant laptop GPUs despite it being clear in your post.

There are several factors at play, from mobile GPUs being ARM-based and having unified memory, to the laws of physics meaning more size and power bring diminishing returns.

Phone GPUs are generally comparable to budget desktop GPUs on a per-generation comparison.

Despite this, mobile games tend to look amazing compared to what you would expect out of a PC game on low-end hardware.

Part of this is optimisation; part of it is more efficient graphics libraries targeting a much lower range of hardware. It's similar to how lower-spec consoles often have great-looking games: targeting only one hardware layout can allow for crazy optimisations.

See the PS3 era games for examples of really pushing hardware to its absolute limits for graphics.

Sadly my answer isn't as technically detailed as I'd like, but it's a complex topic when you really delve into it.

[–] Greyfoxsolid@lemmy.world 3 points 7 months ago

I appreciate the well thought out response!

[–] teawrecks@sopuli.xyz 1 points 7 months ago (1 children)

mobile GPUs being ARM based

Could you elaborate?

[–] Cypher@lemmy.world 3 points 7 months ago (1 children)

ARM is an instruction set similar to x86; however, it is more power efficient, for a number of reasons.

It doesn't help the confusion that ARM is a company that produces CPUs and GPUs, but you can find the ARM instruction set in use on a wide range of SoCs and other hardware.

It is popular for use cases where power efficiency is important.

For example, Apple uses the ARM instruction set for their M series, which are SoCs containing CPU, GPU, and memory.

SoC = System on a Chip.

[–] teawrecks@sopuli.xyz 3 points 7 months ago

I think you might be confusing the ARM instruction set with the ARM company. I don't have any insider knowledge, but I don't think the Mali GPU is based on the ARM instruction set.

[–] trolololol@lemmy.world 0 points 7 months ago (1 children)

To the best of my knowledge, the ARM company is not involved in making GPUs, and ARM CPUs don't influence the performance of GPUs. Board and system architecture might, though - such as unified memory, which could be part of the memory controller and physically co-exist with the CPU?

[–] Cypher@lemmy.world 1 points 7 months ago

ARM makes GPUs, which are primarily used in phones, and this isn't hard to look up.

The ARM instruction set, which is an alternative to x86, is another matter entirely.

[–] j4k3@lemmy.world 9 points 7 months ago* (last edited 7 months ago)

I have the largest laptop GPU from the last generation: a 3080 Ti with 16GB.

A laptop must have very small fans moving a lot of air. That means high speed, and speed is the primary cause of fan noise. Larger desktop GPUs can have a larger heatsink mass and more/larger fans that run at slower speeds.

Aside from the noise, a laptop has a more complicated set of breakouts and interrupts for thermal and battery management. This means it may have different firmware/software support and issues if you do higher-risk activities like messing with the clock rate.

One example I can give: when using AI on my laptop with various models, I am able to split inference between the CPU and GPU. Thermal performance will affect throttling, so I must balance both the workload and the thermals to maximize inference speed for things like LoRA (low-rank adaptation) training, where I need to run a model for a long time at near-maximum output for my hardware. If there is more load on either, the shared thermal management will throttle sooner. In fact, I wrote a script to monitor the CPU/GPU temps and memory usage every few seconds just to dial in this issue.
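
A minimal sketch of that kind of monitor loop — assuming an Nvidia GPU so `nvidia-smi` is on the PATH, and the `psutil` package for the CPU side; this is not the commenter's actual script:

```python
# Minimal sketch of a CPU/GPU thermal + memory monitor for tuning a
# shared-cooling laptop under sustained inference load.
# Assumes nvidia-smi is on PATH and psutil is installed (Linux sensors).
import subprocess
import time

import psutil

def gpu_stats() -> str:
    """Query GPU temperature and memory via nvidia-smi's CSV interface."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    temp, used, total = (x.strip() for x in out.stdout.split(","))
    return f"GPU {temp}C {used}/{total} MiB"

while True:
    temps = psutil.sensors_temperatures()
    cpu = max(t.current for ts in temps.values() for t in ts) if temps else float("nan")
    mem = psutil.virtual_memory()
    print(f"CPU {cpu:.0f}C RAM {mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB"
          f" | {gpu_stats()}")
    time.sleep(5)  # sample every few seconds; Ctrl+C to stop
```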

[–] HatchetHaro@lemmy.blahaj.zone 5 points 7 months ago

The main thing? Optimization. Mobile games are built for mobile, so naturally, graphical stuff like polygon-count, particle effects, texture resolutions, shadow quality, etc. are all toned down to be able to run smoothly on mobile hardware.

Couple that with the vastly smaller screen sizes and the diminishing returns of graphical power on visuals (e.g. the visual jump from low shadows to medium is bigger than from medium to high), and you're getting a fantastic mobile gaming experience for a tiny fraction of the power consumption.

[–] RedWeasel@lemmy.world 3 points 7 months ago

I just want to add: the Nvidia 4090 mobile GPU is the 4080 desktop chip, but at lower clocks and therefore better power efficiency. However, it has much lower performance than a desktop 4090, and lower than the desktop 4080 as well.

[–] Dexx1s@lemmy.world 3 points 7 months ago* (last edited 7 months ago)

Basically, it seems to me like the technology in mobile GPUs is crazier than desktop/laptop GPUs.

It's not. They have the same software technologies and the desktop counterparts have better hardware.

but not by enough that it seems to need to be 100x bigger than a mobile GPU.

Yes, it is. No benchmark would agree with you here. Also, just look at the power draw of each and how much noise each cooling solution makes.

And top end mobile GPUs actually perform quite admirably when it comes to graphics and power.

Depends entirely on what you see as admirable. Power-efficiency-wise, they're great, but their performance isn't anything to write home about, especially considering that they typically share cooling solutions with the CPU. And that's at the top of the line. Lower down, it's not all that great, with desktop counterparts having much better 1% lows when the power draw is more comparable.

So, with most of what you said being incorrect, your conclusion is also incorrect. Generally, more surface area on a cooler means it can handle higher power limits, fit bigger fans, and/or spin those fans slower so they're much quieter. Regarding power consumption, it's simply diminishing returns: mobile GPUs are just cut off sooner on the graph.

[–] aluminium@lemmy.world 1 points 7 months ago

I don't really see it. Like, I expected the iOS Resident Evil 4 remake to perform about the same as the PS4 version, but it's way worse.