all programs are single threaded unless otherwise specified.
It’s safe to assume that any non-trivial program written in Go is multithreaded
But it's still not a guarantee
Definitely not a guarantee, bad devs will still write bad code (and junior devs might want to let their seniors handle concurrency).
And yet: You’ll still be limited to two simultaneous calls to your REST API because the default HTTP client was built in the dumbest way possible.
Really? Huh, TIL. I guess I've just never run into a situation where that was the bottleneck.
I absolutely love how easy Go makes multithreading and communication between threads. Easily one of the biggest selling points.
Key point: they're not threads, at least not in the traditional sense. That makes a huge difference under the hood.
Well, they're userspace threads. That's still concurrency, just like with kernel threads.
Also, it still uses kernel threads, just not for every single goroutine.
What I mean is, from a performance perspective they are very different. In a language like C, where (p)threads are kernel threads, creating a new thread is only marginally less expensive than creating a new process (on Linux; not sure about Windows). In comparison, creating a new 'user thread' in Go is exceedingly cheap. Creating tens of thousands of goroutines is feasible; creating tens of thousands of kernel threads is a problem.
This touches on the other major difference. There is zero connection between the number of goroutines a program spawns and the number of kernel threads it spawns. A program using kernel threads is relying on the kernel's scheduler, which adds a lot of complexity and non-determinism. But a Go program uses the same number of kernel threads (assuming the same hardware and you don't mess with GOMAXPROCS) regardless of the number of goroutines it uses, and the goroutines are cooperatively scheduled by the runtime instead of preemptively scheduled by the kernel.
Great details! I know the difference personally, but this is a really nice explanation for other readers.
About the last point, though: I'm not sure Go always uses the maximum number of kernel threads it is allowed to use. I read that it spawns one on blocking syscalls, but I can't confirm that. I could imagine it would make sense for it to spawn them lazily and then keep them around, to lessen the overhead of creating them in case they're needed again later, but that is speculation.
Edit: I dove a bit deeper. It seems that nowadays it spawns as many kernel threads as there are CPU cores, plus additional ones for blocking syscalls. https://go.dev/doc/go1.5 https://docs.google.com/document/u/0/d/1At2Ls5_fhJQ59kDK2DFVhFu3g5mATSXqqV5QrxinasI/mobilebasic
Does Python have the ability to specify loops that should be executed in parallel, the way MATLAB uses parfor instead of for?
Python has way too many ways to do that: asyncio, concurrent.futures, threading, multiprocessing...
Of the ways you listed, the only one that will actually take advantage of a multi-core CPU is multiprocessing.
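A minimal sketch of what that parfor-style pattern looks like, using only the standard library (the function work and its inputs are made up for illustration); ProcessPoolExecutor spreads the loop body across separate processes, so it isn't serialized by the GIL:

```python
# Sketch of a parfor-style parallel loop in Python.
# `work` is a hypothetical CPU-bound loop body; ProcessPoolExecutor runs it
# in separate processes (one worker per CPU core by default).
from concurrent.futures import ProcessPoolExecutor

def work(x):
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    inputs = range(10_000, 10_100)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(work, inputs))   # roughly: parfor x = ... end
    print(len(results), max(results))
```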
Yup, that's true. Most meaningful tasks are I/O-bound, so "parallel" basically means "whatever allows multiple threads of execution to keep going". If you're doing number crunching in Python without a proper library like pandas that can parallelize your calculations, you're doing it wrong.
I’ve used multiprocessing to squeeze more performance out of numpy and scipy. But yeah, resorting to multiprocessing is a sign that you should be dropping into something like Rust or a C variant.
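As a rough sketch of that NumPy-plus-multiprocessing pattern (not the commenter's actual code; the chunk count and the reduction are illustrative): split the array into chunks, reduce each chunk in a worker process, then combine the partial results.

```python
# Sketch: parallelizing a CPU-bound NumPy reduction across processes.
# Combining partial Euclidean norms works because
# norm(data) == sqrt(sum(norm(chunk)**2 for each chunk)).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def chunk_norm(chunk):
    return float(np.sqrt(np.sum(chunk * chunk)))

if __name__ == "__main__":
    data = np.random.rand(4_000_000)
    chunks = np.array_split(data, 8)          # chunks get pickled to the workers
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(chunk_norm, chunks))
    total = float(np.sqrt(sum(p * p for p in partials)))
    print(total, float(np.linalg.norm(data)))  # the two values should match
```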
I've always hated object-oriented multithreading. Goroutines (green threads) are just the best way 90% of the time. If I need to control where threads go, I'll write it in Rust.
Are you still using MATLAB? Why? Seriously.
No, I'm not at university anymore.
Good for you
Poor prof
We weren't doing any resource-intensive computations in MATLAB; we mainly used it for teaching FEM, since we had an extensive collection of scripts for that purpose, plus pre-processing and some post-processing.
I was telling a colleague how my department has started using Rust for some parts of our projects lately (normally Python was good enough for almost everything, but we wanted to try it out).
They asked me why we're not using MATLAB. They were not joking. So I can at least tell you their reasoning: it was their first programming language in university, it's safer and faster than Python, and it's quite challenging to use.
"Just use MATLAB" - Someone with a kind heart who has never deployed anything to anything
I think OP is making a joke about Python's GIL, which means that even if you are explicitly multithreading, only one thread ever executes Python bytecode at a time, which can defeat the point in some circumstances.
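A quick way to see that effect (a sketch; the loop size is arbitrary): a CPU-bound function run in two threads takes roughly as long as running it twice sequentially.

```python
# Sketch: CPU-bound work gets no speedup from threads under the GIL.
import time
from threading import Thread

def burn():
    s = 0
    for i in range(10_000_000):   # pure-Python loop, holds the GIL while running
        s += i

if __name__ == "__main__":
    t0 = time.perf_counter()
    burn(); burn()
    print("sequential :", round(time.perf_counter() - t0, 2), "s")

    t0 = time.perf_counter()
    a, b = Thread(target=burn), Thread(target=burn)
    a.start(); b.start(); a.join(); b.join()
    print("two threads:", round(time.perf_counter() - t0, 2), "s")  # roughly the same
```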
I initially read this as “all programmers are single-threaded” and thought to myself, “yeah, that tracks”
Oh wow, a programming language that is not supposed to be used for every single piece of software in the world. Unlike JavaScript, for example, which should absolutely be used for making everything (horrible). Node.js was a mistake.
Don't worry, it'll use all the RAM anyway.
No RAM gets wasted!
Do you mean Synapse the Matrix server? In my experience, Conduit is much more efficient.
I wish they would switch the reference implementation to Conduit.
There are core components on the client side in Rust, so maybe that's the way forward.
I thought this was about Excel and was like, yeah haha!
But it's about Python, so I'm officially offended.
I prefer this default. I'm sick of having to rein in Numba cores or OpenBLAS threads or other out-of-control software that immediately tries to bottleneck my stack.
cgroups (Docker/LXC) are the obvious solution, but they shouldn't have to be.
Python
...so... so you made it single-threaded?
I'll be honest, this only matters when running single services that are very expensive. It's fine if your program can't be parallelized, as long as the OS does its job and spreads the love around the CPUs.
It only took us how many years?