the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERF/SWERFs Not Welcome
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml
Rule 9: If you post ironic rage bait, I'm going to make a personal visit to your house to make sure you never make this mistake again
Can someone explain the human brain to me or something? I've always been under the impression that it's kinda like the neural networks AIs use, but like many orders of magnitude more complex. ChatGPT definitely has literally zero consciousness to speak of, but I've always thought a complex enough AI could get there in theory
Yeah, there are some ideas about there clearly being a difference, in that the brain isn't feed-forward like these algorithms are. The book I Am a Strange Loop is a great read on the topic of consciousness. But I bet these models hit a massive plateau as they pump them full of bigger, shittier data. Who knows if we'll ever achieve any actual parity between human and AI experience.
At some point they started incorporating recurrent connection topologies. But the model of the neuron itself hasn't changed very much, and it's a deeply simplistic analogy that, to my knowledge, hasn't been connected to actual biology. I'll be more impressed when they're able to start emulating the structures and connective topologies actually found in real animals, producing a functioning replica. Until they can do that, there's no hope of replicating anything like human cognition.
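For what it's worth, here's roughly the entire "neuron" these models are built from: a weighted sum pushed through a squashing function. This is a generic sketch, not any particular library's implementation:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The standard deep-learning unit: a weighted sum of inputs squashed
    by a nonlinearity. No dendrites, ion channels, spike timing, or
    neuromodulators -- the biology stops at "cells connect to cells"."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing

# A layer is just many of these in parallel; a feed-forward net stacks
# layers so signals only flow one direction. Recurrent nets add loops,
# but the unit itself stays this simple.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```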
If you read the current literature on the science of consciousness, the reality is that the best we can do is use things like neuroscience and psychology to rule out a couple of previously prominent theories of how consciousness probably works. Beyond that, we’re still very much in the philosophy stage. I imagine we’ll eventually look back on a lot of the metaphysics currently being written and it will sound about as crazy as “obesity is caused by inhaling the smell of food”, which was a belief of miasma theory before germ theory was discovered.
That said, speaking purely in terms of brain structures, the math that most LLMs do is not nearly complex enough to model a human brain. The fact that we can optimize an LLM for its ability to trick our pattern recognition into perceiving it as conscious does not mean the underlying structures are the same. Similar to how film will always be a series of discrete pictures that blur together into motion when played fast enough. Film is extremely good at tricking our sight into perceiving motion. That doesn’t mean I’m actually watching a physical Death Star explode every time A New Hope plays.
I suppose I already figured that we can't make a neural network equivalent to a human brain without a complete understanding of how our brains actually work. I also suppose there's no way to say anything certain about the nature of consciousness yet.
So I guess I should ask this follow up question: Is it possible in theory to build a neural network equivalent to the absolutely tiny brain and nervous system any given insect has? Not to the point of consciousness given that's probably unfalsifiable, also not just an AI trained to mimic an insect's behavior, but a 1:1 reconstruction of the 100,000 or so brain cells comprising the cognition of relatively small insects? And not with an LLM, but instead some kind of new model purpose built for this kind of task. I feel as though that might be an easier problem to say something conclusive about.
The biggest issue I can think of with that idea is that the neurons in neural networks are only superficially similar to real, biological neurons. But that once again strikes me as a problem of complexity. Individual neurons are probably much easier to model somewhat accurately than an entire brain is, although still well beyond our reach. If we manage to determine this is possible, then it would seemingly imply to me that someday in the future we could slowly work our way up the complexity gradient from insect cognition to mammalian cognition.
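For what it's worth, computational neuroscience does have single-neuron models that are a bit closer to the biology than the weighted sum used in deep learning. Here's a minimal sketch of a leaky integrate-and-fire neuron; the parameter values are arbitrary defaults, not taken from any real insect:

```python
def leaky_integrate_and_fire(input_current, dt=0.1, v_rest=-65.0,
                             v_thresh=-50.0, v_reset=-70.0, tau=10.0):
    """Simulate membrane voltage over time. The voltage leaks back toward
    rest, integrates injected current, and emits a spike (then resets)
    when it crosses threshold. Still a cartoon of a real neuron, but a
    step closer than a sigmoid."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        dv = (-(v - v_rest) + current) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset
    return spikes

# Constant drive eventually produces a regular spike train.
print(leaky_integrate_and_fire([20.0] * 1000))
```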
IIRC it's been tried and they utterly failed. Part of the problem is that "the brain" isn't just the central nervous system -- a huge chunk of relevant nerves are spread through the whole body and contribute to its function, but they're deeply specialized and how they actually work is not yet well studied. In humans, a huge percentage of our nerve cells are actually in our gut, and another meaningful fraction is spread through the rest of the body. Basically, sensory input arrives at the brain heavily preprocessed, and some amount of memory isn't stored centrally. And that's all before we even talk about how little we know about how neurons actually work -- the last time I was reading about this (a decade or so ago) there was significant debate about whether real processing even happened in the neurons, or whether it was all in the connective tissue, with the neurons basically acting like batteries. The CS model of a neuron is just woefully lacking any real basis in biology except by a poorly understood analogy.
At a structural level there are some similarities, but a lot of the hype about how close it is is strictly marketing hype that some credulous computer touchers buy into.
I saw a lot of this for the first time during the LK-99 saga when the only active discussion on replication efforts was on r/singularity. For the past solid year or two before LK-99, all they'd been talking about were LLMs and other AI models. Most of them were utterly convinced (and betting actual money on prediction sites!) that we'd have a general AI in like two years and "the singularity" by the end of the decade.
At a certain point it hit me that the place was a fucking cult. That's when I stopped following the LK-99 story. This bunch of credulous rubes has taken a pile of misinterpreted pop-science factoids and incoherently compiled them into a religion. I realized I can pretty safely disregard any hyped-up piece of tech those people think will change the world.
They want to dismiss, ignore, or outright purge the knowledge, contributions, and dissent of everyone and anyone that doesn't nod along and say "agreed, everything is a computer program. Well memed, tech billionaire sirs. All that needs to be done is hack the program to win at everything forever!"
This thread has already got its own unsubstantiated, coercively presumptive "in all fairness, everything is an algorithm" claim to that effect.
While we're here, can I get an explanation on that one too? I think I'm having trouble separating the concept of algorithms from the concept of causality in that an algorithm is a set of steps to take one piece of data and turn it into another, and the world is more or less deterministic at the scale of humans. Just with the caveat that neither a complex enough algorithm nor any chaotic system can be predicted analytically.
I think I might understand it better with some examples of things that might look like algorithms but aren't.
An algorithm is: an unambiguous sequence of prescribed instructions for achieving a recognizable goal.
For the sake of argument, let’s be real generous with the terms “unambiguous”, “sequence”, “goal”, and “recognizable” and say everything is an algorithm if you squint hard enough. It’s still not the end-all-be-all of takes that it’s treated as.
When you create an abstraction, you remove context from a group of things in order to focus on their shared behavior(s). By removing that context, you’re also removing the ability to describe and focus on non-shared behavior(s). So picking and choosing which behavior to focus on is not an arbitrary or objective decision.
If you want to look at everything as an algorithm, you’re losing a ton of context and detail about how the world works. This is a useful tool for us to handle complexity and help our minds tackle giant problems. But people don’t treat it as a tool to focus attention. They treat it as a secret key to unlocking the world’s essence, which is just not valid for most things.
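To make that concrete with a made-up toy example: once you abstract a crab and a tree down to one shared property, the code literally cannot see anything else about them.

```python
from dataclasses import dataclass

@dataclass
class Crab:
    legs: int = 8
    shell: str = "carapace"
    habitat: str = "shore"

@dataclass
class Tree:
    legs: int = 0
    bark: str = "rough"
    habitat: str = "forest"

def count_legs(things):
    """This abstraction only knows about .legs -- the shell, the bark,
    and the habitat are all context it has thrown away."""
    return sum(t.legs for t in things)

print(count_legs([Crab(), Tree()]))  # 8, and nothing else survives
```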
Thanks for the help, but I think I'm still having some trouble understanding what that all means exactly. Could you elaborate on an example where thinking of something as an algorithm results in a clearly and demonstrably worse understanding of it?
Algorithmic thinking is often bad at examining aspects of evolution. Like the fact that crabs, turtles, and trees are all convergent forms that have each evolved multiple times through different paths. What is the unambiguous instruction set to evolve a crab? What initial conditions do you need for it to work? Can we really call the “instruction set” to evolve crabs “prescribed”? Prescribed by whom? Like, there’s a really common mental pattern with evolutionary thinking where we want to sort variations into meaningful and not-meaningful buckets, where this particular aspect of this variation was advantageous, whereas this one is just a fluke. Stuff like that. That’s much closer to algorithmic thinking than the reality where it is a truly random process and the only thing that makes it create coherent results is relative environmental stability over a really long period of time.
I would also guess that algorithmic thinking would fail to catch many aspects of ecological systems, but I've thought less about that. It’s not that these subjects can’t gain anything from being looked at through an algorithmic lens. Some really simple mathematical models of population growth are scarily accurate, actually. But insisting on only seeing them algorithmically will not bring you closer to the essence of these systems either.
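For example, the logistic map is about the simplest population model there is, one multiplication per generation, and it still captures real boom-and-bust dynamics surprisingly well in some regimes. A toy sketch:

```python
def logistic_map(x0, r, steps):
    """x is population as a fraction of carrying capacity; r is the
    growth rate. One update rule, applied over and over."""
    x = x0
    history = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        history.append(x)
    return history

# r < 3 settles to a stable population; r around 3.9 is chaotic.
print(logistic_map(0.5, 2.8, 50)[-1])   # settles near 0.643
print(logistic_map(0.5, 3.9, 50)[-3:])  # bounces around unpredictably
```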
Okay, I think I get it now. I see how one could really twist something like your evolution example every which way to make it look like an algorithm. Things like saying the process of evolving crabs is prescribed by the environmental conditions selecting for crab-like traits or whatever, but I can see how doing that is so overly broad as to be a useless way to analyze the situation.
One more thing: I don't know enough about algorithms to really say, but isn't it possible for an algorithm to produce wildly varying results from nearly identical inputs? Like how a double pendulum is analytically unpredictable. What's more, could the algorithmic nature of a system be entirely obscured as a result of it being composed of many associated algorithms linked input to output in a net, some of which may even be recursively linked? That looks to me like it could be a source of randomness and ambiguity in an algorithmic system that would be borderline impossible to suss out.
I think what you’re talking about starts to get into definitional differences between different fields, but regardless I think the answer to the underlying questions is “yes”. We can talk about a function’s “purity”, meaning that if a function is pure, it will always produce the same output for the same input and will not change the state of any other aspects of the system it exists within. This concept is different from chaotic systems like you’re discussing, where the “distance” between outputs tends to be large between inputs whose distance is small. So some computer systems have the properties you’re talking about because they’re impure. Others have them because they’re chaotic.
A lot of functions which are both pure and chaotic are used as pseudo-random number generators, meaning they will always produce the same number for a given seed, but are exceedingly difficult to predict. But creating perfectly chaotic systems is very difficult (maybe mathematically impossible? idr) and a lot of the math used in cryptography involves attacking functions by finding ways to reverse them efficiently, as well as finding ways to prevent those attacks.
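For a concrete "pure but hard to predict" example, here's Marsaglia's xorshift64 (the algorithm itself is real and widely used; this little generator wrapper is just a sketch, and it's nowhere near cryptographic strength):

```python
def xorshift64(seed):
    """Deterministic: the same seed always yields the same stream.
    But without the seed, the outputs look like noise."""
    x = seed & 0xFFFFFFFFFFFFFFFF
    while True:
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        yield x

gen_a = xorshift64(42)
gen_b = xorshift64(42)
print([next(gen_a) for _ in range(3)])  # same three numbers...
print([next(gen_b) for _ in range(3)])  # ...reproduced exactly
```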
But yes, all of the things you mentioned can be sources of complexity that can make things chaotic, but that doesn’t necessarily make them nondeterministic. A lot of chaotic systems are sensitive to things like the exact millisecond at which some function runs or other sources of userspace randomness like user input or resource usage. Meanwhile, a good chunk of nondeterministic behavior in software comes from asynchronous race conditions.
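And a minimal sketch of that last point, a read-modify-write race between threads. The sleep is artificial, just to widen the window so the race fires reliably:

```python
import threading
import time

balance = 0

def deposit(amount):
    global balance
    current = balance           # read
    time.sleep(0.001)           # widen the race window
    balance = current + amount  # write -- may clobber another thread's write

threads = [threading.Thread(target=deposit, args=(100,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Deterministic code, nondeterministic result: usually far less than 1000.
print(balance)
```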
Also, the word they actually mean is heuristic.
Can you say more things?
When you soften those words, what you're left with is a heuristic -- a method that occasionally does what you expect but is underspecified. It's a decision procedure where the steps aren't totally clear, or that sometimes arrives at unexpected results because it fails to capture the underlying model of reality at play.
Oh this is a neat point. Thank you!
I'm not saying it's an impossible association to make on some larger scale, so much as I'm saying it's presumptive and arrogant to declare "in all fairness, it is" as if that's some indisputable claim.
I could just as easily say the human brain is just a sufficiently complex series of vacuum tubes. Or gears and clockwork. Or wheels. Same reductionist summary attempt, same omission of extraordinary evidence to cover the extraordinary claim.
Don't they also have evidence suggesting brain pathways work in ways we just can't understand? I've seen "4th dimension processing" thrown around, I think to do with smell or something (quantum tunneling being needed for smell to work, or something physics-based, I can't remember), to describe how certain processing and understanding happen in parallel or even before a stimulus, suggesting the brain can make connections in ways other than just neurons and electric current, or something like that.
So like to compare the brain to a computer is completely reductionist and computer touchers love it lol.
I realise this comment is just "here's a load of patchy second hand stuff I've heard misrepresented, any idea what I'm talking about cos I fucking don't lmao" so apologies.
There's enough that's unclear or not yet understood for me to say that claiming "in all fairness" that the human brain is just an algorithm/meat-computer/whatever is lazy and arrogant reductionism, that's for sure.
Someone with a hammer, as the saying goes, thinks everything is a nail.
And a computer toucher wants everything to fit neatly into computer programming.
It was a neat analogy for the time.
Kinda like how "I'm a woman trapped inside a man's body" is reductionist for explaining the trans femme experience and transness academically, but if you want a quick, snappy one-liner, it can deliver a simple explanation that reduces wild misunderstanding in a layman. Anyone who's a redditor will take it too far, though.
I can see it being comparable.
I take issue with the extraordinary and reductionist claim of that's all it is, prefaced with "in all fairness" as if to will computer toucher feelings into hard irrefutable reality. It's like when someone says "let's be honest" to preface an opinion where it isn't just their "honest" opinion but an implication that it should be everyone's "honest" opinion if they are "honest" as well.
speaking of 4th dimensional processing, https://en.wikipedia.org/wiki/Holonomic_brain_theory is pretty interesting imo
Oh, it might have been this I heard getting chatted about 👀
No, because the human brain is far more complicated and we don't know how it works.
That's pretty much the current thinking in mainstream neuroscience, because neural networks vaguely sort of mirror what we think at least some neurons in human brains do. The reality is nobody has any good evidence. It may be that if ChatGPT got ten jillion more nodes it'd be like a thinking brain, but it's more likely that there are hundreds more factors involved than just more neurons.