
the-podcast guy recently linked this essay. it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin

[-] TraumaDumpling@hexbear.net 9 points 5 months ago* (last edited 5 months ago)

here are some more relevant articles for consideration from a similar perspective, just so we know it's not literally just one guy from the 80s saying this. some cite this article as well but include other sources. the authors are probably not 'based' in a political sense; i do not endorse the people but rather the arguments in some parts of the quoted segments.

https://medium.com/@nateshganesh/no-the-brain-is-not-a-computer-1c566d99318c

Let me explain in detail. Go back to the intuitive definition of an algorithm (remember this is equivalent to the more technical definition)— “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.” Now if we assume that the input and output states are arbitrary and not specified, then time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful. If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc., then we are talking about those systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long as you find suitable physical implementations) but are clearly specified (voltage low=Boolean logic zero generally in modern day electronics), as in the intuitive definition of an algorithm….with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!! All the systems that we refer to as modern day computers and want to restrict our usage of the word computers to are in fact created by us (or our intelligence to be more specific), in which we decide what the input and output states are. Take your calculator for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the pressing of the 3,5,+ and = buttons as inputs, and the number that pops up on the LED screen as output, that allows you to interpret the time evolution of the system as a computation, and imbues the computational property to the calculator. Physically, nothing about the electron flow through the calculator circuit makes the system evolution computational. This extends to any modern day artificial system we think of as a computer, irrespective of how sophisticated the I/O behavior is. The inputs and output states of an algorithm in computing are specified by us (and we often have agreed upon standards on what these states are eg: voltage lows/highs for Boolean logic lows/highs). If we miss this aspect of computing and then think of our brains as executing algorithms (that produce our intelligence) like computers do, we run into the following -

(1) a computer is anything which physically implements algorithms in order to solve computable functions.

(2) an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.

(3) the specific input and output states in the definition of an algorithm and the arbitrary relationship b/w the physical observables of the system and computational states are specified by us because of our intelligence,which is the result of…wait for it…the execution of an algorithm (in the brain).

Notice the circularity? The process of specifying the inputs and outputs needed in the definition of an algorithm is itself defined by an algorithm!! This process is of course a product of our intelligence/ability to learn — you can’t specify the evolution of a physical CMOS gate as a logical NAND if you have not learned what NAND is already, nor are capable of learning it in the first place. And any attempt to describe it as an algorithm will always suffer from the circularity.
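The encoding point is easy to make concrete. Below is a minimal sketch in Python (an illustration of the quoted argument, not code from the article): the same physical gate behaviour, read under two different human-chosen voltage-to-bit conventions, yields two different Boolean functions.

```python
# Sketch of the specification point: one physical device, two
# human-chosen encodings, two different "computed" functions.

# Physical behaviour of a CMOS NAND-style gate: the output voltage
# is LOW only when both input voltages are HIGH.
def physical_gate(v_a, v_b):
    return "LOW" if (v_a, v_b) == ("HIGH", "HIGH") else "HIGH"

ACTIVE_HIGH = {"HIGH": 1, "LOW": 0}  # convention 1: HIGH means 1
ACTIVE_LOW = {"HIGH": 0, "LOW": 1}   # convention 2: LOW means 1

def truth_table(encoding):
    decode = {bit: volt for volt, bit in encoding.items()}
    return {(a, b): encoding[physical_gate(decode[a], decode[b])]
            for a in (0, 1) for b in (0, 1)}

print(truth_table(ACTIVE_HIGH))  # NAND: only (1, 1) -> 0
print(truth_table(ACTIVE_LOW))   # NOR:  only (0, 0) -> 1
```

Same electrons either way; which function the device "computes" lives entirely in the labelling convention we supply, which is the author's point about specification.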

https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness

And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.

The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application …”

https://www.forbes.com/sites/alexknapp/2012/05/04/why-your-brain-isnt-a-computer/?sh=3739800f13e1

Adherents of the computational theory of mind often claim that the only alternative theories of mind would necessarily involve a supernatural or dualistic component. This is ironic, because fundamentally, this theory is dualistic. It implies that your mind is something fundamentally different from your brain - it's just software that can, in theory, run on any substrate.

By contrast, a truly non-dualistic theory of mind has to state what is clearly obvious: your mind and your brain are identical. Now, this doesn't necessarily mean that an artificial human brain is impossible - it's just that programming such a thing would be much more akin to embedded systems programming rather than computer programming. Moreover, it means that the hardware matters a lot - because the hardware would have to essentially mirror the hardware of the brain. This enormously complicates the task of trying to build an artificial brain, given that we don't even know how the 300 neuron roundworm brain works, much less the 300 billion neuron human brain.

But looking at the workings of the brain in more detail reveals some more fundamental flaws with computational theory. For one thing, the brain itself isn't structured like a Turing machine. It's a parallel processing network of neural nodes - but not just any network. It's a plastic neural network that can in some ways be actively changed through influences by will or environment. For example, so long as some crucial portions of the brain aren't injured, it's possible for the brain to compensate for injury by actively rewriting its own network. Or, as you might notice in your own life, it's possible to improve your own cognition just by getting enough sleep and exercise.

You don't have to delve into the technical details too much to see this in your life. Just consider the prevalence of cognitive dissonance and confirmation bias. Cognitive dissonance is the ability of the mind to believe what it wants even in the face of opposing evidence. Confirmation bias is the ability of the mind to seek out evidence that conforms to its own theories and simply gloss over or completely ignore contradictory evidence. Neither of these aspects of the brain is easily explained through computation - it might not even be possible to express these states mathematically.

What's more, the brain simply can't be divided into functional pieces. Neuronal "circuitry" is fuzzy and, from a hardware perspective, it's "leaky." Unlike the logic gates of a computer, the different working parts of the brain impact each other in ways that we're only just beginning to understand. And those circuits can also be adapted to new needs. As Mark Changizi points out in his excellent book Harnessed, humans don't have portions of the brain devoted to speech, writing, or music. Rather, they're emergent - they're formed from parts of the brain that were adapted to simpler visual and hearing tasks.

If the parts of the brain we think of as being fundamentally human - not just intelligence, but self-awareness - are emergent properties of the brain, rather than functional ones, as seems likely, the computational theory of mind gets even weaker. Think of consciousness and will as something that emerges from the activity of billions of neural connections, similar to how a national economy emerges from billions of different business transactions. It's not a perfect analogy, but that should give you an idea of the complexity. In many ways, the structure of a national economy is much simpler than that of the brain, and despite the fact that it's a much more strictly mathematical proposition, it's incredibly difficult to model with any kind of precision.

The mind is best understood, not as software, but rather as an emergent property of the physical brain. So building an artificial intelligence with the same level of complexity as that of a human intelligence isn't a matter of just finding the right algorithms and putting it together. The brain is much more complicated than that, and is very likely simply not amenable to that kind of mathematical reductionism, any more than economic systems are.

[-] TraumaDumpling@hexbear.net 4 points 5 months ago

https://www.infoq.com/articles/brain-not-computer/

Given these facts, Jasanoff argues, you could build a chemistry-centric model of the brain with electrical signals of neurons facilitating the movement of chemical signals, instead of the other way around. The electrical signals could be viewed as part of a chemical process because of the ions they depend on. Glial cells affect the uptake of neurotransmitters, which in turn affects neuron firing. From an evolutionary perspective, the chemical brain is no different than the chemical liver or kidneys.

An epigenetic understanding of dopamine, drug addiction, and depression focuses on the chemistry in the brain, not the electrical circuitry.

Our brains function just like the rest of our biological body, not as an abstraction of hardware and software components. To Jasanoff, there is no distinction between a mental event and a physical event in the body.

https://intellerts.com/sorry-your-brain-is-not-like-a-computer/

Humans rely on intuition, worldviews, thoughts, beliefs, our conscience. Machines rely on algorithms, which are inherently dumb. Here’s David Berlinski’s definition of an algorithm:

“An algorithm is a finite procedure, written in a fixed symbolic vocabulary, governed by precise instructions, moving in discrete steps, 1, 2, 3, . . ., whose execution requires no insight, cleverness, intuition, intelligence, or perspicuity, and that sooner or later comes to an end.”

But not every machine relies on dumb algorithms alone. Some machines are capable of learning. So, we must dive a little deeper to understand the inner workings of AI. I like this definition from John C. Lennox PhD, DPhil, DSc – Professor of Mathematics (Emeritus) at the University of Oxford:

“An AI system uses mathematical algorithms that sort, filter and select from a large database.

The system can ‘learn’ to identify and interpret digital patterns, images, sound, speech, text data, etc.

It uses computer applications to statistically analyse the available information and estimate the probability of a particular hypothesis.

Narrow tasks formerly (normally) done by a human can now be done by an AI system. Its simulated intelligence is uncoupled from conscience.”

Sort, filter and select. If you put it as simply as this, which in my opinion is the case, then you realize that AI is completely different from the human brain, let alone who we are as human beings.

[-] Frank@hexbear.net 5 points 5 months ago

You can build a computer out of anything that can flip a logic gate, up to and including red crabs. It doesn't matter if you're using electricity or chemistry or crabs. That's why it's a metaphor. This really all reads as someone arguing with a straw man who literally believes that neurons are logic gates or something. "Actually brains have chemistry" sounds like it's supposed to be a gotcha when people are out there working on building chemical computers, chemical data storage, chemical automata right now. There's no dichotomy there, nor does it argue against using computer terminology to discuss brain function. It just suggests a lack of creativity, flexibility, and awareness of the current state of the art in chemistry.

It's also apparently arguing with people who think chat-gpt and neural nets and llms are intelligent and sentient? In which case you should loudly specify that in the first line so people know you're arguing with ignorant fools and they can skip your article.

Humans rely on intuition, worldviews, thoughts, beliefs, our conscience. Machines rely on algorithms, which are inherently dumb. Here’s David Berlinski’s definition of an algorithm: “An algorithm is a finite procedure, written in a fixed symbolic vocabulary, governed by precise instructions, moving in discrete steps, 1, 2, 3, . . ., whose execution requires no insight, cleverness, intuition, intelligence, or perspicuity, and that sooner or later comes to an end.”

And what the hell is this? Jumping up and down and screaming "i have a soul! Consciousness is privileged and special! I'm not a meat automata i'm a real boy!" is not mature or productive. This isn't an argument, it's a tantrum.

The deeper we get into this it sounds like dumb guys arguing with dumb guys about reductive models of the mind that dumb guys think other dumb guys rigidly adhere to. Ranting about ai research without specifying whether you're talking about long standing research trends or the religious fanatics in California proselytizing about their fictive machine gods isn't helpful.

[-] TraumaDumpling@hexbear.net 1 points 5 months ago* (last edited 5 months ago)

“an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.” Now if we assume that the input and output states are arbitrary and not specified, then time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful. If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc., then we are talking about those systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long as you find suitable physical implementations) but are clearly specified (voltage low=Boolean logic zero generally in modern day electronics), as in the intuitive definition of an algorithm….with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!! All the systems that we refer to as modern day computers and want to restrict our usage of the word computers to are in fact created by us (or our intelligence to be more specific), in which we decide what the input and output states are. Take your calculator for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the pressing of the 3,5,+ and = buttons as inputs, and the number that pops up on the LED screen as output, that allows you to interpret the time evolution of the system as a computation, and imbues the computational property to the calculator. Physically, nothing about the electron flow through the calculator circuit makes the system evolution computational.

you literally ignore the actual part of the text that addresses your problems.

you can use the word 'tantrum' while you ignore the literal words used and their meanings if you want but it only makes you seem illiterate and immature.

'intuition worldviews thoughts beliefs our conscience' are specific words with specific meanings. no computer (information processing machine) has 'consciousness', no computer has 'intuition', no computer has internal subjective experience - not even an idealized one with 'infinite processing power' like a turing machine. humans do. therefore humans are not computers. we cannot replicate 'intuition' with information processing, we cannot replicate 'internal subjective experience' with information processing. we cannot bridge the gap between subjective internal experience and objective external physical processes, not even hypothetically, there is not even a theoretical experiment you could design for it, there is not even theoretical language to describe it without metaphor. We could learn and simulate literally every single specific feature of the brain and it would not tell us about internal subjective experiences, because it is simply not the kind of phenomena that is understood by the field of information processing. If you have a specific take on the 'hard problem of consciousness' that's fine, but to say that 'anyone who disagrees with me about this is just stupid' is immature and ignorant, especially in light of your complete failure to understand things like Turing machines.

I usually like your posts and comments but this thread has revealed a complete ignorance of the philosophical and theoretical concepts under discussion here and an overzealous hysteria regarding anything that is remotely critical of a mechanistic physicalist reductionist worldview. you literally ignored or glazed over any relevant parts of the text i quoted, misunderstood the basic nature of what a turing machine is, misunderstood the nature of the brain-as-computer discourse, all with the smuggest redditor energy humanly possible. I will not be further engaging after this post and will block you on my profile, have a nice life.

[-] Frank@hexbear.net 2 points 5 months ago

Well, Traumadumpling isn't going to read this, so I'm just amusing myself.

we cannot bridge the gap between subjective internal experience and objective external physical processes, not even hypothetically, there is not even a theoretical experiment you could design for it, there is not even theoretical language to describe it without metaphor. We could learn and simulate literally every single specific feature of the brain and it would not tell us about internal subjective experiences, because it is simply not the kind of phenomena that is understood by the field of information processing.

This is all because subjectivity isn't falsifiable and is not currently something that the scientific method can interact with. As far as the scientific method is concerned it doesn't exist. idk why people are even interested in it, I don't see why it's important. The answer to "P-zombies" is that it doesn't matter and isn't interesting. If something performs all the observable functions of an intelligent mind with a subjective experience... well... it performs all the observable functions of an intelligent mind. Why are you interested in subjectivity if you can't evaluate whether it's even happening? You can't test it, you can't confirm or deny it. So just put it back in the drawer and move on with your life. It's not even a question of whether it does or doesn't exist. It's that the question isn't important or interesting. It has no practical consequences at all unless people, for cultural reasons, decide that something that performs the functions of an intelligent mind doesn't deserve recognition as a person because of their ingrained cultural belief in the existence and importance of a soul.

I do see this as directly tied to atheism. Part of making the leap to atheism and giving up on magic is admitting that you can't know, but based on what you can observe the gods aren't there. No one can find them, no one can talk to them, they never do anything. If there are transcendental magic people it's not relevant to your life.

Phenomenology is the same way. It just doesn't matter, and continuing to carry it around with you is an indication of immaturity, a refusal to let go and accept that some things are unknowable and probably always will be. Hammering on and on that we can't explain how subjectivity arises from physical processes doesn't change the facts on the ground; we've never observed anything but physical processes, and as such it is reasonable to assume that there is a process by which subjectivity emerges from the physical. Because there's nothing else. There's nothing else that could be giving rise to subjectivity. And, again, we don't know. Maybe there is a magic extradimensional puppeteer. But we don't know in the same sense that we don't know that the sun will rise tomorrow. It's one of the not particularly interesting problems with the theory of science - we assume that things that happened in the past predict things that will happen in the future. We do not, and cannot know if the sun will rise tomorrow. But as a practical matter it isn't important. With nothing else to explain the phenomena we observe, we can assume within the limits in which anything at all is predictable that the subjective experience is an emergent property of the crude, physical, boring, terrifyingly mortal meat.

More and more philosophy's dogged adherence to these ideas strikes me as a refusal to let go, to grow up, to embrace the unpredictable violence of a cold, hostile, meaningless universe. Instead of saying we don't and cannot know, and therefore it's not worth worrying about, philosophers cling to this security blanket of belief that we are, somehow, special. That we're unique and our existence has meaning and purpose. That we're different from the unthinking matter of stars or cosmic dust.

mechanistic physicalist reductionist worldview

https://en.wikipedia.org/wiki/Physicalism

Like this is just materialism. Physicalism isn't a belief, it's a scientific observation. We haven't found anything except the physical and as much as philosophers obsess about subjectivity and qualia and what have you those concepts, while mildly interesting intellectual topics, aren't relevant to science. You can't measure them, you cannot prove if they exist or do not exist. Maybe someday we'll have a qualia detector and we'll actually be able to do something with them, but right now they're not relevant. I'm a reductionist physicalist mechanist because I'm tired of hearing about ghosts and souls and magic. No question is being raised. There's no investigation that can proceed from these concepts. You can't do anything with them except yell at people who think, based on evidence, that physics is the only system that we can observe and investigate. And it's not "these things don't exist", it's that whether they exist or not, we can't observe or interact with them, so we can't do anything with them. You can't test qualia, you can't measure it. If we can some day, cool. But until then it's just... not useful.

AI is everywhere.

I didn't read the article, just commented on the excerpts. And when I do read the article this is the first line? Conflating LLMs and neural nets with AI? Accepting the tech bro marketing buzzword at face value?

Terms like “neural networks” certainly have not helped and, from Musk to Hawking, some of the greatest minds have propagated this myth.

Neural networks are called that because they're modeled on the behavior of neurons, not the other way around. Hawking could be a dork about some things but why put him in the same sentence as an ignorant buffoon like Musk?

Is what we're arguing here actually that psychologists and philosophers are yelling at tech bros because they think that neuroscientists using computer metaphors actually believe a seventy-year-old theory of cognition originating from psychology when psychologists were still mostly criminals and butchers?

Like saying the brain is a biological organ? That's not a gotcha when biological computers exist and research teams are encoding gigabytes of data, like computer readable data, 1s and 0s, as DNA. Whatever the brain is, we can build computers out of meat, we've done it, it works. There is no distinction between biological and machine, artifact and organ, meat and metal. It's an illusion of scale combined with, frankly, superstition. A living cell operates according to physical law just like everything else. It has a lot of components, some of them are very small, some of them we don't understand and I'm sure there are processes and systems we haven't identified, but all those pieces and processes and systems follow physical laws the same as everything else in creation. There's no spooky ghosts or quintessence down there.
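The DNA point is easy to illustrate. A toy sketch (my own illustration under simplifying assumptions, not any lab's actual pipeline; real DNA storage adds error correction and avoids long runs of one base): two bits per nucleotide, round-tripped losslessly.

```python
# Toy illustration: digital data is just symbols, so two bits map
# cleanly onto one of the four bases (A=00, C=01, G=10, T=11).
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def dna_to_bytes(strand: str) -> bytes:
    out = []
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

strand = bytes_to_dna(b"brain")
print(strand)  # CGAGCTAGCGACCGGCCGTG
assert dna_to_bytes(strand) == b"brain"
```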

Like, if the message here is to tell completely ignorant laypeople and tech bros who haven't read a book that wasn't about webdev that the brain does not literally have circuitry in it, fine, but say that. But right now we're very literally bridging the perceived gap between mechanical human artifacts and biology. We're building biological machines, biological computers. These are not completely different categories of things that can never be unified under a single theory to explain their function.

Let's take a step back, look at "Capitalism as a real god", what Marx called it, or "Holy shit capitalism is literally Cthulhu" which is the formulation many people are independently arriving at these days. Capitalism is a gigantic system that emerges from the interactions of billions of humans. It's not located in any single human, or any subset of humans. It emerges from all of us, as we interact with each other and the world. There's no quintessence, no "subjectivity" that we could ever evaluate or interrogate or find. We can't say whether capitalism has a subjective experience or consciousness, whether there is an "I think therefore I am" drifting across the maddening complexity of financial transactions, commodity fetishism, resource extraction, and cooking dinner.

The brain has ~86 billion neurons (plus glial matter I know I know bear with me). There are about 8 billion humans, and each of us is vastly more complex than a brain cell. So if humans actually are components in an emergent system that is intelligent and maybe self-aware, there's only one order of magnitude fewer humans than there are cells in a human brain that, given lack of any other explanations, we must assume give rise to a thinking mind.

Is it impossible for such a system to have a subjective experience? Is it a serious problem? As it stands we can't assess whether such subjectivity exists in the system, whether the system has something meaningfully resembling a human mind. The difference in experience is likely so vast as to be utterly unbridgeable. A super-organism existing on a global level would, likely, not be able to communicate with us due to lack of any shared referents or experiences at all. A totally alien being unlike us except that it emerges from the interaction of less complex systems, seeks homeostasis, and reacts to its environment.

But, like, who cares? Whether capitalism is a dumb system or an emergent intelligence there's nothing we can do about it. We can't investigate the question and an answer wouldn't be useful. So move along. Have your moment of existential horror and then get on with your life.

I think that's what really bothers me about this whole subjectivity, qualia, consciousness thing. It's boring. It's just... boring. Being stuck on it doesn't increase my knowledge or understanding. It doesn't open up new avenues of investigation.

The conclusion I'm coming to is this whole argument isn't about computers or brains or minds, but rather phenomenology having reached a dead end. It's a reaction to the discipline's descent into irrelevance. The "Hard Problem of Consciousness" simply is not a problem.

[-] Abracadaniel@hexbear.net 2 points 5 months ago

Well said Frank, you're carrying this thread.

[-] Frank@hexbear.net 4 points 5 months ago* (last edited 5 months ago)

Almost all of this is people assuming other people are taking the metaphor too far.

The mind is best understood, not as software, but rather as an emergent property of the physical brain.

No one who is worth talking to about this disagrees with this. Everyone is running on systems theory now, including the computer programmers trying to build artificial intelligence. All the plagiarism machines run on systems theory and emergence. The people they're yelling at about reductive computer metaphors are doing the thing the author is saying they don't do, and the plagiarism machines were only possible because people were using systems theory and emergent behaviors arising from software to build the worthless things!

The brain is much more complicated than that, and is very likely simply not amenable to that kind of mathematical reductionism, any more than economic systems are.

This author just said that economics isn't maths, that it's spooky and mysterious and can't be understood.

This is so frustrating. "You see, the brain isn't like this extremely reductive model of computation, it's actually" and then the author just lists every advance, invention, and field of inquiry in computation for the last several decades.

But looking at the workings of the brain in more detail reveals some more fundamental flaws with computational theory. For one thing, the brain itself isn't structured like a Turing machine. It's a parallel processing network of neural nodes - but not just any network. It's a plastic neural network that can in some ways be actively changed through influences by will or environment. For example, so long as some crucial portions of the brain aren't injured, it's possible for the brain to compensate for injury by actively rewriting its own network. Or, as you might notice in your own life, it's possible to improve your own cognition just by getting enough sleep and exercise.

"The brain isn't a computer, it's actually a different kind of computer! The brain compensates for injury the same way the internet that was in some ways designed after the brain compensates for injury! If you provide the discrete nodes of a distributed network with the inputs they need to function efficiently the performance of the entire network improves!"

This is just boggling; what argument do they think they're making? Software does all these things specifically because scientists are investigating the functions of the brain and applying what they find to the construction of new computer systems. Our increasing understanding of the brain feeds back to novel computational models which generate new tools, data, and insight for understanding the brain!
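The injury-compensation parallel is concrete in routing. A minimal sketch (a hypothetical five-node topology, purely illustrative): a distributed network finds another path around a dead node with no central plan.

```python
from collections import deque

# Hypothetical topology; when a node "dies", traffic reroutes.
network = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route(net, start, goal, dead=frozenset()):
    """Breadth-first search for a path, skipping failed nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in sorted(net[path[-1]] - seen - set(dead)):
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(route(network, "A", "E"))              # ['A', 'B', 'D', 'E']
print(route(network, "A", "E", dead={"B"}))  # ['A', 'C', 'D', 'E']
```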

[-] Tomorrow_Farewell@hexbear.net 2 points 5 months ago

"The brain isn't a computer, it's actually a different kind of computer! The brain compensates for injury the same way the internet that was in some ways designed after the brain compensates for injury! If you provide the discrete nodes of a distributed network with the inputs they need to function efficiently the performance of the entire network improves!"

Not even that. They literally did not provide any argument that brains are not structured like Turing machines. Hell, the author seems to not be aware of backup tools in hardware and software, including RAID.

[-] TraumaDumpling@hexbear.net 1 points 5 months ago* (last edited 5 months ago)

https://medium.com/the-spike/yes-the-brain-is-a-computer-11f630cad736

people are absolutely arguing that the human brain is a turing machine. please actually read the articles before commenting, you clearly didn't read any of them in any detail or understand what they are talking about. a turing machine isn't a specific type of computer, it is a model of how all computing in all digital computers works, regardless of the specific software or hardware.

https://en.wikipedia.org/wiki/Turing_machine

A Turing machine is a mathematical model of computation describing an abstract machine[1] that manipulates symbols on a strip of tape according to a table of rules.[2] Despite the model's simplicity, it is capable of implementing any computer algorithm.[3]

A Turing machine is an idealised model of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. Typically, the sequential memory is represented as a tape of infinite length on which the machine can perform read and write operations.

In the context of formal language theory, a Turing machine (automaton) is capable of enumerating some arbitrary subset of valid strings of an alphabet. A set of strings which can be enumerated in this manner is called a recursively enumerable language. The Turing machine can equivalently be defined as a model that recognises valid input strings, rather than enumerating output strings.

Given a Turing machine M and an arbitrary string s, it is generally not possible to decide whether M will eventually produce s. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing.

The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus.

A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). Another mathematical formalism, lambda calculus, with a similar "universal" nature was introduced by Alonzo Church. Church's work intertwined with Turing's to form the basis for the Church–Turing thesis. This thesis states that Turing machines, lambda calculus, and other similar formalisms of computation do indeed capture the informal notion of effective methods in logic and mathematics and thus provide a model through which one can reason about an algorithm or "mechanical procedure" in a mathematically precise way without being tied to any particular formalism. Studying the abstract properties of Turing machines has yielded many insights into computer science, computability theory, and complexity theory.
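Since the definition just quoted is small enough to run, here is a minimal Turing machine sketch in Python (a toy of my own, not from the article): a tape of symbols, a table of rules, and a one-state program that flips every bit and halts on blank.

```python
# Minimal sketch of the quoted definition: an abstract machine that
# manipulates symbols on a tape according to a table of rules.
def run_turing_machine(rules, tape, state="start", blank="_"):
    cells = dict(enumerate(tape))  # sparse stand-in for an infinite tape
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table: (state, symbol read) -> (symbol to write, move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # 01001
```

The point of the formalism is that any algorithm, however elaborate, reduces to a rule table like `flip_bits` over some alphabet of symbols.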

[-] Tomorrow_Farewell@hexbear.net 3 points 5 months ago* (last edited 5 months ago)

(1) a computer is anything which physically implements algorithms in order to solve computable functions.
(2) an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.
(3) the specific input and output states in the definition of an algorithm and the arbitrary relationship b/w the physical observables of the system and computational states are specified by us because of our intelligence,which is the result of…wait for it…the execution of an algorithm (in the brain).
Notice the circularity? The process of specifying the inputs and outputs needed in the definition of an algorithm is itself defined by an algorithm!! This process is of course a product of our intelligence/ability to learn — you can’t specify the evolution of a physical CMOS gate as a logical NAND if you have not learned what NAND is already, nor are capable of learning it in the first place. And any attempt to describe it as an algorithm will always suffer from the circularity.

This is a rather silly argument. People hear about certain logical fallacies and build cargo cults around them. They are basically arguing 'but how can conscious beings process their perception of material stuff if their consciousness is tied to material things???', or 'how can we learn about our bodies if we need our bodies to learn about them in the first place? Notice the circularity!!!'.
The last sentence there is a blatant non sequitur. They provide literally no reasoning for why a thing wouldn't be able to learn stuff about itself using algorithms.

[-] Frank@hexbear.net 4 points 5 months ago

This whole discussion is becoming more and more frustrating bc it's clear that most of the people arguing against the brain as computer don't grasp what metaphor is, have a rigid understanding of what computers are and cannot flex that understanding to use it as a helpful basis of comparison, and apparently have just never heard of or encountered systems theory?

Like a lot of these articles are going "nyah nyah nyah the mind can't be software running on brain hardware that's dualism you're actually doing magic just like us!" And it's like my god how are you writing about science and you've never encountered the idea of complex systems arising from the execution of simple rules? Like put your pen down and go play Conway's Game of Life for a minute and shut up about algorithms and logic gates bc you clearly can't even see the gaping holes in your own understanding of what is being discussed.
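Since Conway's Game of Life keeps coming up, a minimal sketch (standard B3/S23 rules; the glider is the textbook pattern) shows a moving structure emerging from rules that say nothing about movement.

```python
from collections import Counter

# Game of Life, B3/S23: a dead cell with exactly 3 live neighbours is
# born; a live cell with 2 or 3 live neighbours survives.
def step(live):
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 steps the same shape reappears, shifted one cell diagonally:
print(cells == {(x + 1, y + 1) for x, y in glider})  # True
```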

[-] TraumaDumpling@hexbear.net 1 points 5 months ago

literally read anything about a Turing Machine because you are comically misunderstanding these articles.

[-] TraumaDumpling@hexbear.net 1 points 5 months ago* (last edited 5 months ago)

please read the entire article, you are literally not understanding the text. the following directly addresses your argument.

“an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.” Now if we assume that the input and output states are arbitrary and not specified, then time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful. If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc., then we are talking about those systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long as you find suitable physical implementations) but are clearly specified (voltage low=Boolean logic zero generally in modern day electronics), as in the intuitive definition of an algorithm….with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!! All the systems that we refer to as modern day computers and want to restrict our usage of the word computers to are in fact created by us (or our intelligence to be more specific), in which we decide what the input and output states are. Take your calculator for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the pressing of the 3,5,+ and = buttons as inputs, and the number that pops up on the LED screen as output, that allows you to interpret the time evolution of the system as a computation, and imbues the computational property to the calculator. Physically, nothing about the electron flow through the calculator circuit makes the system evolution computational.

[-] Tomorrow_Farewell@hexbear.net 3 points 5 months ago

please read the entire article, you are literally not understanding the text.

Unless the author redefines the words used in the bit that you quoted from them, I addressed their argument just fine.
In the case the author does redefine those words, then the bit that you quoted is literally meaningless unless you also quote the parts where the author defines the relevant words.

“an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.”

The author is just arbitrarily placing on algorithms the requirement that they 'can be followed mechanically, with no insight required'. This is silly for a few reasons.
Firstly, that's not how algorithms are defined in mathematics, nor is that how they are understood in the context of relevant analogies. Going to just ignore the 'mechanically' part, as the author seems to not be explaining what they meant, and my interpretations are all broad enough to conclude that the author is obviously incorrect.
Secondly, brains perform various actions without any sort of insight required. This part should be obvious.
Thirdly, the author's problem is that computers usually work without some sort of introspection into how they perform their tasks, and that nobody builds computers that inefficiently access some random parts of memory vaguely related to their tasks. The introspection part is just incorrect, and the point about the fact that we don't make hardware and software that does inefficient 'insight' has no bearing on the fact that computers that do those things can be built and that they are still computers.

The author is deeply unserious.

Now if we assume that the input and output states are arbitrary and not specified, then time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1), and hence too broad a definition to be useful

If their problem is that the analogy is not insightful, then fine. However, their thesis seems to be that the analogy does not apply well enough, which is different from that.

If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc.

Okay, so their thesis is not that the computer analogy is inapplicable, but that we do not work exactly the way PCs work? Sure.
I don't know why they had to make bad arguments regarding algorithms, though.

you can make Boolean logic...

There is no such thing as 'Boolean logic'. There is 'Boolean algebra', which is an algebraisation of logic.
The author also seems to assume that computers can only work with classical logic, and not any other sort of logic, for which we can implement suitable algebraisations.

with the most important part being that those physical states (and their relationship to the computational variables) are specified by us!!!

This is silly. The author is basically saying 'but all computers are intelligently made by us'. Needless to say, they are deliberately misunderstanding what computers are and are placing arbitrary requirements for something to be considered a computer.

All the systems that we refer to as modern day computers and want to restrict our usage of the word computers to

Who is this 'we'?

Again, the author is deeply unserious.

[-] TraumaDumpling@hexbear.net 1 points 5 months ago* (last edited 5 months ago)

Unless the author redefines the words used in the bit that you quoted from them, I addressed their argument just fine.

so you aren't going to read the article then.

No Investigation, No Right to Speak.

Here follow some selections from the article that deal with exactly the issues you focus on.

I strongly advise reading the entire article, and the two it is in response to, and furthermore reading about what a Turing Machine actually is and what it can be used to analyze.

The debate on whether the brain is a computer or not seems to have died down given the recent success of computer science ideas in both neuroscience and machine learning. I have seen a few recent articles on this subject from scientists, who have made strong claims that the brain is in fact literally a computer, and not just a useful metaphor, backed up with their reasons to believe so. One such article is this one by Dr. Blake Richards (and here is another one by Dr. Mark Humphries). I will mainly deal with the first one — a really good and extensive article. I would encourage readers to go through it slowly and in detail, for it provides a good look at how to think about what a computer is, and deals well with a lot of the weaker arguments brought against the ‘brain is a computer’ claim (like the ones here.) Dr. Richards addressed a good variety of objections that people might raise to the claim that “the brain is a computer” towards the end of his article. I will raise an argument here that I feel lies at the heart of this discussion, not addressed in the post and is often overlooked or dismissed as non-existent. The reason I think it is important to discuss this question (and/or objection) in detail is that I strongly believe it affects how we study the brain. Describing the brain like a computer allows for a useful computational picture that has been very successful in the fields of neuroscience and artificial intelligence (specifically the sub-area of machine learning over the recent past). However as an engineer interested in building intelligent systems, I think this view of the brain as a computer is beginning to hurt us in our ability to engineer systems that can efficiently emulate their capabilities over a wide range of tasks.

the bolded part above (the final sentence, on how the computer view hurts our ability to engineer such systems) is 'why the author has a problem with the computer metaphor', since you seem so confused by that.

There are a few minor/major problems (depends on how you look at it) in the definitions used to get to the conclusion that the brain is in fact a computer. Using the definitions put forward in the blog post

(1) an algorithm is anything a Turing machine can do, (2) computable functions are defined as those functions that we have algorithms for, (3) a computer is anything which physically implements algorithms in order to solve computable functions.

these are the definitions the author is using, not ones he made up but ones he got from one of the articles he is arguing against. note the similarities with the definitions on https://en.wikipedia.org/wiki/Algorithm :

One informal definition is "a set of rules that precisely defines a sequence of operations",[11] which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure[12] or cook-book recipe.[13] In general, a program is an algorithm only if it stops eventually[14]—even though infinite loops may sometimes prove desirable. Boolos & Jeffrey (1974, 1999) define an algorithm to be a set of instructions for determining an output, given explicitly, in a form that can be followed by either a computing machine, or a human who could only carry out specific elementary operations on symbols.[15]

note the triviality criticism of the informal definition that this author previously addressed, and the 'human who could only carry out specific elementary operations on symbols' is a reference to Turing Machines and the Chinese Room thought experiment, both of which i recommend reading about.

The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.

this is still a matter under academic discussion, there are not widely agreed on definitions of these terms that suit all uses in all fields.

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

algorithms can be implemented by humans, intentionally or not; the hardware is irrelevant to the discussion of Turing machines since they are an idealized abstraction of computing.

enough wikipedia now back to the article

Number (3) is the one we will focus on for it is vitally important. To complete those definitions, I will go ahead and introduce from the same blog post, an intuitive definition of algorithm — “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output (e.g. an answer to yes/no integer roots) for a specific input (e.g. a specific polynomial like 6x³yz⁴ + 4y²z + z — 9).” And the more technical definition of algorithm in (1) as “An algorithm is anything that a Turing machine can do.” This equivalence of course arises since attempts to achieve the intuitive definition about following instructions mechanically can always be reduced to a Turing machine. The author of the post recognizes that under this definition, any physical system can be said to be ‘computing’ its time evolution function and the meaning of the word loses its importance/significance. In order to avoid that, he subscribes to Wittgenstein and suggests that since when we think about modern day computers, we are thinking about machines like our laptops, desktops, phones which achieve extremely powerful and useful computation, we should hence restrict the word computers to these types of systems (hint: the problem is right here!!). Since our brains also achieve the same, we find that our brains are (uber) computers as well (I might be simplifying/shortening the argument, but I believe I have captured its essence and will once again recommend reading the complete article here.) Furthermore, he points out that our modern day computers and brains have the capability of being Turing complete, but are not of course due to physical constraints on memory, time and energy expenditure. And if we do not have a problem with calling our non-Turing complete, von Neumann architecture machines as computers, then we should not let the physical constraints that prevent the brain from being Turing complete stop us from calling it a computer as well. I agree that we should not restrict ourselves to only referring to Turing complete systems as computers, for that is far too restrictive. The term ‘computer’ does have a popular usage and meaning in everyday life that is independent of whether or not the system is Turing complete. It makes a lot more sense to instead refer to those computers that are in fact Turing complete as ‘Turing complete computers’.

this explains the author's reasoning for their definitions further; he is not making these up, these are the common definitions in use in the discourse.

[-] Tomorrow_Farewell@hexbear.net 2 points 5 months ago* (last edited 5 months ago)

so you aren't going to read the article then.
No Investigation, No Right to Speak.

I have investigated the parts that you have quoted, and that is what I am weighing in on. They are self-contained enough for me to weigh in, unless the author just redefines the words elsewhere, in which case not quoting those parts as well just means that you are deliberately posting misleading quotes.

I strongly advise reading the entire article

From the parts already quoted, it seems that the author is clueless and is willing to make blatantly faulty arguments. The fact that you opted to quote those parts of the article and not the others indicates to me that the rest of the article is not better in this regard.

and furthermore reading about what a Turing Machine actually is and what it can be used to analyze

Firstly, the term 'Turing machine' did not come up in this particular chain of comments up to this point. The author literally never referred to it. Why is it suddenly relevant?
Secondly, what exactly do you think I, as a person with a background in mathematics, am missing in this regard that a person who says 'Boolean logic' is not?

(1) an algorithm is anything a Turing machine can do

This contradicts the previous two definitions the author gave.

(2) computable functions are defined as those functions that we have algorithms for

Whether we know of such an algorithm is actually irrelevant. For a function to be computable, such an algorithm merely has to exist, even if it is undiscovered by anybody. A computable function also has to be N->N.

(3) a computer is anything which physically implements algorithms in order to solve computable functions

That's a deliberately narrow definition of what a computer is, meaning that the author is not actually addressing the topic of the computer analogy in general, but just a subtopic with these assumptions in mind.

To complete those definitions, I will go ahead and introduce from the same blog post, an intuitive definition of algorithm — “an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output (e.g. an answer to yes/no integer roots) for a specific input (e.g. a specific polynomial like 6x³yz⁴ + 4y²z + z — 9).” And the more technical definition of algorithm in (1) as “An algorithm is anything that a Turing machine can do.”

This directly contradicts the author's point (1), where they give a different, non-equivalent definition of what an algorithm is.
So, which is it?

This equivalence of course arises since attempts to achieve the intuitive definition about following instructions mechanically can always be reduced to a Turing machine

This is obvious nonsense. Not only are those definitions not equivalent, the author is also not actually defining what it means for instructions to be followed 'mechanically'.

The author of the post recognizes that under this definition, any physical system can be said to be ‘computing’ its time evolution function and the meaning of the word loses its importance/significance

Does the author also consider the word 'time' to have a meaning without 'importance'/'significance'?

In order to avoid that, he subscribes to Wittgenstein and suggests that since when we think about modern day computers, we are thinking about machines like our laptops, desktops, phones which achieve extremely powerful and useful computation, we should hence restrict the word computers to these types of systems (hint: the problem is right here!!)

I have already addressed this.

At this point, I am not willing to waste my time on the parts that you have not highlighted. The author is a boy who cried 'wolf!' at this point.

EDIT: you seem to have added a bunch to your previous comment, without clearly pointing out your edits.
I will address one thing.

note the triviality criticism of the informal definition that this author previously addressed, and the 'human who could only carry out specific elementary operations on symbols' is a reference to Turing Machines and the Chinese Room thought experiment, both of which i recommend reading about.

The author seems to be clueless about what a Turing machine is, and the Chinese Room argument is also silly, and can be summarised as either 'but I can't imagine somebody making a computer that, in some inefficient manner, does introspection, even though introspection is a very common thing in software' or 'but what I think we should call "computers" are things that I think do not have qualia, therefore we can't call things with qualia "computers"'. Literally nothing is preventing something that does introspection in some capacity from being a computer.
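For what it's worth, the introspection point is trivially demonstrable in ordinary software. A toy sketch (my own example, nothing specific to the thread): a Python program examining its own source and metadata while doing its job.

```python
import inspect

def double(x):
    """Double a number."""
    return x * 2

print(double(21))                        # 42 -- the task itself
print(inspect.getsource(double))         # the program reading its own code
print(double.__name__, double.__doc__)   # and its own metadata
```

Run as a script, the function both performs its task and reports on its own definition; nothing about doing introspection removes a system from the class of computers.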

[-] Frank@hexbear.net 2 points 5 months ago

I've heard people saying that the Chinese Room is nonsense because it's not actually possible, at least for thought experiment purposes, to create a complete set of rules for verbal communication. There's always a lot of ambiguity that needs to be weighed and addressed. The guy in the room would have to be making decisions about interpretation and intent. He'd have to have theory of mind.

[-] Tomorrow_Farewell@hexbear.net 2 points 5 months ago

The Chinese Room argument for any sort of thing that people would commonly call a 'computer' to not be able to have an understanding is either rooted in them just engaging in endless goalpost movement for what it means to 'understand' something (in which case this is obviously silly), or in the fact that they assume that only things with nervous systems can have qualia, and that understanding belongs to qualia (in which case this is something that can be concluded without the Chinese Room argument in the first place).

In any case, the Chinese Room is not really relevant to the topic of whether considering brains to be computers is somehow erroneous.

[-] Frank@hexbear.net 1 points 5 months ago

In any case, the Chinese Room is not really relevant to the topic of whether considering brains to be computers is somehow erroneous.

My understanding was that the point of the chinese room was that a deterministic system with a perfect set of rules could produce the illusion of consciousness without ever understanding what it was doing? Is that not analogous to our discussion?

[-] Tomorrow_Farewell@hexbear.net 1 points 5 months ago

At the very least some people are trying to use the Chinese Room thought experiment as an argument against the brain-as-computer analogy/framework.

[-] Frank@hexbear.net 1 points 5 months ago

Is it fair to say we both think the chinese room is a poor thought experiment that doesn't actually do what it claims to do?

[-] Tomorrow_Farewell@hexbear.net 2 points 5 months ago

I suppose so. At least when it comes to the Chinese Room being used as an argument against brain-as-computer analogies/frameworks.

[-] TraumaDumpling@hexbear.net 1 points 5 months ago* (last edited 5 months ago)

I have investigated the parts that you have quoted, and that is what I am weighing in on. They are self-contained enough for me to weigh in, unless the author just redefines the words elsewhere, in which case not quoting those parts as well just means that you are deliberately posting misleading quotes.

and yet you ignore the definitions the author provided

Firstly, the term 'Turing machine' did not come up in this particular chain of comments up to this point. The author literally never referred to it. Why is it suddenly relevant? Secondly, what exactly do you think I, as a person with a background in mathematics, am missing in this regard that a person who says 'Boolean logic' is not?

Turing machines are integral to discussions about computing, algorithms and human consciousness. The author uses the phrase 'turing complete' several times in the article (even in parts i have quoted) and makes numerous subtle references to the ideas, as i would expect from someone familiar with academic discourse on the subject. Focusing on a semantic/jargon faux pas does not hide your apparent ignorance of the subject.

This contradicts the previous two definitions the author gave.

there was no previous definition; this is the first definition given in the article. i am not quote-mining in sequence, i am finding the relevant parts so that you may understand what i am saying better. Furthermore, since you seem to miss this fact many times, the author is using the definitions put forward in another article by someone claiming that the brain is a computer and that it is not a metaphor. By refusing to read the entire article you only demonstrate your lack of understanding. Was your response written by an LLM?

Whether we know of such an algorithm is actually irrelevant. For a function to be computable, such an algorithm merely has to exist, even if it is undiscovered by anybody. A computable function also has to be N->N.

'we have' in this case is equivalent to 'exists', you are over-focusing on semantics without addressing the point.

That's a deliberately narrow definition of what a computer is, meaning that the author is not actually addressing the topic of the computer analogy in general, but just a subtopic with these assumptions in mind.

i have no idea what you mean by this, according to wikipedia: "A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation)." which is identical in content to the author's definition.

This directly contradicts the author's point (1), where they give a different, non-equivalent definition of what an algorithm is. So, which is it?

The point that the author is making here is that the definitions are functionally equivalent; one is the result of the implications of the other.

This is obvious nonsense. Not only are those definitions not equivalent, the author is also not actually defining what it means for instructions to be followed 'mechanically'.

'mechanically' just means 'following a set of pre-determined rules', as in a turing machine or chinese room. you would know this if you were familiar with either. There is absolutely no way you have a background in mathematics without knowing this.
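
to make this concrete, here is a toy sketch of rule-following in the chinese room sense (my own hypothetical example, not from either article): a rule book consulted mechanically, with zero understanding of the symbols being matched.

```python
# a toy 'chinese room': the operator matches symbols against a
# pre-determined rule book, understanding nothing. the rules are
# hypothetical and purely illustrative.

RULE_BOOK = {
    "你好": "你好！",           # a greeting -> a greeting back
    "你是谁？": "一个朋友。",    # 'who are you?' -> 'a friend.'
}

def operator(incoming: str) -> str:
    # follow the pre-determined rules mechanically; no insight required
    return RULE_BOOK.get(incoming, "请再说一遍。")  # 'please say that again.'

print(operator("你好"))  # prints 你好！
```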

Does the author also consider the word 'time' to have a meaning without 'importance'/'significance'?

the author referred to here is not the author of the article i am quoting, but the author of the article it is in response to.

I have already addressed this.

you have not. this is the author of the pro-brain-as-computer article restricting his own definitions, which the article i am quoting argues against using those same definitions. I am not sure you understood anything in the article; you seem not to understand that the author of the article i quote was writing against another article, using his opponent's own definitions (which i have shown to be valid anyway)

in short you are an illiterate pompous ass, who lies about their credentials and expertise, who is incapable of interpreting any nuance or meaning from text, chasing surface level ghost interpretations and presenting it as a Gotcha. I am done with this conversation.

[-] Tomorrow_Farewell@hexbear.net 2 points 5 months ago* (last edited 5 months ago)

and yet you ignore the definitions the author provided

Which definitions am I ignoring? I have quite literally addressed the parts where the author gives definitions.
The author is really bad at actually providing definitions. They give three different ones for what an 'algorithm' is, but can't give a single one for what the expression 'mechanically following instructions' means.

Turing machines are integral to discussions about computing, algorithms and human consciousness

They are irrelevant to the parts that you quoted prior to bringing up Turing machines.

The author uses the phrase 'turing complete' several times in the article

Not in any part that you quoted up to that point.

even in parts i have quoted

I looked for those with ctrl+f. There is no mention of Turing machines or of Turing completeness up to the relevant point.

and makes numerous subtle references to the ideas

Expecting the reader of the article to be a mind reader is kind of wild.
In any case, the author is not making any references to Turing machines and Turing completeness in the parts you quoted up to the relevant point.
Also, the author seems to not actually use the term 'Turing machine' to prove any sort of point in the parts that you quoted and highlighted.

focusing on a semantic/jargon faux pas does not hide your apparent ignorance of the subject

I bring up a bunch of issues with what the author says. Pretending that my only issue is the author fumbling their use of terminology once just indicates that, contrary to your claims, my criticism is not addressed.

there was no previous definition

This is a lie. Here's a definition that is given in the parts that you quoted previously:

(2) an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input

I'm going to note that this is not the first time I'm catching you being dishonest here.

Furthermore, since you seem to miss this fact many times, the author is using the definitions put forward in another article by someone claiming that the brain is a computer

Okay, I went and found the articles that they are talking about (hyperlinked text is not easily distinguishable by me on that site). Turns out, the author of the article that you are defending is deliberately misunderstanding that other article. Specifically, this part is bad:

In order to avoid that, he subscribes to Wittgenstein and suggests that since when we think about modern day computers, we are thinking about machines like our laptops, desktops, phones which achieve extremely powerful and useful computation, we should hence restrict the word computers to these type of systems (hint: the problem is right here!!)

Here's a relevant quote from the original article:

As such, these machines that are now ubiquitous in our lives are a much more powerful form of computer than a stone or a snowflake, which are limited to computing only the functions of physics that apply to their movement

Also, I'd argue that the relevant definitions in the original article are questionable, if not outright bad.

Onto the rest of your reply.

and that it is not a metaphor

So far, I don't see any good arguments against that put forth by the author you are defending.

By refusing to read the entire article you only demonstrate your lack of understanding

I came here initially to address a particular argument. Unless the author redefines the relevant words elsewhere, the rest of the article is irrelevant to my criticism of that argument.

Was your response written by an LLM?

Cute.

'we have' in this case is equivalent to 'exists'

I do not trust the author to not blunder that part, especially considering that they are forgetting that computable functions have to be N->N.
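
For reference, the textbook definition I have in mind (standard computability-theory fare, not something taken from either article):

```latex
% Standard definition of a computable (total recursive) function.
A function $f\colon \mathbb{N} \to \mathbb{N}$ is \emph{computable} if there
exists a Turing machine $M$ such that, for every $n \in \mathbb{N}$, $M$
halts on input $n$ with $f(n)$ as its output. Note the existential
quantifier: nobody needs to have discovered $M$ for $f$ to be computable.
```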

i have no idea what you mean by this, according to wikipedia: "A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation)." which is identical in content to the author's definition

'The English Wikipedia gives this "definition", so it must be the only definition and/or understanding in this relevant context' is not a good argument.
I'm going to admit that I did make a blunder regarding my criticism of their point (3), at least internally. We can consider me wrong on that point. In any case, sure, let's go with the definition that the author uses. Have they provided any sort of argument against it? Because so far, I haven't seen any sort of good basis for their position.

The point that the author is making here is that the definitions are functionally equivalent; one is the result of the implications of the other

They are not equivalent. If something is an algorithm by one of those 'definitions' (both of them are not good), then it might not be an algorithm by the other definition.
The author is just plain wrong there.

'mechanically' just means 'following a set of pre-determined rules'

Care to cite where the author says that? Or is this your own conjecture?
In any case, please, tell me how your brain can operate in contradiction to the laws of physics. I'll wait to see how a brain can work without following 'a set of pre-determined rules'.

as in a turing machine or chinese room

Or in any kind of other system, judging by the 'definition'.

you would know this if you were familiar with either

Cute.

you have not. this is the author of the pro-brain-as-computer article restricting his own definitions

You mean this part?

As I argued above, I think it’s reasonable to restrict their usage to machines, like the brain, that not only solve the functions of physics, but a much larger array of computable functions, potentially even all of them (assuming the space of possible brains is Turing complete)

Or the part where, again, the same author literally calls stones and snowflakes 'computers' (which I am going to back as a reasonable use of the word)?

I am not sure you understood anything in the article

I was addressing particular arguments. Again, unless the author redefines the words elsewhere in the article, the rest of the article has no bearing on my criticism.

in short you are an illiterate pompous ass incapable of interpreting any nuance or meaning from text

Cool. Now, please, tell me how my initial claim, 'this is a rather silly argument', is bad, and how the rest of the article is relevant. Enlighten me: in what way does saying that the particular argument you quoted is bad, when you have failed to provide any context that is significant to my criticism, make me 'illiterate'?

In case you still don't understand, 'read the entire rest of the article' is not a good refutation of the claim 'this particular argument is bad' when the rest of the article does not actually redefine any of the relevant words (in a way that is not self-contradictory).

In return, I can conclude that you are very defensive of the notion that brains somehow don't operate by the laws of physics, and it's all just magic, and can't actually deal with criticism of the arguments for your position.
