[–] yeahiknow3@lemmings.world 3 points 1 day ago* (last edited 1 day ago) (2 children)

Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.

[–] postmateDumbass@lemmy.world -1 points 1 day ago (1 children)

Reasoning can be approximated well enough with matrix math and filtering algorithms.

It can fly drones and dodge wrenches.

The AGI that escapes won't be the ideal philosopher-king; it will be the sociopathic teenage rebel.
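
For what it's worth, here is a minimal sketch of the kind of "matrix math and filter algorithms" being invoked: a one-dimensional Kalman filter of the sort used in drone state estimation. Everything in it (model, constants, measurements) is illustrative, not a claim about any particular system.

```python
# A 1-D Kalman filter: the canonical "matrix math + filtering" workhorse
# behind drone state estimation. All constants and data are illustrative.
import numpy as np

def kalman_step(x, P, z, q=0.01, r=0.5):
    """One predict/update cycle for a constant-velocity tracker.

    x: state estimate [position, velocity]; P: 2x2 covariance;
    z: noisy position measurement; q, r: assumed noise levels.
    """
    F = np.array([[1.0, 1.0],
                  [0.0, 1.0]])            # constant-velocity motion model
    H = np.array([[1.0, 0.0]])            # we observe position only
    x = F @ x                             # predict state forward
    P = F @ P @ F.T + q * np.eye(2)       # grow uncertainty
    y = z - H @ x                         # innovation (surprise)
    S = H @ P @ H.T + r                   # innovation covariance
    K = P @ H.T / S                       # Kalman gain
    x = x + (K * y).ravel()               # correct the estimate
    P = (np.eye(2) - K @ H) @ P           # shrink uncertainty
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [1.1, 1.9, 3.2, 3.8]:            # noisy positions of a target
    x, P = kalman_step(x, P, np.array([z]))
print(x)                                  # estimated [position, velocity]
```

Whether iterating updates like this counts as "reasoning" is exactly what the rest of the thread disputes.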

[–] yeahiknow3@lemmings.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

Okay, we can create the illusion of thought by executing complicated instructions. But there's still a difference between a machine that does what it's told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don't know how to make one at all, crazy or not.

[–] communist@lemmy.frozeninferno.xyz 0 points 1 day ago (1 children)

Being able to decide its own goals is a completely unimportant aspect of the problem.

Why do you care?

[–] yeahiknow3@lemmings.world 3 points 1 day ago* (last edited 21 hours ago) (1 children)

The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort. We don't need to make an AGI, and I personally don't care. The question was: can we? The answer is no.

[–] communist@lemmy.frozeninferno.xyz 1 points 1 day ago (1 children)

You've arbitrarily defined an AGI by its consciousness instead of its capabilities.

[–] yeahiknow3@lemmings.world 1 points 21 hours ago* (last edited 21 hours ago) (1 children)

Your definition of AGI as doing “jobs” is arbitrary, since the concept of “a job” is made up; literally anything can count as economic labor.

For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel's first incompleteness theorem.

To quote Gödel himself: “We cannot mechanize all of our intuitions.”

Alan Turing drew the same conclusion a few years later with the halting problem.

In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.
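
Turing's argument can be sketched in a few lines. The `halts` oracle below is hypothetical; the point of the diagonal argument is that no such total function can exist:

```python
# Sketch of Turing's diagonal argument. Suppose, for contradiction,
# that a perfect halting oracle existed (`halts` is hypothetical):
def halts(func) -> bool:
    """Hypothetically returns True iff func() eventually halts."""
    ...

def g():
    if halts(g):        # if the oracle predicts that g halts...
        while True:     # ...g loops forever instead
            pass
    # ...and if the oracle predicts that g loops, g halts immediately

# Either answer halts(g) gives is wrong, so no such oracle exists:
# no program can decide halting for all programs.
```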

[–] communist@lemmy.frozeninferno.xyz 1 points 11 hours ago (1 children)

Jobs are not arbitrary; they're tasks humans want another human to accomplish, and an AGI could accomplish any of them that a human can.

For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel's first incompleteness theorem

Why do you assume we have to? Even a shitty current AI can do a decent job at this if you fact-check it, better than a lot of modern politicians. Feed it the entire internet and let it figure out what humans value; why would we do this manually?

In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

Humans are conscious and have gotten no closer to doing this, ever. I see no reason to believe consciousness will help at all with this matter.
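
As a toy illustration of the "figure out what humans value from data" idea (a sketch only, with made-up features and labels; actual value learning remains an open problem):

```python
# Toy sketch: fit an "approval" score from human-labeled examples.
# The features, labels, and names below are all made up; this shows
# only that approximating preferences is an ordinary statistical fit.
import numpy as np

# Hypothetical action features, labeled 1 ("humans approve") or 0.
X = np.array([[1.0, 0.2], [0.9, 0.1], [0.1, 0.9], [0.2, 1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w = np.zeros(2)
for _ in range(1000):                       # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted approval
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient descent step

def approval(action_features):
    """A statistical guess at human approval, nothing more."""
    return 1.0 / (1.0 + np.exp(-(action_features @ w)))
```

Whether a fit like this captures values, as opposed to approximating a sample of opinions, is the question the two sides keep circling.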

[–] yeahiknow3@lemmings.world 1 points 7 hours ago* (last edited 6 hours ago) (1 children)

Feed it the entire internet and let it figure out what humans value

There are theorems in mathematical logic that tell us this is literally impossible. Also common sense.

And LLMs are notoriously stupid. Why would you offer them as an example?

I keep coming back to this: what we were discussing in this thread is the creation of an actual mind, not a zombie illusion. You're welcome to make your half-assed, malfunctioning zombie LLM machine to do menial or tedious uncreative statistical tasks. I'm not against it. That's just not what interests me.

Sooner or later humans will create real artificial minds. Right now, though, we don’t know how to do that. Oh well.

https://introtcs.org/public/index.html
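
For reference, a standard formulation of the theorem being invoked (whether it says anything about human values is precisely what the two sides here dispute):

$$\text{If } T \text{ is consistent, effectively axiomatizable, and interprets elementary arithmetic, then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T.$$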

[–] communist@lemmy.frozeninferno.xyz 1 points 7 hours ago (1 children)

That's just because there is no consistent set of axioms for human intuition. Obviously the best you can do is approximate, and I see no reason you can't approximate this. Feel free to give me proof to the contrary, but all you've done so far is appeal to authority without explaining your arguments.

[–] yeahiknow3@lemmings.world 1 points 6 hours ago* (last edited 6 hours ago) (1 children)

Why do you talk about shit you don’t understand with such utter confidence? Being a fucking moron has to be the chillest way to go through the world.

[–] communist@lemmy.frozeninferno.xyz 1 points 6 hours ago (1 children)

You don't understand the claims you're making if you can't explain them. Try again, this time actually explaining yourself rather than just going "some guy said I'm right." You keep doing that without engaging with the discussion, and you keep assuming the guy verified your claim when he actually verified an irrelevant one.

[–] yeahiknow3@lemmings.world 1 points 6 hours ago* (last edited 6 hours ago) (1 children)

My explanations were succinct and simple. If they’re still over your head, sadly I lack the talent to simplify the science and math any further.

Maybe try reading a book?

I have; I simply disagree with your conclusions.

[–] communist@lemmy.frozeninferno.xyz -2 points 1 day ago* (last edited 1 day ago) (1 children)

A philosophical zombie still gets its work done; I fundamentally disagree that this distinction is economically meaningful. A simulation of reasoning isn't meaningfully different.

[–] yeahiknow3@lemmings.world 3 points 1 day ago* (last edited 1 day ago) (1 children)

That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).

Calculators, LLMs, and toasters can't think or understand or reason by definition, because they can only do what they're told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel's incompleteness theorems. We can't even axiomatize mathematics, let alone human intuitions about the world at large. Even if it's possible, we simply don't know how.

[–] communist@lemmy.frozeninferno.xyz 0 points 1 day ago* (last edited 1 day ago) (1 children)

If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity, all while you go "ah, but it's not really reasoning."

What difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say "but who would want yet another machine that just does what we say?"

Your point reads like complete pseudointellectual nonsense to me. How is that economically valuable? Why are you asserting that most people care about that and not the part where it cures a disease when we ask it to?

[–] yeahiknow3@lemmings.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.

The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)

What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.

In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.

Hope that helps!

[–] communist@lemmy.frozeninferno.xyz 0 points 1 day ago* (last edited 1 day ago) (1 children)

If there's no way to tell the illusion from reality, tell me why it matters functionally at all.

What difference does true thought make compared to the illusion?

Also, AGI means something that can do all economically important labor. That has nothing to do with what you said, and yours is not a common definition.

[–] yeahiknow3@lemmings.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

Matter to whom?

We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate question).

Most people can’t identify a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter?” It would be weird to enter a mathematical forum and ask “Why does it matter?”

Whether we can build an AGI is just a curious question, whose answer for now is No.

P.S. Defining AGI in economic terms is like defining a CPU in economic terms: pointless. What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.

That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.

[–] communist@lemmy.frozeninferno.xyz 1 points 1 day ago* (last edited 1 day ago) (2 children)

Most people can’t identify a correct mathematical equation from an incorrect one

This is irrelevant; we're talking about something where nobody can tell the difference, not where it's difficult.

What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.

It means a job. That's obviously not a job and obviously not what is meant; an interesting move from someone who just appealed to "what most people mean when they say."

That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.

It just has to be at least as good as a human at manipulating the world to achieve its goals. I don't know of any other definition of AGI that factors in actually meaningful tasks.

An AGI should be able to do almost any task a human can do at a computer. It doesn't have to be conscious, and I have no idea why or where consciousness factors into the equation.

[–] yeahiknow3@lemmings.world 1 points 21 hours ago (1 children)

we're talking about something where nobody can tell the difference, not where it's difficult.

You’re missing the point. The existence of black holes was predicted long before anyone had any idea how to identify them. For many years, it was impossible. Does that mean black holes don’t matter? That we shouldn’t have contemplated their existence?

Seriously though, I’m out.

[–] communist@lemmy.frozeninferno.xyz 1 points 11 hours ago* (last edited 11 hours ago) (1 children)

The existence of black holes has functional consequences in physics; the existence of consciousness matters only to our subjective experience, not to our capabilities.

If I'm wrong, list a task that a conscious being can do that an unconscious one is unable to accomplish.

[–] yeahiknow3@lemmings.world 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

If I'm wrong, list a task that a conscious being can do that an unconscious one is unable to accomplish.

These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.

There are, in fact, very few interesting or important things that a non-thinking entity can do. It can make toast. It can do calculations. It can design highways. It can cure cancer. It can probably fold clothes. None of this shit is particularly exciting. Just more machines doing what they’re told. We want a machine that can tell us what to do, instead. That’s AGI. We don’t know how to build such a machine, at least given our current understanding of mathematical logic, theoretical computer science, and human cognition.

[–] communist@lemmy.frozeninferno.xyz 1 points 6 hours ago* (last edited 6 hours ago)

These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.

These are not tasks, except maybe philosophizing and discovering, which even current models can do... heck, Google is using old, shitty ones to do it:

https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

I said a task, not a feeling. A task is a manipulation of the world to achieve a goal, not something vague and undefinable like love.

We want a machine that can tell us what to do, instead.

There's no such thing; there's no objective right answer to this in the first place. No conscious being we know of can do this, so why would a conscious machine be able to? This is just you asking for the impossible; consciousness would not help even the tiniest bit with this problem. You have to say "what to do to achieve X" for it to have meaning, which these machines could do without solving the hard problem of consciousness at all.

Yet again you fail to name one valuable aspect of solving consciousness. You keep saying we need the hard problem of consciousness solved for AGI, but you can't name even one way in which it provides a functional improvement to anything.

[–] yeahiknow3@lemmings.world 1 points 21 hours ago* (last edited 20 hours ago) (1 children)

Economics is descriptive, not prescriptive. The whole concept of “a job” is made up and arbitrary.

You say an AGI would need to do everything a human can. Great, here are some things that humans do: love, think, contemplate, reflect, regret, aspire, etc. These require consciousness.

Also, as you conveniently ignored, philosophy, politics, and science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”

[–] communist@lemmy.frozeninferno.xyz 1 points 11 hours ago (1 children)

A job is a task one human wants another to accomplish; it is not arbitrary at all.

philosophy, politics, and science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

I don't see why they do; a philosophical zombie could do them, so why not an unconscious AI? AlphaEvolve is already making new science; I see no reason an unconscious being with the ability to manipulate the world and verify results couldn't do these things.

Plus, if a machine does what it’s told, then someone would be telling it what to do. That’s a job that a machine cannot do. But most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do, unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”

Yes, but you can give it large, vague goals like "empower humanity, do what we say, and minimize harm," and it will still carry them out. So what does it matter?

[–] yeahiknow3@lemmings.world 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics? How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

[–] communist@lemmy.frozeninferno.xyz 1 points 6 hours ago (1 children)

Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

These cannot be grasped through subjective experience, and I would say nothing can possibly achieve this, not any human at all. The best we can do is poll humanity and go by approximations, which I believe is best handled by something automatic. Humans can't answer these questions in the first place; why should I assume something without subjective experience would do any worse?

When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics?

Because this is unpopular, and there is plenty online saying not to do it... Do you think humans are immune to this? When has consciousness ever prevented such an outcome?

How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

We don't, but we don't know that about conscious beings either, so there's still no stated advantage to consciousness.

[–] yeahiknow3@lemmings.world 1 points 6 hours ago* (last edited 6 hours ago) (1 children)

Oh my god. So the machine won't do terrible, immoral things because they are unpopular on the internet. Well, ladies and gentlemen, I rest my case.

[–] communist@lemmy.frozeninferno.xyz 1 points 6 hours ago* (last edited 6 hours ago)

No, the machine will, and so would a conscious one; you misunderstand. This isn't an area where a conscious machine wins.

Tell me, if consciousness prevents this, why did humans do it?