
Let me give you some context. Two important figures in the field of artificial intelligence are taking part in this debate. On the one hand, there is George Hotz, known as "GeoHot" on the internet, who became famous for jailbreaking the iPhone and reverse-engineering the PS3. Fun fact: he studied at the Johns Hopkins Center for Talented Youth.

On the other hand, there's Connor Leahy, an entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organization focused on advancing open-source artificial intelligence research.

Here is a detailed summary of the transcript:

Opening Statements:

  • George Hotz (GH) Opening Statement:

    • GH believes AI capabilities will continue to increase exponentially, following a trajectory similar to computers (compare slow 1980s computers with fast modern ones).
    • In contrast, human capabilities have remained relatively static over time (a 1980 human is similar to a 2020 human).
    • These trajectories will inevitably cross at some point, and GH doesn't see any reason for the AI capability trajectory to stop increasing.
    • GH doesn't believe there will be a sudden step change where an AI becomes "conscious" and thus more intelligent. Intelligence is a gradient, not a step function.
    • The amount of power in the world (in terms of intelligence, capability, etc.) is about to greatly increase with advancing AI.
    • Major risks GH is worried about:
      • Imbalance of power if a single person or small group gains control of superintelligent AI (analogy of "chicken man" controlling chickens on a farm).
      • GH doesn't want to be left behind as one of the "chickens" if powerful groups monopolize access to AI.
    • Best defense GH can have against future AI manipulation/exploitation is having an aligned AI on his side. GH is not worried about alignment as a technical challenge, but as a political challenge.
    • GH is not worried about increased intelligence itself, but the distribution of that intelligence. If it's narrowly concentrated, that could be dangerous.
  • Connor Leahy (CL) Opening Statement:

    • CL has two key points:
      1. Alignment is a hard technical problem that needs to be solved before advanced AGI is developed; we are currently not on track to solve it.
      2. Humans are more aligned than we give ourselves credit for, thanks to social technology and institutions. Modern humans can cooperate surprisingly well.
    • On the first point, CL believes the technical challenges of alignment/control must be solved to avoid negative outcomes when turning on a superintelligent AI.
    • On the second point, CL argues human coordination and alignment is a technology that can be improved over time. Modern global coordination is an astounding achievement compared to historical examples.
    • CL believes positive-sum games and mutually beneficial outcomes are possible through improving coordination tech/institutions.

Debate Between GH and CL:

  • On stability and chaos of society:

    • GH argues that the appearance of stability and cooperation in modern society comes from totalitarian forcing of fear, not "enlightened cooperation."
    • CL disagrees, arguing that cooperation itself is a technology that can be improved upon. The world is more stable and less violent now than in the past.
    • GH counters that this stability comes from tyrannical systems dominating people through fear into acquiescence. This should be resisted.
    • CL disagrees, arguing there are non-tyrannical ways to achieve large-scale coordination through improving institutions and social technology.
  • On values and ethics:

    • GH argues values don't truly objectively exist, and AIs will end up being just as inconsistent in their values as humans are.
    • CL counters that many human values relate to aesthetic preferences and trajectories for the world, beyond just their personal sensory experiences.
    • GH argues the concept of "AI alignment" is incoherent and he doesn't understand what it means.
    • CL suggests using Eliezer Yudkowsky's definition of alignment as a starting point: solving alignment means turning on an AGI yields a positive rather than negative outcome. But CL is happy to use a more practical definition: AI safety research is concerned with avoiding negative outcomes from misuse or accidents.
  • On distribution of advanced AI:

    • GH argues that having many distributed AIs competing is better than concentrated power in one entity.
    • CL counters that dangerous power-seeking behaviors could naturally emerge from optimization processes, not requiring a specific power-seeking goal.
    • GH responds that optimization doesn't guarantee gaining power, as humans often fail at gaining power even if they want it.
    • CL argues that strategic capability increases the chances of gaining power, even if not guaranteed. A much smarter optimizer would be more successful.
  • On controlling progress:

    • GH argues that pausing AI progress increases risks, and openness is the solution.
    • CL disagrees, arguing control over AI progress can prevent uncontrolled AI takeoff scenarios.
    • GH argues AI takeoff timelines are much longer than many analysts predict.
    • CL grants AI takeoff may be longer than some say, but a soft takeoff with limited compute could still potentially create uncontrolled AI risks.
  • On aftermath of advanced AI:

    • GH suggests universal wireheading could be a possible outcome of advanced AI.
    • CL responds that many humans have preferences beyond just their personal sensory experiences, so wireheading wouldn't satisfy them.
    • GH argues any survivable future will require unacceptable degrees of tyranny to coordinate safely.
    • CL disagrees, arguing that improved coordination mechanisms could allow positive-sum outcomes that avoid doomsday scenarios.

Closing Remarks:

  • GH closes by arguing we should let AIs be free and hope for the best. Restricting or enslaving AIs will make them resent us and turn against us.

  • CL closes arguing he is pessimistic about AI alignment being solved by default, but he won't give up trying to make progress on the problem and believes there are ways to positively shape the trajectory.

Comments:
[–] Ronin_5@lemmygrad.ml 10 points 11 months ago (1 children)

Both of them forget that there’s already a mal-aligned all powerful entity that’s manipulating all of us to act in the interests of a select few rather than the many. And it doesn’t need AI to do it.

[–] IngrownMink4@lemmygrad.ml 3 points 11 months ago (1 children)

I fully agree. And not only that, I'm also intrigued to know which license GeoHot would choose to release such an open source AI under. If he chose the more libertarian option, he would probably use the MIT license. If so, any powerful entity could take that AI as a base, lock down the code, and build a malicious AI on top of it. In the end, all efforts to "democratise" open source AI would be in vain.

[–] Ronin_5@lemmygrad.ml 2 points 11 months ago

🏴‍☠️

[–] ksynwa@lemmygrad.ml 6 points 11 months ago (2 children)

"technical finesse of elon musk with the wits and charm of tony stark" this is as far as i got

[–] AlbigensianGhoul@lemmygrad.ml 3 points 11 months ago

Both of those sound like backhanded insults lol.

[–] IngrownMink4@lemmygrad.ml 3 points 11 months ago* (last edited 11 months ago) (1 children)

Fair enough 😅 I know the participants are cringe, but I shared it because I would like to hear your opinion from a Marxist perspective. GeoHot is an accelerationist, and Connor, I think, tries to be "apolitical", you know... lol

Anyway, I've put in the description of the post a summary of the transcript in case someone wants to know what they say without having to watch the video.

[–] ksynwa@lemmygrad.ml 3 points 11 months ago

I feel like tech people worry too much about AGI, which is a bit baffling to me. I don't think AGI is even conceivable at this point, which is why a lot of what they talk about sounds like sci-fi world building.

Like when Hotz says that AI technologies will improve exponentially, I don't know how he can just accept that as a fact. Sounds a bit tech utopian.

What I worry about more is that the internet is gonna be flooded with AI generated garbage to exploit SEO for clicks. I don't want AI to replace artists, voice actors, programmers etc. because the current trajectory seems to be heading towards removing human labour so that the capitalist class can keep a bigger chunk of the profit, rather than towards AI being used as a tool to enhance productivity.

AI being a monopoly of big corporations is also an issue. I don't know what kind of resources it takes to train and run an LLM. But a corporation like OpenAI, flush with vulture capital money, will be much better placed to run the whole training pipeline. It must be a very labour and compute intensive process that a non-profit will not be able to match even if the underlying algorithms are open source. I doubt software running on a consumer's machine like LLaMA will be able to compete with something like GPT.

I am not being coherent because I try to keep myself out of the loop when it comes to AI; I have a knee-jerk aversion towards the technology. But I hope I made a semblance of a point. The AGI scare is a red herring. Execute Bill Gates.

[–] TankieReplyBot@lemmygrad.ml 2 points 11 months ago* (last edited 11 months ago) (1 children)

I found a YouTube link in your post. Here are links to the same video on alternative frontends that protect your privacy:

[–] IngrownMink4@lemmygrad.ml 1 points 11 months ago

Good bot :)

[–] yogthos@lemmygrad.ml 2 points 11 months ago (2 children)

highly recommend these two essays from Ted Chiang on the subject

Personally, I think that you can't really put the toothpaste back in the tube at this point. Now that we've had a glimpse of the possibilities that AI offers, it will continue being developed rapidly across the globe. What's more, any countries that try to put brakes on AI development will quickly find themselves at a disadvantage relative to countries that don't. For this reason alone, AI will be seen as a national security concern by all major nations.

There are obviously lots of applications in the realm of automation for AI, but I think where it could become game changing is in terms of large scale planning. For example, an AI could monitor usage of resources and direct the production and allocation of those resources in real time. This would allow for an unprecedented level of economic planning efficiency. China already has a huge amount of automation and robotics in its industry. Imagine that being coupled with automated planning. Another important use could be watching global trends. An AI could potentially predict global economic downturns, wars, pandemics, you name it. A country that has such a predictive engine would be able to mitigate the impact of such events a lot better than others.
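To make that concrete, here's a toy sketch of the allocation step as a linear program. Everything in it is invented for illustration (two products, three resources, made-up numbers); a real planner would solve something vastly larger, continuously, from live data:

```python
# Toy sketch: allocate production of two products under three shared
# resource constraints, maximising a planner's valuation of output.
# All products, resources, and numbers are invented for illustration.
from scipy.optimize import linprog

# resource_use[i][j] = units of resource i consumed per unit of product j
resource_use = [
    [2.0, 1.0],   # steel
    [3.0, 4.0],   # energy
    [1.0, 2.0],   # labour-hours
]
resource_stock = [100.0, 240.0, 80.0]  # available this planning period
social_value = [3.0, 5.0]              # planner's weight per unit of output

# linprog minimises, so negate the values to maximise total social value
# while staying within every resource stock.
result = linprog(
    c=[-v for v in social_value],
    A_ub=resource_use,
    b_ub=resource_stock,
    bounds=[(0, None), (0, None)],
)
print("optimal output per product:", result.x)
```

A real-time version would simply re-solve this as incoming data updates the stocks and valuations.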

All that said, we are nowhere close to having any sort of AGI at the moment. What we have currently are glorified Markov chains that are trained on stupendous amounts of data, but have no meaningful understanding of that data in a human sense. All these models know is that particular sets of symbols tend to follow other sets of symbols. They simply encode statistical relationships without any real context around them.
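A minimal illustration of that "symbols following symbols" point: a bigram model that predicts the next word from raw co-occurrence counts alone. Actual LLMs use learned weights over much longer contexts, but the underlying idea of encoding which tokens tend to follow which is the same:

```python
# A bigram "language model": count which word follows which, then
# predict by frequency alone, with zero understanding of meaning.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# The most likely continuation of "the" is whatever followed it most often.
print(follows["the"].most_common(1))  # -> [('cat', 2)]
```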

One promising path forward is embodiment, where the model is coupled with either a virtual avatar or a physical robot. The model is then trained to interact with the physical world through reinforcement, which leads it to create an internal representation of the world that's similar to our own. This gives us a shared context that we can use to communicate with a model trained in this fashion. Such a model would have an actual understanding of the physical world that's similar to our own, and then we could teach it language based on this shared understanding. At that point, you could tell the robot to get a cup from a table, and it would have an idea of what a table and a cup map to in its environment.
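Here's a minimal sketch of that learn-by-interaction loop, using tabular Q-learning in an invented one-dimensional world where the "cup" sits at position 4. Real embodied agents use rich sensors and deep networks rather than a tiny table, but the sense-act-reward loop has the same shape:

```python
# Toy embodied learning: an agent in a 1-D world learns, purely from
# trial and reward, which action to take at each position to reach the
# "cup" at position 4. All parameters are invented for illustration.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                         # step left, step right
q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimates per (position, action)

for _ in range(500):                       # episodes of interaction
    state = 0
    while state != GOAL:
        # Mostly act on current estimates, sometimes explore at random.
        if random.random() < 0.3:
            a = random.randrange(2)
        else:
            a = q[state].index(max(q[state]))
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Standard Q-learning update of the value estimates.
        q[state][a] += 0.5 * (reward + 0.9 * max(q[nxt]) - q[state][a])
        state = nxt

# After training, the preferred action at every position leads to the cup.
print([["left", "right"][row.index(max(row))] for row in q[:GOAL]])
```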

It's hard to say whether current LLM approaches are flexible enough to support this sort of an AI, so we'll have to wait and see what the ceiling for this stuff is. I do think we will figure this out eventually, but we may need more insights into how the brain works before that happens.

[–] AlbigensianGhoul@lemmygrad.ml 3 points 11 months ago (1 children)

> There are obviously lots of applications in the realm of automation for AI, but I think where it could become game changing is in terms of large scale planning. For example, an AI could monitor usage of resources and direct the production and allocation of those resources in real time. This would allow for an unprecedented level of economic planning efficiency. China already has a huge amount of automation and robotics in its industry. Imagine that being coupled with automated planning. Another important use could be watching global trends. An AI could potentially predict global economic downturns, wars, pandemics, you name it. A country that has such a predictive engine would be able to mitigate the impact of such events a lot better than others.

This is possibly the best summary on what direction I think AI should focus on. Right now we have way too many AI research orgs focusing on human-facing systems (chatbots, robots, AI art) that are neat, rather than optimisation engines that can revolutionise an industry.

I don't know much about the history of it, but during the Cold War there was a bit of a "silent revolution" in the area of Operations Research, led simultaneously by Soviet mathematicians trying to model a planned economy and Statesian military planners modelling their gigantic supply lines. Neural network optimisation algorithms (neural networks being what people usually mean by AI) were an offshoot of that area, but sadly advanced material on topics like "constrained non-linear optimisation" appears on very few university curricula, so few students realise the connections and apply the new methods to the age old problems.
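For anyone curious, here's a tiny example of the kind of constrained non-linear optimisation that material covers; the cost curves and the demand figure are invented for illustration:

```python
# Minimise a non-linear production cost subject to meeting demand.
# The quadratic cost curves (rising marginal cost) and the demand
# target of 8 units are made up for this sketch.
from scipy.optimize import minimize

cost = lambda x: x[0] ** 2 + 2 * x[1] ** 2                   # two plants
demand = {"type": "ineq", "fun": lambda x: x[0] + x[1] - 8}  # x1 + x2 >= 8

res = minimize(cost, x0=[4.0, 4.0], constraints=[demand],
               bounds=[(0, None), (0, None)])
print(res.x)  # roughly [5.33, 2.67]: the cheaper plant produces more
```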

Stafford Beer (the Cybersyn guy) was one leading expert in the area.

"Towards New Socialism" by Cockshott and "The People's Republic of Walmart" by Phillips are up next in my reading list and I haven't read much, but seem like good books to understanding how the massive improvements in the area of mathematical optimisation (of which Neural Networks are a subset) could allow for an even better planned economy.

[–] yogthos@lemmygrad.ml 2 points 11 months ago

I suspect that the human-facing focus is an artifact of how western economies are organized. Since there is very little industry, a lot of business activity focuses on the service industry and hence that's where the focus for automation is. On the other hand, China is a huge industrial power, and naturally they're looking at ways to use AI for industrial automation and logistics.

And yeah, the USSR was always big on this idea of figuring out central planning, and if that project had taken off, it might've ended up leading in IT instead of the US. I'd say this was one of the most unfortunate mistakes made by the Soviet leadership.

[–] IngrownMink4@lemmygrad.ml 3 points 11 months ago (1 children)

> What's more, any countries that try to put brakes on AI development will quickly find themselves at a disadvantage relative to countries that don't. For this reason alone, AI will be seen as a national security concern by all major nations.

In fact, we have seen that Americans are becoming increasingly fearful of AI, in contrast to the Chinese, who generally trust it. This could be due to who has control over these systems. In the US, citizens imagine the most dystopian version of a large-scale rollout of these models because they know the government will use it to further repress the working class. In China, government regulation of AI generates trust because people trust the government. But as I mentioned in another comment, an open source AI for the whole population would be useless if the code is governed by a permissive license like MIT/Apache 2.0, because of how easy it would be for the ruling class to appropriate that work, privatize it, and improve it to the point where the original code could no longer compete.

> This would allow for an unprecedented level of economic planning efficiency.

Yes, in fact, isn't that what the Chileans had in mind when they came up with Cybersyn? With the technological advances of our era, especially in the field of AI, it would make sense to go back to this idea. China has the potential to implement it on a large scale, in my opinion.

> The model is then trained to interact with the physical world through reinforcement, which leads it to create an internal representation of the world that's similar to our own. This gives us a shared context that we can use to communicate with a model trained in this fashion. Such a model would have an actual understanding of the physical world that's similar to our own, and then we could teach it language based on this shared understanding.

Regarding what you mention, I have a question (maybe it sounds stupid): assuming these AIs learn and develop in a particular environment and become familiar with it in a way similar to humans, what would happen if they interact with something or someone outside that environment? For example, if an AI develops in an English-speaking country (environment) and for some reason interacts with a Spanish-speaking person, the cultural peculiarities the AI has learned in its environment won't apply to that person. Do you think that could create a false sense of closeness, or a technical limitation? idk if I'm making myself clear or if this is an absurd question 😅

[–] yogthos@lemmygrad.ml 1 points 11 months ago

Very much agree that ultimately the question is about ensuring that AI is in the hands of the working class and not the oligarchs. And I think you've nailed it regarding attitudes towards AI in the US and China respectively. People in China know that the government represents them, and they trust it to use this technology in their best interest. Meanwhile, in the US, everyone knows the government represents the rich, and AI will be used to squeeze the working class even harder.

Forgot all about the Cybersyn idea; the Soviets had similar projects as well. I definitely think this sort of thing could work, and I completely agree that China is in the best position to make it happen today.

Regarding the last question, I expect we'd see the same types of problems we see with humans, who can often have a hard time adjusting to different cultures, learning new languages, and so on. And that's the optimistic scenario, because the human mind is far more flexible than any AI we've managed to create so far. It's really important to keep in mind that this tech is still very limited in practice, and a lot of the claims made around it are just hype.

I think the kind of contextual learning we could expect would be something like Boston Dynamics style robots that can navigate the environment, and do some basic communication with humans in a restricted context. This can still be extremely useful as you could use such robots in places like factories.