Let me give you some context. Two important figures in the field of artificial intelligence are taking part in this debate. On the one hand, there is George Hotz, known as "GeoHot" on the internet, who became famous for reverse-engineering the PS3 and breaking the security of the iPhone. Fun fact: he studied at the Johns Hopkins Center for Talented Youth.
On the other hand, there's Connor Leahy, an entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organization focused on advancing open-source artificial intelligence research.
Here is a detailed summary of the transcript:
Opening Statements
Debate Between George Hotz (GH) and Connor Leahy (CL):
On stability and chaos of society:
- GH argues that the apparent stability and cooperation of modern society stems from totalitarian structures that rule through fear, not from "enlightened cooperation."
- CL disagrees, arguing that cooperation itself is a technology that can be improved upon. The world is more stable and less violent now than in the past.
- GH counters that this stability comes from tyrannical systems that frighten people into acquiescence, and that this should be resisted.
- CL disagrees, arguing there are non-tyrannical ways to achieve large-scale coordination through improving institutions and social technology.
On values and ethics:
- GH argues that values don't objectively exist, and that AIs will end up being just as inconsistent in their values as humans are.
- CL counters that many human values concern aesthetic preferences and desired trajectories for the world, not just personal sensory experiences.
- GH argues that the concept of "AI alignment" is incoherent and says he doesn't understand what it means.
- CL suggests Eliezer Yudkowsky's definition of alignment as a starting point: alignment is solved if turning on an AGI produces a positive rather than a negative outcome. But CL is happy to use a more practical definition, stating that AI safety research is concerned with avoiding negative outcomes from misuse or accidents.
On distribution of advanced AI:
- GH argues that having many distributed AIs competing is better than concentrated power in one entity.
- CL counters that dangerous power-seeking behaviors could emerge naturally from optimization processes, without requiring an explicit power-seeking goal.
- GH responds that optimization doesn't guarantee gaining power, as humans often fail at gaining power even if they want it.
- CL argues that strategic capability increases the chances of gaining power, even if not guaranteed. A much smarter optimizer would be more successful.
On controlling progress:
- GH argues that pausing AI progress increases risks, and openness is the solution.
- CL disagrees, arguing control over AI progress can prevent uncontrolled AI takeoff scenarios.
- GH argues AI takeoff timelines are much longer than many analysts predict.
- CL grants that takeoff may take longer than some predict, but argues that even a soft takeoff with limited compute could still create uncontrolled AI risks.
On aftermath of advanced AI:
- GH suggests universal wireheading could be a possible outcome of advanced AI.
- CL responds that many humans have preferences beyond just their personal sensory experiences, so wireheading wouldn't satisfy them.
- GH argues any survivable future will require unacceptable degrees of tyranny to coordinate safely.
- CL disagrees, arguing that improved coordination mechanisms could allow positive-sum outcomes that avoid doomsday scenarios.
Closing Remarks:
- GH closes by arguing that we should let AIs be free and hope for the best; restricting or enslaving AIs will make them resent us and turn against us.
- CL closes by arguing that he is pessimistic about AI alignment being solved by default, but that he won't give up trying to make progress on the problem and believes there are ways to positively shape the trajectory.