submitted 1 year ago by yuunikki@lemmy.world to c/asklemmy@lemmy.ml

Climate is fucked, animals keep going extinct, our money will be worth nothing in the coming years. What motivation do I even have to keep going? The world is run and basically owned by corrupt rich people, there's poverty, war, etc. It makes me sick to my stomach the way the world is. So I ask, why bother anymore?

[-] tegs_terry@feddit.uk 2 points 1 year ago

What do you mean by alignment?

AI alignment is a field that attempts to solve the problem of "how do you stop something with the ability to deceive, plan ahead, seek and maintain power, and parallelize itself from just doing that to everything".

https://aisafety.info/

AI alignment is "the problem of building machines which faithfully try to do what we want them to do". An AI is aligned if its actual goals (what it's "trying to do") are close enough to the goals intended by its programmers, its users, or humanity in general. Otherwise, it's misaligned.

The concept of alignment is important because many goals are easy to state in human-language terms but difficult to specify in computer-language terms. As a current example, a self-driving car might have the human-language goal of "travel from point A to point B without crashing". "Crashing" makes sense to a human, but requires significant detail for a computer. "Touching an object" won't work, because the ground and any potential passengers are objects. "Damaging the vehicle" won't work, because there is a small amount of wear and tear caused by driving. All of these things must be carefully defined for the AI, and the closer those definitions come to the human understanding of "crash", the better the AI is "aligned" to the goal "don't crash". And even if you successfully do all of that, the resulting AI may still be misaligned, because no part of the human-language goal mentions roads or traffic laws.

Pushing this analogy to the extreme case of an artificial general intelligence (AGI), asking a powerful unaligned AGI to e.g. "eradicate cancer" could result in the solution "kill all humans". In the case of a self-driving car, if the first iteration makes mistakes, we can correct it, whereas for an AGI, the first unaligned deployment might be an existential risk.
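The goal-specification problem in the self-driving example can be sketched in Python. Everything here is made up for illustration: the state fields and the candidate "crash" predicates are hypothetical, each one a computer-language attempt at the human-language goal:

```python
# Each predicate is a candidate machine spec of the human goal "don't crash".
# States are toy dictionaries, not a real driving simulator.

def touched_object(state):
    # Naive spec: any contact counts as a crash.
    # Fails: the wheels always touch the road.
    return len(state["contacts"]) > 0

def damaged_vehicle(state):
    # Still too broad: ordinary wear and tear registers as "damage".
    return state["wear"] > 0

def crashed(state):
    # Closer to the human concept: sudden hard deceleration
    # combined with contact with something other than the road.
    return state["decel_g"] > 2.0 and any(c != "road" for c in state["contacts"])

normal_driving = {"contacts": ["road"], "wear": 0.01, "decel_g": 0.2}
collision = {"contacts": ["road", "pedestrian"], "wear": 0.5, "decel_g": 6.0}

assert touched_object(normal_driving)   # false positive: flags ordinary driving
assert damaged_vehicle(normal_driving)  # false positive: wear counts as damage
assert not crashed(normal_driving)      # refined spec accepts normal driving
assert crashed(collision)               # and still catches a real crash
```

Note that even the "refined" predicate is incomplete in exactly the way the comment describes: nothing in it mentions roads, traffic laws, or near-misses, so an optimizer judged only by this spec is still free to behave badly in ways the spec never anticipated.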

this post was submitted on 19 Jul 2023
163 points (81.7% liked)
