this post was submitted on 27 Jan 2024
13 points (78.3% liked)

Futurology

1776 readers
200 users here now

founded 1 year ago
MODERATORS
top 9 comments
sorted by: hot top controversial new old
[–] sbv@sh.itjust.works 3 points 9 months ago

So they're saying ai is software?

Maybe Volkswagen will start using it in their emissions control systems.

[–] possiblylinux127@lemmy.zip 1 points 9 months ago

Great, we are all going to die

[–] mateomaui@reddthat.com 1 points 9 months ago (1 children)

Just… don’t hook it up to the defense grid.

[–] possiblylinux127@lemmy.zip 1 points 9 months ago (1 children)
[–] mateomaui@reddthat.com 2 points 9 months ago (1 children)

Alright, I’ll be out back digging the bomb shelter.

[–] possiblylinux127@lemmy.zip 1 points 9 months ago* (last edited 9 months ago) (1 children)

Its too late for that honestly

[–] mateomaui@reddthat.com 2 points 9 months ago

Alright, I’ll switch to digging holes for the family burial ground.

[–] Daxtron2@startrek.website 1 points 9 months ago (1 children)

LLM trained on adversarial data, behaves in an adversarial way. Shocking

[–] CanadaPlus@futurology.today 0 points 9 months ago

Yeah. For reference, they made a model with a back door, and then trained it to not respond in a backdoored way when it hasn't been triggered. It worked but it didn't effect the back door much, and that means that it technically was acting more differently - and therefore deceptively - when not triggered.

Interesting maybe, but I don't personally find it surprising, given how flexible these things are in general.