this post was submitted on 17 Dec 2024
171 points (100.0% liked)

Programming


It was merged after they were rightfully ridiculed by the community.

The awful response to the backlash by matwojo really takes the cake:

I've learned today that you are sensitive to ensuring human readability over any concerns in regard to AI consumption

[–] FizzyOrange@programming.dev 33 points 1 day ago (3 children)

He's right that it's probably harder for AI to understand. But wrong in every other way possible. Human understanding should trump AI, at least while they're as unreliable as they currently are.

Maybe one day AI will know how to not bullshit, and everyone will use it, and then we'll start writing documentation specifically for AI. But that's a long way off.

[–] Joeffect@lemmy.world 27 points 1 day ago (1 children)

If it can't understand human text, then is it really worth using? Isn't that the minimum standard here: to get context and understanding from text?

I don't have any skin in this game, but this seems backwards and stupid... especially since all current AI is basically fancy pattern matching.
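The "fancy pattern matching" point can be made concrete with a toy example. This is a hypothetical sketch (the corpus and function names are made up for illustration): a bigram model that only counts which word followed which in its training text, then "predicts" by replaying the most frequent pattern. LLMs are vastly more sophisticated, but the underlying idea of predicting the next token from observed patterns is similar.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Count which word follows which -- pure pattern matching."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict(model, word):
    """Return the most frequently observed next word, if any."""
    if not model[word]:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict(model, "the"))  # "cat" -- observed twice after "the"
```

The model has no notion of truth or meaning, only of which sequences it has seen before, which is the crux of the commenter's objection.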

[–] FizzyOrange@programming.dev -4 points 20 hours ago

It can understand, just not as well.

[–] Mikina@programming.dev 3 points 1 day ago* (last edited 1 day ago) (1 children)

Having AI that doesn't bullshit will require an entirely different set of algorithms than LLMs, or ML in general. ML by design approximates answers, and you don't use it for anything deterministic that has a single correct answer. So, in that regard, we're basically at square one.

You can keep slapping checks on top of the random text prediction it gives you, but if you had a way to check whether something is really true for every case imaginable, then you could probably just use that check to generate the reply directly, and that check can't itself be ML/random.
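The generate-then-verify pattern described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual system: `generate_answer` stands in for a probabilistic model (here literally a random guess), and `check_answer` is a deterministic verifier. The point of the example is that the verifier only works because the problem has a computable correct answer, in which case the generator is redundant.

```python
import random

def generate_answer(question):
    """Stand-in for an LLM: an approximate, randomized guess."""
    return random.choice([3, 4, 5])

def check_answer(question, answer):
    """Deterministic verifier -- only possible when the problem
    itself has a computable correct answer."""
    return answer == 2 + 2

def generate_and_verify(question, attempts=10):
    """Resample until the verifier accepts, or give up."""
    for _ in range(attempts):
        candidate = generate_answer(question)
        if check_answer(question, candidate):
            return candidate
    return None

# If check_answer can decide correctness for every case, we could
# skip the generator entirely and just compute 2 + 2 directly,
# which is exactly the commenter's point.
print(generate_and_verify("what is 2 + 2?"))
```

With enough attempts the loop almost certainly returns 4, but only because the checker already encodes the answer.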

[–] FizzyOrange@programming.dev -3 points 20 hours ago

You can't confidently say that because nobody knows how to solve the bullshitting issue. It might end up being very similar to current LLMs.

[–] Aatube@kbin.melroy.org 2 points 1 day ago

He admitted himself that it might not be harder for AI to understand.