this post was submitted on 22 Dec 2024
513 points (96.2% liked)

Technology

[–] PlexSheep@infosec.pub 2 points 1 day ago

I understand some of the hype. LLMs are pretty amazing nowadays (though closedai is unethical af so don't use them).

I have to write complex cryptography code for university, and Claude Sonnet 3.5 solves some of the challenges instantly.

And it's not trivial stuff, but things like "how do I divide polynomials, where each coefficient of that polynomial is an element of GF(2^128)?" Given the context (my source code), it adds the code seamlessly, writes unit tests, and it just works. (That matters for AES-GCM, the cipher mode TLS relies on most of the time.)
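For anyone curious what that looks like: here's a minimal sketch of dividing polynomials whose coefficients live in GF(2^128), the field GHASH uses in AES-GCM. The function names (`gf_mul`, `gf_inv`, `poly_divmod`) are made up for illustration, and it uses a plain (non-bit-reflected) integer representation rather than the GCM spec's reflected one, so it's not the commenter's actual university code, just the idea.

```python
# Irreducible polynomial for GF(2^128): x^128 + x^7 + x^2 + x + 1,
# encoded as an integer with one bit per coefficient.
R = (1 << 128) | 0x87

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(2^128) elements: carry-less multiply, then reduce mod R."""
    p = 0
    while b:
        if b & 1:
            p ^= a          # addition in GF(2) is XOR
        a <<= 1
        b >>= 1
    # Reduce: fold every bit at position >= 128 back down with R.
    for i in range(p.bit_length() - 1, 127, -1):
        if (p >> i) & 1:
            p ^= R << (i - 128)
    return p

def gf_inv(a: int) -> int:
    """Invert via Fermat's little theorem: a^(2^128 - 2) in the field."""
    r, e = 1, (1 << 128) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def poly_divmod(num: list[int], den: list[int]) -> tuple[list[int], list[int]]:
    """Long division of polynomials with GF(2^128) coefficients.

    Coefficient lists are indexed by degree (num[i] multiplies x^i).
    Returns (quotient, remainder)."""
    num = num[:]  # work on a copy
    out = [0] * (len(num) - len(den) + 1)
    inv_lead = gf_inv(den[-1])  # inverse of the leading coefficient
    for i in range(len(num) - len(den), -1, -1):
        coef = gf_mul(num[i + len(den) - 1], inv_lead)
        out[i] = coef
        for j, d in enumerate(den):
            num[i + j] ^= gf_mul(coef, d)  # subtract coef * x^i * den
    return out, num[:len(den) - 1]
```

For example, dividing 3x^2 + x + 7 by x + 1 (coefficients as field elements) gives quotient 3x + 2 and remainder 5, and multiplying back reproduces the original polynomial, which is exactly the kind of round-trip a generated unit test can check.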

Besides that, LLMs are good at what I call "moving words around": writing cute little short stories in fictional worlds given some source material, checking spelling, re-formulating a message into something very diplomatic and nice, and so on.

On the other hand, shoehorning LLMs into everything is often complete BS, done because "AI cool word make line go up".