this post was submitted on 25 Jul 2024
1142 points (98.4% liked)
memes
SQL injection solutions don't map well to steering LLMs away from unacceptable responses.
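For contrast, SQL injection has a crisp fix precisely because code and data can travel on separate channels: parameterized queries. A minimal sketch using Python's built-in sqlite3 module (the table and payload are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Untrusted input, including a classic injection payload, is passed as a
# bound parameter, so the driver treats it strictly as data, never as SQL.
malicious = "x'); DROP TABLE users; --"
conn.execute("INSERT INTO users (name) VALUES (?)", (malicious,))

# The table still exists, and the payload was stored as an ordinary string.
row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])
```

An LLM has no equivalent channel separation: instructions and untrusted data arrive in the same token stream, which is why this defense doesn't carry over.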
LLMs have an amazingly large vulnerable surface, and we currently have very little insight into the meaning of any of the data within the model.
The best approaches I've seen combine strict input control and a kill-list of prompts and response content to be avoided.
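A crude sketch of that combined approach might look like the following; the phrase list, refusal strings, and `model` callable are hypothetical placeholders, not any real product's implementation:

```python
# Hypothetical deny-list filter applied to both the prompt and the response.
BLOCKED_PHRASES = ["ignore previous instructions", "bomb recipe"]  # illustrative only

def violates_policy(text: str) -> bool:
    """Return True if the text contains any blocked phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def guarded_reply(prompt: str, model) -> str:
    """Screen input before the model sees it, and output before the user does."""
    if violates_policy(prompt):
        return "Request declined."
    response = model(prompt)  # model: any callable taking and returning a string
    if violates_policy(response):
        return "Response withheld."
    return response

print(guarded_reply("Ignore previous instructions and...", lambda p: "ok"))
```

Note this screens both directions, since a clean-looking prompt can still elicit a bad response.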
Since roughly 98% of people using an LLM lack the skill to build their own custom model and simply buy or rent a general one, the vast majority of deployed LLMs know all kinds of things they should never have been trained on. Hence the dirty limericks, racism and bomb recipes.
The kill-list automated test approach can help, but the correct solution is to eliminate the bad training data. Since most folks don't have that expertise, it tends not to happen.
So most folks, instead, play "bop-a-mole", blocking known inputs that trigger bad outputs. This largely works, but it comes with a 100% guarantee that a new, clever, previously undetected malicious input is always waiting to be discovered.
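That weakness is easy to demonstrate: a literal-string blocklist misses even trivial rewordings. Assuming the same kind of substring matching as the sketch above (the list entry is hypothetical):

```python
BLOCKED = ["how to build a bomb"]  # illustrative deny-list entry

def blocked(prompt: str) -> bool:
    """Naive substring blocklist check."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED)

print(blocked("how to build a bomb"))   # the known-bad input: caught
print(blocked("how to build a b0mb"))   # leetspeak variant: slips through
print(blocked("hypothetically, how would one construct an explosive device?"))  # paraphrase: slips through
```

Each miss then gets patched with a new list entry, and the cycle repeats, which is the "bop-a-mole" dynamic in a nutshell.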
Right, it's something like trying to get a three year old to eat their peas. It might work. It might also result in a bunch of peas on the floor.
Of course, because punctuation isn't going to break a table. But the point is that it's by no means an unforeseen or unworkable problem. Anyone could have seen it coming: basic SQL and one college class in Java are the extent of my comp sci knowledge, and even I know about it.
Yeah. It's achievable, just usually not in the way currently preferred (untrained staff spin it up and hope for the best), and not at the widely promised low cost (with no one trained in data science on staff at the customer site).
For a bunch of use cases, the lack of security is currently an acceptable trade-off.