it's crazy that "it's too hard :(" has become an acceptable justification for just ignoring the law within tech circles
It's more like the law is saying you must draw seven red lines, all of them strictly perpendicular, some with green ink and some with transparent ink.
It's not "virtually" impossible, it's literally impossible. If the law requires that it be possible, then it's the law that must change. Otherwise it's simply a more complicated way of banning AI entirely, which means that some other jurisdiction will become the world leader in such things.
ok i guess you don't get to use private data in your models too bad so sad
why does the capitalistic urge to become "the world leader" in whatever technology-of-the-month is popular right now supersede a basic human right to privacy?
You seem to have an assumption that all AI models are intended for the sole benefit of corporations. What about medical models that can predict disease more accurately and more quickly than human doctors? Something like that could be hugely beneficial for society as a whole. Do you think we should just not do it because someone doesn't like that their data was used to train the model?
You seem to have the assumption that they're not. And that "helping society" is anything more than a happy accident that results from "making big profits".
A pretty big "what if" when every single model that's been tried for the purpose you suggest has so far ended up making its predictions based on the age of the medical imaging scan, or on the doctor's signature in the corner of it.
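For anyone who hasn't seen this failure mode (often called shortcut learning), here's a minimal synthetic sketch of it. Everything here is made up for illustration: a toy logistic-regression classifier is trained on data where a spurious "old scanner" flag happens to correlate perfectly with the disease label, and its accuracy collapses on a test set where that correlation breaks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shortcut_correlates):
    """Synthetic 'scans': one weakly predictive real feature plus a
    spurious 'old scanner' flag. In the training hospital the flag
    happens to track the label; in a new hospital it doesn't."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)  # weak genuine signal
    shortcut = y if shortcut_correlates else rng.integers(0, 2, n)
    return np.column_stack([signal, shortcut.astype(float)]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(X @ w + b, -30, 30)))  # sigmoid
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X_train, y_train = make_data(2000, shortcut_correlates=True)
X_test, y_test = make_data(2000, shortcut_correlates=False)

w, b = train_logreg(X_train, y_train)
print("train accuracy:", accuracy(w, b, X_train, y_train))  # near 1.0
print("test accuracy:", accuracy(w, b, X_test, y_test))     # collapses toward chance
print("weights (signal, shortcut):", w)                     # shortcut dominates
```

The model looks great on the data it was trained on, for exactly the wrong reason, which is the pattern those medical-imaging papers keep running into.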
Are you asking me whether it's a good idea to give up the concept of "Privacy" in return for an image classifier that detects how much film grain there is in a given image?
It's not an assumption. There are academic researchers at universities working on developing these kinds of models as we speak.
I'm not wasting time responding to straw men.
Where does the funding for these models come from? Why are they willing to fund those models? And in comparison, why does so little funding go towards research into how to make neural networks more privacy-compatible?
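For what it's worth, "privacy-compatible" training isn't hypothetical: differentially private SGD (DP-SGD, Abadi et al.) clips each individual's gradient and adds calibrated noise so that no single person's record can dominate what the model learns. Here's a minimal NumPy sketch of the core idea on a toy logistic regression; the clipping norm and noise level are illustrative placeholders, not a calibrated privacy budget.

```python
import numpy as np

rng = np.random.default_rng(1)

def per_example_grads(w, X, y):
    """Gradient of the logistic loss for each example separately."""
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    return (p - y)[:, None] * X  # shape: (n_examples, n_features)

def dp_sgd(X, y, clip_norm=1.0, noise_mult=1.0, lr=0.1, epochs=20, batch=64):
    """DP-SGD core loop: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise scaled to the clipping bound, then average."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch):
            g = per_example_grads(w, X[idx], y[idx])
            norms = np.linalg.norm(g, axis=1, keepdims=True)
            g = g / np.maximum(1.0, norms / clip_norm)  # per-example clip
            noise = rng.normal(0, noise_mult * clip_norm, d)
            w -= lr * (g.sum(axis=0) + noise) / len(idx)
    # a real implementation would also track the (epsilon, delta)
    # privacy budget spent; omitted here for brevity
    return w

# toy data: 1000 'patients', 5 features, roughly linearly separable labels
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ true_w + rng.normal(0, 0.5, 1000) > 0).astype(float)

w = dp_sgd(X, y)
print("accuracy:", np.mean((X @ w > 0) == y))
```

You trade some accuracy for a formal guarantee about what the model can reveal about any one person, which is exactly the kind of work that gets a fraction of the funding that scraping more data does.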