submitted 1 year ago* (last edited 1 year ago) by pexavc@lemmy.world to c/opensource@lemmy.ml

Other samples:

Android: https://github.com/nipunru/nsfw-detector-android

Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw

Keras (MIT): https://github.com/bhky/opennsfw2

I feel it's a good idea for those building native clients for Lemmy to implement projects like these and run offline inference on feed content for the time being, to cover content that isn't marked NSFW but should be.
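For example, with the MIT-licensed opennsfw2 package linked above, the gating logic could look roughly like the sketch below. The threshold and the `should_blur` helper are placeholders, and a shipped native client would presumably run a converted model through Core ML / TFLite / NNAPI rather than Python:

```python
# Minimal sketch: score a downloaded feed image with the opennsfw2 Keras model.
# The 0.7 threshold is an arbitrary placeholder; tune it per client.
import opennsfw2 as n2

NSFW_THRESHOLD = 0.7

def should_blur(image_path: str) -> bool:
    # predict_image() returns the model's NSFW probability in [0, 1].
    probability = n2.predict_image(image_path)
    return probability >= NSFW_THRESHOLD
```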

What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?

Edit:

There's also this, which takes a bit more effort to implement properly but provides a hash that can be used for reporting purposes: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX (a rough sketch of the flow is at the end of this edit).

Python package (MIT): https://pypi.org/project/opennsfw-standalone/
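For the neural-hash route, the flow is roughly what the repo's nnhash.py script does: run the exported ONNX model on a preprocessed image, then project the embedding onto the bundled seed matrix to get a 96-bit hash. This is a from-memory sketch, not a drop-in implementation; the exact file names, header offset, and normalization should be taken from the repo itself:

```python
# Rough sketch of computing a perceptual hash for reporting, following the
# AppleNeuralHash2ONNX repo's nnhash.py. The 128-byte seed header skip and the
# [-1, 1] normalization are recalled from that script; double-check against it.
import numpy as np
import onnxruntime
from PIL import Image

def neural_hash(model_path: str, seed_path: str, image_path: str) -> str:
    session = onnxruntime.InferenceSession(model_path)

    # Hashing matrix shipped alongside the model: skip the header, then 96x128 floats.
    seed = np.fromfile(seed_path, dtype=np.float32)[32:].reshape(96, 128)

    # Resize to the model's 360x360 input, scale pixels to [-1, 1], NCHW layout.
    image = Image.open(image_path).convert("RGB").resize((360, 360))
    array = np.asarray(image).astype(np.float32) / 255.0 * 2.0 - 1.0
    array = array.transpose(2, 0, 1)[np.newaxis, ...]

    input_name = session.get_inputs()[0].name
    embedding = session.run(None, {input_name: array})[0].flatten()

    # Project onto the seed matrix and binarize to get the 96-bit hash as hex.
    bits = "".join("1" if value >= 0 else "0" for value in seed @ embedding)
    return f"{int(bits, 2):024x}"
```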

[-] pexavc@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

good point, but I was just providing samples. I myself would gladly create a simple package for inference using a properly licensed model file.

Edit: Linked an MIT Keras model, for instance. Also, thanks for the tip; I didn't know about the GPL / BSD relationship.

[-] wildbus8979@sh.itjust.works 5 points 1 year ago

That person is wrong about the BSD-3 license, so it's not a very good "tip".

[-] pexavc@lemmy.world 2 points 1 year ago

oh I see, just saw your other comment as well
