Note: this post is marked as NSFW because it contains descriptions of sexual acts and non-specific descriptions of NSFL content for the purposes of classification.
As an admin of a furry instance that also allows NSFW furry artwork, I am always wary of the impact our content has on remote Lemmy instances.
There is a delicate balance between allowing our users to post the content they enjoy and making sure that we won't be defederated from remote instances whose users might not appreciate our content.
I believe that users should not expect the All feed to be curated to their interests, but even then I can understand that some users and instance admins might want a way to avoid certain types of content even on the All feed.
In order to solve this problem I have tried to come up with a flexible and practical way to label NSFW content with more granularity.
The goal of this system is to classify NSFW communities in a way that is more practical than a single one-size-fits-all label, while at the same time not increasing the overhead for instance admins to moderate this content.
Advantages:
- Can be implemented in the current version of lemmy
- The burden of classification falls on the creator of the community or the admin of the instance that hosts it
- Absolutely no impact on performance
- It is granular, giving remote admins the choice of which NSFW levels they want to allow, hide or block
- Once set up it would work for existing and future communities without having to do any extra work
Disadvantages:
- It's a new spec that would have to be communicated
- It requires some sort of database trigger or other simple tooling to implement any filtering functionality.
How it works:
It works by adding a specific 'tag' describing the level of NSFW content to the description of a community. Instance admins can then create a database trigger (or similar tooling) to automatically hide or remove communities with an NSFW level above their preferred threshold.
List of NSFW Levels:
NSFW-level-0: Alcohol, drugs, gambling and other non-sexual content that you'd generally want to keep children away from.
NSFW-level-1: Artistic nudity, general nudity, adult humor, Mature artwork or images
NSFW-level-2: Vanilla sexually explicit content
NSFW-level-3: Common sexual interests such as foot fetish, food play, being tied to a bedpost, etc...
NSFW-level-4: Niche sexual interests such as BDSM, tentacles, inflation, etc...
NSFW-level-5: Heavier and much more niche sexual interests, such as heavy BDSM, breath play, rubber encasement, etc... This category might also include contact with bodily fluids that some people might consider triggering
NSFW-level-6: Extreme content potentially involving fictional non-gory mutilation, torture, non-con, as well as consumption of bodily fluids that some people might consider triggering
NSFW-level-7: Equivalent to NSFL. Might contain heavy or extreme versions of fictional mutilation, torture, non-con, etc... as well as content such as videos of accidents resulting in death, amputation, visible gore, etc.
NSFW-level-8: Extreme NSFL, usually IRL content. I don't want you to imagine it; just know that it exists and as such needs to be added to the spec so that it can be automatically blocked.
The description of these levels has been left intentionally vague so that it can grow organically.
The idea is thus that each NSFW community adds a classification to its description. This could even be done in the form of an invisible markdown link such as this:
[](cw:nsfw-level-3:foot fetish, light restraints)
The text description is optional but could help give admins necessary clues about the content.
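As a sketch of how tooling could read this tag, here is a minimal Python parser. The exact tag grammar is my assumption based on the example above (an empty markdown link containing `cw:nsfw-level-N` plus an optional free-text note); the function name is hypothetical:

```python
import re

# Matches invisible markdown links like:
#   [](cw:nsfw-level-3:foot fetish, light restraints)
# Group 1 is the numeric level, group 2 the optional text note.
TAG_RE = re.compile(r"\[\]\(cw:nsfw-level-(\d+)(?::([^)]*))?\)", re.IGNORECASE)

def parse_nsfw_tag(description: str):
    """Return (level, note) if a tag is present in the description, else None."""
    match = TAG_RE.search(description)
    if not match:
        return None
    level = int(match.group(1))
    note = (match.group(2) or "").strip()
    return level, note
```

For example, `parse_nsfw_tag("About us. [](cw:nsfw-level-3:foot fetish, light restraints)")` would yield `(3, "foot fetish, light restraints")`, while an untagged description yields `None`.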
With communities tagged, instance admins can easily and automatically hide or remove communities above the threshold they decide.
Hiding a community is achieved by setting the "hidden" property in the database to 'true' for that community. Only people who have explicitly accessed the community via its URL and then subscribed to it will be able to see any of its content. It will not be visible in the All timeline, Local timeline or even search results.
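The admin-side filtering could look something like the following sketch. This is not a real database trigger; the threshold value, function names, and the comment's table/column names are assumptions for illustration:

```python
import re

# Admin's choice: hide communities tagged above this level.
NSFW_THRESHOLD = 5

# Only the level number is needed for filtering.
TAG_RE = re.compile(r"\[\]\(cw:nsfw-level-(\d+)", re.IGNORECASE)

def should_hide(description: str, threshold: int = NSFW_THRESHOLD) -> bool:
    """True if the description carries an NSFW tag above the threshold."""
    match = TAG_RE.search(description or "")
    return bool(match) and int(match.group(1)) > threshold

def communities_to_hide(communities, threshold: int = NSFW_THRESHOLD):
    """Given (id, description) pairs, return the ids that should be hidden.

    The actual hiding would then be a single statement against the
    database, e.g. (assumed schema):
        UPDATE community SET hidden = true WHERE id = ...;
    """
    return [cid for cid, desc in communities if should_hide(desc, threshold)]
```

With the default threshold of 5, a community tagged `nsfw-level-7` would be hidden while one tagged `nsfw-level-2` or carrying no tag would be left alone; lowering the threshold widens the net, matching the adjustment described below.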
There might be a situation in which the admins or moderators of an NSFW community are so used to certain content that they classify it with a lower score than others would. If that happens, it's just a matter of lowering the tolerance for that instance: instead of filtering out at level 6 and upwards, you filter out starting at level 5 (for example).
Anyway, the core idea is that by classifying NSFW content on a numerical scale, instance admins can automatically filter out content above a certain threshold.