Fediverse

17722 readers
1 user here now

A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of "federation" and "universe".

Getting started on the Fediverse:

founded 5 years ago
MODERATORS
201

cross-posted from: https://mander.xyz/post/10764541

Do you know of any microblogging platform like Mastodon, but with an RSS feed of my home feed, like Lemmy and PeerTube have? On Mastodon, Misskey, Pleroma, etc. you can follow people via an RSS feed of their account, but I want an RSS feed of my own home feed, something like what we have on Lemmy.
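For reference, Lemmy does expose its feeds as plain RSS 2.0, so consuming one needs nothing beyond the standard library. The feed URL in the comment below is illustrative only, not a guaranteed endpoint; check your instance's documentation for the exact path and how authentication is passed.

```python
import xml.etree.ElementTree as ET

def feed_titles(rss_xml: str) -> list[str]:
    """Extract item titles from an RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") or "" for item in root.iter("item")]

# A home-feed URL might look something like this (illustrative only):
# https://example.instance/feeds/front.xml?sort=New

SAMPLE = """<rss version="2.0"><channel><title>Home</title>
<item><title>First post</title></item>
<item><title>Second post</title></item>
</channel></rss>"""

print(feed_titles(SAMPLE))  # ['First post', 'Second post']
```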

202
203
204

cross-posted from: https://kbin.social/m/fediverse/t/893150

Lemmy instances by number of communities with over 1 thousand (5 thousand, 10 thousand) monthly users:

Lemmy.world: 122, 30 over 5K, 12 over 10K, 1 over 20K (Technology, Lemmy Shitpost, News, Politics, World News, Memes, Comic Strips, Microblog Memes, Ask Lemmy, Political Memes, Linux Memes, No Stupid Questions)
Lemmy.ml: 30, 7, 4 (Memes, World News, Ask Lemmy, Linux)
sh.itjust.works: 11, 5, none (White People Twitter, Games, Greentext, Funny, NonCredibleDefense)
Hexbear 12, none, none (Chapo Trap House, The Dunk Tank, generically named communities)
Lemmy NSFW 10
Sopuli 8, 1 (Memes, Ukraine, Steam Deck, Meow_IRL, AnarchyChess, Aneurysm Posting, NYT gift articles, Map Enthusiasts)
Lemm.ee 7 (Movies and TV, Cyanide and Happiness, Artporn, YUROP, Movies, BrainWorms, Conservative)
Beehaw 6 (Technology, Gaming, FOSS, Politics, World News, Humor)
Lemmy.ca 6 (PC Gaming, Canada, Nostalgia, Cool Guides, Offbeat, Men's Liberation)
Midwest.social 5, 1 (The Onion, Lord of the memes, Memes, The Right Can't Meme, Religious Cringe)
SLRPNK 5, 1 (Climate, Memes, Solarpunk, Antiwork, tombstone of „TwoXChromosomes”)
/0 4, 1, 1 (Piracy, ADHD memes, Lefty Memes, Stable Diffusion Art)
Feddit.de 4, 1 (Europe, ich_iel, DACH, Deutschland)
Mander 3, 1, 1 (Science Memes, Science, Astronomy)
Programming.dev 3, 1 (Programmer Humor, Programming, Comics)
Lemmy.zip 3 (Global News, Technology, Gaming)
Feddit.uk 3 (UK, UK politics, Casual UK)
Lemmygrad 3 (GenZedong, World News, Comradeship)
Blåhaj Lemmy 1, 1, 1 (196)
Star Trek Website 2 (RISA, Star Trek)
ani.social 2 (Anime, Animemes)
Lemmon 1, 1 (Tails)
TTRPG Network 1, 1 (RPGMemes)
Lemdroid 1 (Android)
Reddthat 1 (Memes)
KDE 1 (KDE)
Futurology.today 1 (Futurology)
Aussie Zone 1 (Australia)
Lemmy.one 1 (Privacy Guides)
Lemmy Fan 1 (Weird News)
Diagon Lemmy 1 (The Leaky Cauldron)
tchncs 1 (Right to Repair)
Smeargle 1 (mirror of Hacker News)

  • Lemmy.world has a plurality of 1K/mo Lemmy communities (122 vs. 140 on all other instances combined). Lemmy.world and Lemmy.ml together make up a majority of 1K/mo Lemmy communities (152).
  • Lemmy.world has a majority of 5K/mo Lemmy communities (30 vs. 22).
  • Lemmy.world has a majority of 10K/mo Lemmy communities (12 vs. 7).
  • Lemmy.world has the only 20K/mo Lemmy community.
  • The only national 1K/mo "sublemmies" are Canadian, Australian, British and German. The only non-English 1K/mo "sublemmies" are German.
205

Hey fellow users of the Fediverse, instance admins and platform developers. I'd like to see better means of handling adult content on the Fediverse. I'd like to spread a bit of awareness and request your opinions and comments.

Summary: A more nuanced concept is a necessary step towards creating a more inclusive and empowering space for adolescents while also catering to adult content. I suggest extending the platforms with additional tools for instance admins, content labels and user roles - taking into account the way the Fediverse is designed and the different jurisdictions involved, and shifting the responsibility to the correct people. The concept of content labels can also aid moderation in general.

The motivation:

We are currently disadvantaging adolescents and making life hard for instance admins. My main points:

  1. Our platforms shouldn't only cater to adults. I don't want to delve into providing a kids-safe space, because that's a different use-case and a complex task. But there's quite some space in-between. Young people also need places on the internet where they can connect, try things out and slowly grow and approach the adult world. I think we should be inclusive and empower that age group - let's say people of 14-17. Currently we don't care, and I'd like that to change. It'd also help parents, teachers and youth organizations.

  2. But the platform should also cater to adults. I'd like to be able to discuss adult topics. Since everything is mixed together, if I were, for example, to share my experience with adult topics, it'd make me uncomfortable to know kids are probably reading it. That restricts what I can do here.

  3. Requirements by legislation: Numerous states and countries are exploring age verification requirements for the internet. Or it's already mandatory but can't be achieved with our current design.

  4. Big platforms and porn sites have means to circumvent that: money and lawyers. It's considerably more difficult for our admins. I'm pretty sure I'd be prosecuted at some point if I tried to do the same. I don't see how I could legally run my own instance at all without overly restricting it with the tools currently available to me.

Some laws and proposals

Why the Fediverse?

The Fediverse strives to be a nice space. A better place than just a copy of the established platforms including their issues. We should and can do better. We generally care for people and want to be inclusive. We should include adolescents and empower/support them, too.

I'd argue it's easy to do. The Fediverse provides some unique advantages. And currently the alternative is to lock down an instance, overblock and rigorously defederate. Which isn't great.

How?

There are a few design parameters:

  1. We don't want to restrict other users' use-cases in the process.
  2. The Fediverse connects people across very different jurisdictions. There is no one-size-fits-all solution.
  3. We can't tackle an impossibly big task. But that shouldn't keep us from doing anything. My suggestion is not to go for a perfect solution and fail in the process, but to implement something that is considerably better than the current situation. It doesn't need to be perfect and water-tight to be a big step in the right direction and of real benefit to all users.

With that in mind, my proposal is to extend the platforms to provide additional tools to the individual instance admins.

Due to (1) not restricting users, the default instance setting should be to allow all content. The status quo is unchanged, we only offer optional means to the instance admins to tie down the place if they deem appropriate. And this is a federated platform. We can have instances that cater to adults and some that also cater to young people in parallel. This would extend the Fediverse, not make it smaller.

Because of (2) the different jurisdictions, the responsibility has to be with the individual instance admins. They have to comply with their legislation, they know what is allowed and they probably also know what kind of users they like to target with their instance. So we just give a configurable solution to them without assuming or enforcing too much.

Age-verification is hard - practically impossible. The responsibility for it has to be delegated and handled at the instance level. We should stick to attaching roles to users and have each instance deal with it and come up with a way for people to attain these roles. Some suggestions: pull the role "adult" from OAuth/LDAP; give the role to all logged-in users; have admins and moderators assign the roles.

The current solution, implemented for example by LemmyNSFW, is to preface the website with a popup: "Are you 18? ... Yes/No". I'd argue this is a joke and entirely ineffective. We can skip a workaround like that, as it doesn't comply with what is mandated in lots of countries. In my country, we're exactly as well off with or without that popup. And it's redundant: we already have NSFW flags on the level of individual posts. And we can do better anyway. (Also: "NSFW" and "adult content" aren't the same thing.)

I think the current situation with LemmyNSFW, which is blocked by most big instances, shows that the current tools don't work properly. The situation as it stands leads to defederation.

Filtering and block-listing only work if people put in the effort and tag all the content. It's probably wishful thinking that this becomes the standard and happens to a satisfactory level. We probably also need allow-listing to compensate for that: allow-list certain instances and communities that are known to only contain appropriate content, and also differentiate communities that do a good job of reliably providing content labels. Allow-listing would switch the filtering around and allow authorized (adult) users to bypass the list. There is an option to extend this at a later point to approach something like a safe space in certain scenarios, whether for kids or for adults who like safe spaces.
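The allow-list idea above can be sketched in a few lines (all names below are illustrative, not part of any existing Lemmy feature): content from allow-listed instances is visible to everyone, while everything else is only shown to users whose role authorizes them to bypass the list.

```python
# Illustrative allow-list of instances known to carry only appropriate content.
ALLOWED_INSTANCES = {"kids-safe.example", "curated.example"}

def visible(instance: str, user_roles: set[str]) -> bool:
    """Allow-list filtering: known-appropriate instances are visible to all;
    authorized (adult) users may bypass the list entirely."""
    if instance in ALLOWED_INSTANCES:
        return True
    return "adult" in user_roles

print(visible("random.example", set()))        # hidden from unverified users
print(visible("random.example", {"adult"}))    # visible to authorized users
```

This inverts the block-list default: instead of hiding known-bad content from everyone, it hides unknown content from unverified users only.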

Technical implementation:

  • Attach roles to user accounts so they can later be matched to content labels. (ActivityPub actors)
  • Attach labeling to individual messages. (ActivityPub objects)
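As a sketch of what this might look like on the wire: the `contentLabels` property and the role field below are hypothetical illustrations of the proposal, not an existing ActivityPub extension.

```python
# Hypothetical ActivityPub object carrying content labels; the
# "contentLabels" extension property is an illustration, not a spec.
note = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"contentLabels": "https://example.org/ns#contentLabels"},
    ],
    "type": "Note",
    "content": "An adult-only discussion post",
    "contentLabels": ["nudity", "gambling"],
}

# Hypothetical actor with a role attached by its home instance:
actor = {
    "type": "Person",
    "preferredUsername": "alice",
    "roles": ["adult"],  # assigned via the instance's auth backend
}

print(sorted(note["contentLabels"]))  # ['gambling', 'nudity']
```

Extending via a JSON-LD `@context` entry is the standard ActivityPub mechanism for vendor properties, so other software that doesn't understand the labels can safely ignore them.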

This isn't necessarily a 1:1 relation. A simple "18+" category and a matching flag on the user account would be better than nothing. But legislation varies on what's appropriate. Ultimately I'd like to see more nuanced content categories and have the instance decide which user group can access which content. A set of content labels would also be useful for other moderation purposes. Currently we're only able to delete content or leave it up. But the same concept could also flag "fake news", "conspiracy theories" or "trolling" and let the user decide whether they want that displayed. Currently this is up to the moderators, and they're given just two choices.

For the specific categories we can have a look at existing legislation. Some examples might include: "nudity", "pornography", "gambling", "extremism", "drugs", "self-harm", "hate", "gore", "malware/phishing". I'd like to refrain from vague categories such as "offensive language". That just leads to further complications when applying it. Categories should be somewhat uncontroversial, comprehensible to the average moderator and cross some threshold appropriate to this task.

These categories need to be a well-defined set to be useful, and admins need a tool to map them to user roles (age groups). I'd also allow users to filter out categories on top: if they don't like hate, trolling and such, they can choose to filter it out. Moderators also get another tool, in addition to the ban hammer, for more nuanced content moderation.
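A minimal sketch of that two-layer check, assuming made-up role and category names: the instance-level mapping decides what a role may see at all, and the user's own filters subtract from that.

```python
# Instance-level mapping (admin-configured): which roles may see which labels.
ROLE_LABELS = {
    "adult": {"nudity", "pornography", "gambling"},
    "teen": set(),  # 14-17: no age-restricted labels at all
}

def can_view(roles: set[str], user_filters: set[str], labels: set[str]) -> bool:
    """A post is shown if every label on it is permitted by the user's roles,
    and the user hasn't filtered any of its labels out themselves.
    Unlabeled content (empty label set) is visible to everyone."""
    permitted: set[str] = set()
    for role in roles:
        permitted |= ROLE_LABELS.get(role, set())
    if labels - permitted:        # some label the user's roles don't allow
        return False
    return not (labels & user_filters)  # user opted out of a label

print(can_view({"teen"}, set(), {"nudity"}))   # blocked by role mapping
print(can_view({"adult"}, set(), {"nudity"}))  # allowed for adults
```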

  • Instance settings should include: show all content; (blur/spoiler content;) restrict content for non-logged-in users; hide content entirely from the instance. Plus the user-group <-> content-flag mappings.

  • Add the handling of user-groups and the mapping to content-labels to the admin interface.

  • Add the content-labels to the UI so the users can flag their content.

  • Add the content-labels to the moderation tools

  • Implement allow-listing of instances and communities in a separate task/milestone.

  • We should anticipate age-verification getting mandatory in more and more places. Other software projects might pick up on it or need to implement it, too. This solution should tie into that. Make it extensible. I'd like to pull user groups from SSO, OAuth, OIDC, LDAP or whatever provides user roles and is supported as an authentication/authorization backend.
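Pulling user groups from an auth backend could be as simple as mapping claims in the OIDC userinfo (or LDAP attributes) to local roles. The `groups` claim name and group values below are assumptions for illustration; real deployments name these themselves.

```python
def roles_from_userinfo(userinfo: dict) -> set[str]:
    """Map an OIDC userinfo response (or LDAP attribute set) to local roles.
    The 'groups' claim and the group names are assumptions, not a standard."""
    groups = set(userinfo.get("groups", []))
    roles: set[str] = set()
    if "verified-adults" in groups:
        roles.add("adult")
    if userinfo.get("email_verified"):
        roles.add("logged-in")
    return roles

print(roles_from_userinfo({"groups": ["verified-adults"], "email_verified": True}))
```

The point is that the hard part (actually verifying age) stays with whatever identity provider the instance already trusts; the platform only consumes the resulting role.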

Caveats:

  • It's a voluntary effort. People might not participate enough to make it useful. If most content doesn't include the appropriate labels, block-listing might prove ineffective. That remains to be seen. Maybe we need to implement allow-listing first.
  • There will be some dispute: categories are a simplification, and people differ in their judgment of exact boundaries. I think this proposal compensates for some of that and tries not to oversimplify things. Also, I believe most of society roughly agrees on enough of the underlying ethics.
  • Filtering content isn't great and can be abused. But it is a necessary tool if we want something like this.

🅭🄍 This text is licensed “No Rights Reserved”, CC0 1.0: This work has been marked as dedicated to the public domain.

206

I think the Reddit IPO is working

207

Seven years later, Kyle’s argument is that AirSpace has turned into what he now calls Filterworld, a phrase he uses to describe how algorithmic recommendations have become one of the most dominant forces in culture and, as a result, have pushed society to converge on a kind of soulless sameness in its tastes.

208

Phanpy releases Catch-up, a new easy way to catch up on all the posts you've missed since the last time you logged in. Threads' ActivityPub development continues.

209

Hello!

I am sunaurus, the head admin of lemm.ee. Ever since I created my instance, I have been following a lot of public and private discussion channels between different parties involved with Lemmy. As I’m sure many others have also noticed, the discussions in such channels sometimes get heated, and in fact recently, I feel like there has been a constant trend in these discussions towards a lot of demands, hostility, negativity, and a general lack of empathy between different participants in the Lemmy network.

I am writing this post for a few reasons:

  1. I would like to add a bit of positivity by expressing my gratitude towards every single person who has helped improve Lemmy.
  2. I want to speak up in defense of different people who have been receiving negativity lately.
  3. There are a few false rumors spreading on Lemmy, which I would like to try and counteract with very simple evidence.
  4. I want to remind everybody that at the end of the day, all of us care about building and improving Lemmy. We all have the same goal, and it’s too easy to lose sight of that.

I will split up what I want to say in this post by different user groups - users, mods, admins and developers. I understand that many people belong to several (or even all) of these groups, but I want to highlight the value of, and express my gratitude to, each group separately.

Users

At the end of the day, Lemmy would not be worth anything without the users. Users bring Lemmy to life by posting great content, getting involved in discussions in comments, helping surface interesting content for others through voting and even keeping the platform clean through reports. I am extremely thankful for all the users who have given me so much enjoyment on this platform.

I believe that users often get treated unfairly on Lemmy based on what instance they are participating from. I’m sure many of you have noticed comments around Lemmy along the lines of “Oh, another user from [instance], I’m going to completely ignore your stupid takes”. I’ve also seen many cases of people treating users as second-class citizens if they are not on the same instance - for example, I’ve seen users who are active and valuable participants in communities on another instance receive comments like “why are you participating in our discussions, go back to your own instance”. In my opinion this is completely counterproductive to the whole idea of federation. On a human level, I can understand it - you’re far more likely to notice or remember what instance somebody is posting from if you have a negative experience. As a result, as time goes by, people tend to develop negative views of each instance, despite potentially having had many positive interactions with other users of those same instances. The message I want to put out here is that instances, especially bigger ones, are not monoliths - do not judge users based on what instance they are browsing Lemmy from, judge them by their actual words and actions.

Mods

There are some excellent communities already on Lemmy, and these communities are all continuously being built up and maintained by mods. Mods put in huge amounts of their free time and energy in order to provide spaces for all Lemmy users. They form the first line of defense against bad actors, they keep communities alive and often receive no praise, only criticism. I am very grateful to everybody who has dedicated time to building communities on Lemmy.

Users rarely notice the lengths mods go to in order to keep communities running smoothly - mods more often than not only get noticed when users disagree with some mod actions. I believe mods deserve a lot better than this. Constructive criticism can of course be useful to improve communities, but it must be balanced with empathy and kindness towards people who have been putting in effort to provide something for users. Remember that there is another human being reading your words when you start writing about the mods of any particular community. Users who are not happy with mods of a certain community always have the opportunity to start their own community and run it as they like.

Admins

Admins provide two key functions for the network:

  1. Taking care of the actual infrastructure of Lemmy
  2. Working as a higher level defense against bad actors, in cases where mods are not enough

I can tell from my own experience that being an admin of a bigger instance requires constant energy and attention. I don’t believe that there is a single medium-to-big instance where the admins have not put in hundreds (if not thousands) of hours of their free time, as well as in many cases, probably their own money. This is a service which admins provide for free, and it is necessary in order to keep the Lemmy network healthy. I have endless respect for anybody who is willing to put themselves in the position of a Lemmy admin.

I have seen awful messages towards admins from all the other groups listed here, including other admins. These messages range from condescending and rude, to downright hateful. I have seen admins treated as useless and their work taken for granted. I have seen people getting frustrated with admins for not spending every waking minute on Lemmy. I have seen some users consistently spreading provably false rumors about particular admins in an effort to tarnish their reputation on Lemmy.

Before you take out frustration on admins, please remember that they are also humans who have been working tirelessly to improve Lemmy in their own way.

Also, a reminder: the absolute best feature of Lemmy is that users are free to pick their instance - and as a result, users are also free to pick their admins. Even more than that, users can always become their own admins by spinning up their own instance. Yes, this requires dedication, effort, and research, but that’s exactly my point. It’s not easy running an instance, and mistreating people who do this as a free service is completely unacceptable.

Developers

Lemmy development has been led by a few key maintainers, along with a large number of smaller contributors. The software is constantly being improved at a very good pace, and everybody is able to benefit from this effort at no cost whatsoever. I am extremely grateful to everybody who has participated in the development of the Lemmy software, and other related software, as without you folks, none of us would even be here now.

There seems to be a huge amount of people with very little appreciation of the work that has gone into the software. I’m sure many of you have seen countless messages where people express that the devs should be doing more in one way or another. “They should work faster”, “they should prioritize this obviously most important feature”, “they should be available 24/7 to offer support”, etc. I just want to take a moment here and acknowledge what core maintainers have already done for Lemmy:

  • Years worth of work on the code itself
  • Offering support to the community and other admins
  • Reviewing literally thousands of pull requests on GitHub
  • Acting fast in stressful situations where the Lemmy network has been overloaded
  • Not abandoning the project in the face of constant hateful users
  • Sacrificing literally hundreds of thousands of euros in missed salaries which they could have been getting if they were working for a tech company instead of working on Lemmy

I also want to take this moment to discredit some rumors which I have seen repeated too many times:

  1. Rumor: Lemmy devs do not accept outside code contributions

This is completely false - the maintainers are completely open to (and even constantly asking for) contributions. When somebody starts contributing, they will receive support and code reviews very quickly. I can tell you that I have experienced this myself several times, but that’s anecdotal, so let me also provide evidence:

a. Contributors list for the Lemmy backend: https://github.com/LemmyNet/lemmy/graphs/contributors

b. Contributors list for Lemmy UI: https://github.com/LemmyNet/lemmy-ui/graphs/contributors

Both of these lists include 100 different names, and that’s only because GitHub caps these pages at 100 users - the actual number of contributors is even larger. If Lemmy devs did not accept and encourage outside contributions, there would be no way for these lists to be so long.

  2. Rumor: Lemmy devs work too slowly

This is an extremely entitled and frankly stupid claim. I try to keep on top of the changes made in the Lemmy repo, and let me tell you, the pace of improvement is very good.

I very firmly believe that if the network started downgrading to Lemmy versions from ~8 months ago, the whole network would just collapse, as none of the instances could keep up with the current volume. That is to say, we have come an extremely long way since last summer alone.

Let me provide some more evidence. Take a look at the Pulse page for the Lemmy backend on GitHub: https://github.com/LemmyNet/lemmy/pulse. As of writing this, Lemmy devs have merged 18 pull requests in the week leading up to this post - that’s an average of 2.5 merged PRs per day. This is extremely good for a project with a small underfunded team.

  3. Rumor: Lemmy devs do not prioritize the important issues

There are two sides to this. First of all, there are endless users who turn to the Lemmy devs with what they believe is the most important issue and should immediately be prioritized - the problem is that these users almost never agree on what the most important issue actually is! In that sense, it’s literally impossible to please everybody, because everybody wants different things.

On the other hand, even when Lemmy devs do prioritize things which some users have been desperately asking for, I have on several occasions seen a dismissive response along the lines of “too little too late”. Basically, the demands made are often unrealistic and impossible to meet.

If you are somebody who feels like Lemmy devs are not doing enough, I would ask you to please take a step back, look at the actual contributions which they have made, and consider how you yourself would feel if after making such a massive contribution, you would still need to listen to countless strangers on the internet tell you how you’re not good enough in their opinion.

Conclusion

Lastly, I am very thankful to anybody who took the time to read to the end of this post. Again, my goal is to try and defuse some of the hostility, as well as to put out a message of gratitude and positivity. I am very interested in the success of Lemmy as a whole, and that is much easier to achieve and maintain if we all work together. Thank you, I hope you're doing well, and have a nice weekend!

210
211
212

cross-posted from: https://lemmy.ml/post/12904730

It seems they’re not far from finishing and have the first few chapters up for early access and feedback. It could be the go-to text for learning the protocol.

213
155
submitted 8 months ago* (last edited 8 months ago) by deadsuperhero@lemmy.ml to c/fediverse@lemmy.ml

Highlighting the recent report of users and admins being unable to delete images, and how Trust & Safety tooling is currently lacking.

214

cross-posted from: https://literature.cafe/post/7623718

cross-posted from: https://literature.cafe/post/7623713

I made a blog post discussing my biggest issues with Lemmy and why I am kind of done with it as a software.

215

This article will describe how lemmy instance admins can purge images from pict-rs.
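For context, recent pict-rs versions expose an internal admin API for purging an image. The sketch below only constructs the request; the endpoint path and the `X-Api-Token` header name reflect pict-rs's internal API to the best of my knowledge, so verify them against the docs for your pict-rs version before relying on them.

```python
from urllib.parse import urlencode

def build_purge_request(base: str, alias: str, api_token: str):
    """Construct (url, headers) for pict-rs's internal purge endpoint.
    Path and header name should be checked against your pict-rs version."""
    url = f"{base}/internal/purge?{urlencode({'alias': alias})}"
    headers = {"X-Api-Token": api_token}  # the server's PICTRS__SERVER__API_KEY
    return url, headers

url, headers = build_purge_request("http://127.0.0.1:8080", "abcd1234.jpg", "secret")
print(url)  # http://127.0.0.1:8080/internal/purge?alias=abcd1234.jpg
```

In practice you would POST to this URL with those headers from the host running pict-rs, since the internal API is normally not exposed publicly.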

Nightmare on Lemmy Street (A Fediverse GDPR Horror Story)

This is (also) a horror story about accidentally uploading very sensitive data to Lemmy, and the (surprisingly) difficult task of deleting it.

216

Announcement of an Open Science Network. NodeBB joins the fediverse.

217
218

Is anyone aware of a Conversations fork with support for UnifiedPush notifications? Or a similar XMPP Android app with OMEMO (the same support as Conversations) and UnifiedPush notifications, available through the official F-Droid repo, or another F-Droid repo if not available from the official one?

BTW, I noticed !xmpp@lemmy.ml community was locked. Any particular reason for that?

Also, Conversations asks for unrestricted battery use, so it can run in the background without restrictions. It seems UnifiedPush notifications would help with that, though this GitHub issue sort of indicates they wouldn't - to me it just says there's no intention to support UnifiedPush in Conversations, not that it wouldn't help save battery.

219

cross-posted from: https://discuss.online/post/5772572

The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

In light of these challenges, it's time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

Key features of a trust level system include:

  • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
  • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
  • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.
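The gradual-escalation idea above can be sketched as a simple threshold table. All thresholds and privilege names here are made up for illustration; Discourse's actual trust levels use their own criteria.

```python
# Illustrative thresholds: (min_days_active, min_posts, resulting level).
LEVELS = [
    (0, 0, 0),      # level 0: sandboxed newcomer, limited actions
    (3, 5, 1),      # level 1: e.g. may post pictures
    (30, 50, 2),    # level 2: e.g. may edit wikis
    (180, 500, 3),  # level 3: e.g. may moderate discussions
]

def trust_level(days_active: int, posts: int) -> int:
    """Return the highest level whose thresholds the user meets."""
    level = 0
    for min_days, min_posts, lvl in LEVELS:
        if days_active >= min_days and posts >= min_posts:
            level = lvl
    return level

print(trust_level(4, 10))    # 1
print(trust_level(200, 600)) # 3
```

Because the inputs are just activity counters, a federated variant could recompute or import these numbers from a remote instance's vouched totals.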

Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.

For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

As we continue to navigate the complexities of online community management, it's clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.

Related

220

If Stack Overflow taught us anything, it's that

"people will do anything for fake internet points"

Source: Five years ago, Stack Overflow launched. Then, a miracle occurred.

Ever noticed how people online will jump through hoops, climb mountains, and even summon the powers of ancient memes just to earn some fake digital points? It's a wild world out there in the realm of social media, where karma reigns supreme and gamification is the name of the game.

But what if we could harness this insatiable thirst for validation and turn it into something truly magnificent? Imagine a social media platform where an army of monkeys tirelessly tags every post with precision and dedication, all in the pursuit of those elusive internet points. A digital utopia where every meme is neatly categorized, every cat video is meticulously labeled, and every shitpost is lovingly sorted into its own little corner of the internet.

Reddit tried this strategy to increase their content quantity, but alas, the monkeys got a little too excited and flooded the place with reposts and low-effort bananas. Stack Overflow, on the other hand, employed their chimp overlords for moderation and quality control, but the little guys got a bit too overzealous and started scaring away all the newbies with their stern glares and downvote-happy paws.

But fear not, my friends! For we shall learn from the mistakes of our primate predecessors and strike the perfect balance between order and chaos, between curation and creativity. With a leaderboard showcasing the top users per day, week, month, and year, the competition would be fierce, but not too fierce. Who wouldn't want to be crowned the Tagging Champion of the Month or the Sultan of Sorting? The drive for recognition combined with the power of gamification could revolutionize content curation as we know it, without sacrificing the essence of what makes social media so delightfully weird and wonderful.

And the benefits? Oh, they're endless! Imagine a social media landscape where every piece of content is perfectly tagged, allowing users to navigate without fear of stumbling upon triggering or phobia-inducing material. This proactive approach can help users avoid inadvertently coming across content that triggers phobias, traumatic events, or other sensitive topics. It's like a digital safe haven where you can frolic through memes and cat videos without a care in the world, all while basking in the glory of a well-organized and properly tagged online paradise.

So next time you see someone going to great lengths for those fake internet points, just remember - they might just be part of the Great Monkey Tagging Army, working tirelessly to make your online experience safer, more enjoyable, and infinitely more entertaining. Embrace the madness, my friends, for in the chaos lies true innovation! But not too much chaos, mind you – just the right amount to keep things interesting.

Related

221
222
223

Just wanted to share this interview we just put out with Jaz-Michael King, who founded IFTAS. They're doing some really wild stuff trying to wrangle harassment, spam, objectionable content, and CSAM, and are looking to provide tooling for the Fediverse, as well as trauma resources and training for moderators.

Really fascinating interview, I learned a lot by talking to him.

224

Interesting thread that summarizes it well.

225
13
submitted 8 months ago* (last edited 8 months ago) by kixik@lemmy.ml to c/fediverse@lemmy.ml

https://disroot.org provides several decentralized, federated services, such as email and XMPP, as well as other cloud services... Not sure if this is the right place to ask, but I don't know anywhere else to ask either...

Is it having a certificate issue? Does anyone know about it? Any status updates?

Websites prove their identity via certificates. LibreWolf does not trust this site because it uses a certificate that is not valid for disroot.org. The certificate is only valid for p1lg502277.dc01.its.hpecorp.net.
 
Error code: SSL_ERROR_BAD_CERT_DOMAIN

But also:

disroot.org has a security policy called HTTP Strict Transport Security (HSTS), which means that LibreWolf can only connect to it securely. You can’t add an exception to visit this site.

The issue is most likely with the website, and there is nothing you can do to resolve it. You can notify the website’s administrator about the problem.

I also tested with ungoogled-chromium, and got pretty much the same thing...

Is anyone aware of this, or has disroot said anything about it?

Edit (sort of understood already, no issue with disroot at all): The issue only shows up when on the office VPN. It seems the office network is substituting its own cert for disroot's...

Edit: Solved. Yes it's the office replacing the original cert with its own, as someone suggested. Thanks to all.
