[-] yuu@group.lt 4 points 1 year ago

Just use a community-led or non-profit-foundation-led distro: NixOS (better than Silverblue/Kinoite in every aspect they try to sell), Arch, or Debian.

For professional usage, you generally go with Ubuntu or some RHEL derivative.

1
submitted 1 year ago by yuu@group.lt to c/emacs@lemmy.ml

Originally posted on https://emacs.ch/@yantar92/110571114222626270

Please help collecting statistics to optimize Emacs GC defaults

Many of us know that the Emacs defaults for garbage collection are rather ancient and often cause significant slowdowns. However, it is hard to know which alternative defaults would be better.

Emacs devs need help from users to obtain real-world data about Emacs garbage collection. See the discussion in https://yhetil.org/emacs-devel/87v8j6t3i9.fsf@localhost/

Please install https://elpa.gnu.org/packages/emacs-gc-stats.html and send the generated statistics via email to emacs-gc-stats@gnu.org after several weeks.

[-] yuu@group.lt 1 points 1 year ago* (last edited 1 year ago)

When I was packaging Flatpaks, the greatest downside was:

No built-in package manager

There is a repo with shared dependencies, but it covers very few of them, so you end up packaging all the dependencies yourself... So I am personally not interested in packaging for Flatpak other than on very rare occasions... Nix and Guix are definitely better solutions (except for the isolation aspect, which they don't provide as a built-in feature; you need to set it up manually), and they can be used on many distros; Nix even on macOS!

[-] yuu@group.lt 0 points 1 year ago

Some of them will detect whether you are running inside a virtual machine. For example, http://safeexambrowser.org/ by ETH Zurich.

Ironically enough, it is free software https://github.com/SafeExamBrowser
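
For context on how such tools typically do this: the usual trick is to look for hypervisor fingerprints that the guest OS exposes. Below is a minimal Python sketch of that idea on Linux. This is not SafeExamBrowser's actual implementation; the file paths and vendor strings are common Linux conventions, used purely for illustration.

```python
# Rough sketch of a common VM-detection heuristic on Linux.
# Not SafeExamBrowser's actual logic; it only illustrates the general idea
# of looking for hypervisor fingerprints exposed to the guest OS.

def looks_like_vm() -> bool:
    """Return True if common hypervisor fingerprints are present."""
    # 1. The CPUID "hypervisor" bit shows up in /proc/cpuinfo flags.
    try:
        with open("/proc/cpuinfo") as f:
            if "hypervisor" in f.read():
                return True
    except OSError:
        pass

    # 2. DMI strings often carry the virtualization vendor's name.
    vendors = ("vmware", "virtualbox", "kvm", "qemu", "xen", "hyper-v")
    for path in ("/sys/class/dmi/id/product_name",
                 "/sys/class/dmi/id/sys_vendor"):
        try:
            with open(path) as f:
                text = f.read().lower()
            if any(v in text for v in vendors):
                return True
        except OSError:
            pass

    return False


if __name__ == "__main__":
    print("virtualized" if looks_like_vm() else "probably bare metal")
```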

3
submitted 2 years ago by yuu@group.lt to c/devops@lemmy.ml

cross-posted from !softwareengineering@group.lt: https://group.lt/post/46385

Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between the software development (Dev) and the IT operations (Ops), resulting in higher quality software and a shorter development lifecycle. Even though many resources are talking about DevOps practices, they are often inconsistent with each other on the best DevOps practices. Furthermore, they lack the needed detail and structure for beginners to the DevOps field to quickly understand them.

In order to tackle this issue, this paper proposes four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring. The patterns are both detailed enough and structured to be easily reused by practitioners and flexible enough to accommodate different needs and quirks that might arise from their actual usage context. Furthermore, the patterns are tuned to the DevOps principle of Continuous Improvement by containing metrics so that practitioners can improve their pattern implementations.


In addition to the four above, the article identifies (but does not describe in the same detail) two other patterns, so six in total:

  • Cloud Infrastructure, which includes cloud computing, scaling, infrastructure as code, ...
  • Pipeline, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present." (a toy sketch of the pipeline idea follows below)
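
To make the Pipeline idea concrete: the pattern boils down to an ordered chain of stages where each stage must succeed before the next one runs. A minimal Python sketch of that structure; the stage commands are hypothetical placeholders, and a real pipeline would of course live in a CI system rather than a script.

```python
# Toy sketch of the Pipeline pattern: ordered stages, each must succeed
# before the next one runs. The make targets below are placeholders.
import subprocess
from typing import Callable

Stage = tuple[str, Callable[[], bool]]


def sh(cmd: list[str]) -> bool:
    """Run a command and report whether it succeeded."""
    return subprocess.run(cmd).returncode == 0


# Hypothetical stages combining Continuous Integration and Deployment Automation.
PIPELINE: list[Stage] = [
    ("build",  lambda: sh(["make", "build"])),
    ("test",   lambda: sh(["make", "test"])),
    ("deploy", lambda: sh(["make", "deploy"])),
]


def run_pipeline(stages: list[Stage]) -> bool:
    for name, step in stages:
        print(f"--- stage: {name} ---")
        if not step():
            print(f"stage '{name}' failed; aborting")
            return False
    return True


if __name__ == "__main__":
    raise SystemExit(0 if run_pipeline(PIPELINE) else 1)
```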

(Figure: Overview of the pattern candidates and their relation)

The paper is interesting for the structure it uses to describe each pattern (a small data-structure sketch follows the list):

  • Name: An evocative name for the pattern.
  • Context: Contains the context for the pattern providing a background for the problem.
  • Problem: A question representing the problem that the pattern intends to solve.
  • Forces: A list of forces that the solution must balance out.
  • Solution: A detailed description of the solution for our pattern’s problem.
  • Consequences: The implications, advantages and trade-offs caused by using the pattern.
  • Related Patterns: Patterns which are connected somehow to the one being described.
  • Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.
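
As a reading aid, the same template can be captured as a small data structure. A Python sketch; the field names mirror the paper's sections, while the example values are my own paraphrase, not quoted from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class DevOpsPattern:
    """Mirrors the paper's pattern template."""
    name: str                                  # evocative name
    context: str                               # background for the problem
    problem: str                               # question the pattern answers
    forces: list[str] = field(default_factory=list)
    solution: str = ""
    consequences: list[str] = field(default_factory=list)
    related_patterns: list[str] = field(default_factory=list)
    metrics: list[str] = field(default_factory=list)


# Illustrative instance; the wording is mine, not the paper's.
continuous_integration = DevOpsPattern(
    name="Continuous Integration",
    context="Several developers contribute to a shared codebase.",
    problem="How can changes be merged frequently without breaking the product?",
    forces=["fast feedback", "merge conflicts", "build stability"],
    solution="Merge small changes often; verify every merge with an automated build and test run.",
    consequences=["defects surface earlier", "requires build automation"],
    related_patterns=["Version Control Everything", "Pipeline"],
    metrics=["build success rate", "time from commit to green build"],
)
```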
1
submitted 2 years ago* (last edited 2 years ago) by yuu@group.lt to c/libre_culture@lemmy.ml

cross-posted from c/softwareengineering@group.lt: https://group.lt/post/44632

This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.

When people like me talk about scaling... It's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technology available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?

...we face the "worst" kind of scaling issue in my perception. That is, the kind you don't see coming (e.g. through the software getting slower day by day, or through watching the storage pool fill up); instead, it appears out of the blue.

The hardest scaling issue is: scaling human power.

Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.

There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!

I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...

...two primary blockers that prevent scaling human resources. The first one is: trust. Because we can't yet afford hiring employees that work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical. Unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...

TL;DR: Codeberg has sustainability issues with scaling because it is a nonprofit with very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work, so it needs more people volunteering and more money.
