yuu

joined 2 years ago
[–] yuu@group.lt 4 points 2 years ago

Just use a community-led or nonprofit-foundation-led distro: NixOS (better than Silverblue/Kinoite in every aspect they try to sell), Arch, or Debian.

For professional usage, you generally go with Ubuntu or some RHEL derivative.

 

Originally posted on https://emacs.ch/@yantar92/110571114222626270

Please help collecting statistics to optimize Emacs GC defaults

Many of us know that the Emacs defaults for garbage collection are rather ancient and often cause significant slowdowns. However, it is hard to know which alternative defaults would be better.

Emacs devs need help from users to obtain real-world data about Emacs garbage collection. See the discussion in https://yhetil.org/emacs-devel/87v8j6t3i9.fsf@localhost/

Please install https://elpa.gnu.org/packages/emacs-gc-stats.html and, after several weeks, send the generated statistics via email to emacs-gc-stats@gnu.org.

[–] yuu@group.lt 1 points 2 years ago* (last edited 2 years ago) (1 children)

When I was packaging Flatpaks, the greatest downside was:

No built-in package manager

There is a repo with shared dependencies, but it covers very few of them, so you need to package all the dependencies yourself... So I am personally not interested in packaging for Flatpak except on very rare occasions... Nix and Guix are definitely better solutions (except for the isolation aspect, which they do not provide as a built-in feature; you need to set it up manually), and they can be used on many distros; Nix even on macOS!

[–] yuu@group.lt 0 points 2 years ago (2 children)

Some of them will detect whether you are running under virtualization. For example, http://safeexambrowser.org/ by ETH Zurich.

Ironically enough, it is free software: https://github.com/SafeExamBrowser

 

I suppose it only makes sense to raise awareness of the benefits of the freely licensed software and services of the fediverse over the dangerous and unethical proprietary services out there, such as Reddit, which is now heading for an IPO. It happened with Twitter -> Mastodon; it can happen with Reddit -> Lemmy as well.

I also suppose that the users most likely to be open to trying it are free software and free culture users. Beyond that, it will take an effort in content creation, and content creators, to make it an attractive place.

What are your thoughts? What were the efforts so far? What are the challenges? Is it so hard to make people migrate?

 

cross-posted from !softwareengineering@group.lt: https://group.lt/post/46385

Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between the software development (Dev) and the IT operations (Ops), resulting in higher quality software and a shorter development lifecycle. Even though many resources are talking about DevOps practices, they are often inconsistent with each other on the best DevOps practices. Furthermore, they lack the needed detail and structure for beginners to the DevOps field to quickly understand them.

In order to tackle this issue, this paper proposes four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring. The patterns are both detailed enough and structured to be easily reused by practitioners and flexible enough to accommodate different needs and quirks that might arise from their actual usage context. Furthermore, the patterns are tuned to the DevOps principle of Continuous Improvement by containing metrics so that practitioners can improve their pattern implementations.


The paper actually identifies and includes two more patterns in addition to the four above (so six in total), although it does not describe them in detail:

  • Cloud Infrastructure, which includes cloud computing, scaling, infrastructure as code, ...
  • Pipeline, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present."

Overview of the pattern candidates and their relation

The paper is interesting for the following structure in describing the patterns:

  • Name: An evocative name for the pattern.
  • Context: Contains the context for the pattern providing a background for the problem.
  • Problem: A question representing the problem that the pattern intends to solve.
  • Forces: A list of forces that the solution must balance out.
  • Solution: A detailed description of the solution for our pattern’s problem.
  • Consequences: The implications, advantages and trade-offs caused by using the pattern.
  • Related Patterns: Patterns which are connected somehow to the one being described.
  • Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.
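
As a rough illustration of what the Metrics part can look like in practice, here is a minimal Python sketch (not the paper's own metric definitions) that computes two commonly cited delivery metrics, deployment frequency and mean lead time for changes, from a hypothetical list of deployment records; all names and data in it are made up for the example.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical deployment record; purely illustrative, not from the paper.
@dataclass
class Deployment:
    commit_time: datetime   # when the change was committed
    deploy_time: datetime   # when the change reached production
    succeeded: bool


def deployment_frequency(deploys: list[Deployment], days: float) -> float:
    """Successful deployments per day over an observation window of `days` days."""
    return sum(d.succeeded for d in deploys) / days


def mean_lead_time_hours(deploys: list[Deployment]) -> float:
    """Average time from commit to successful production deployment, in hours."""
    lead_times = [
        (d.deploy_time - d.commit_time).total_seconds() / 3600
        for d in deploys
        if d.succeeded
    ]
    return mean(lead_times) if lead_times else float("nan")


if __name__ == "__main__":
    history = [
        Deployment(datetime(2023, 6, 1, 9, 0), datetime(2023, 6, 1, 15, 0), True),
        Deployment(datetime(2023, 6, 2, 10, 0), datetime(2023, 6, 3, 11, 0), True),
        Deployment(datetime(2023, 6, 4, 8, 0), datetime(2023, 6, 4, 9, 0), False),
    ]
    print(f"deployments/day: {deployment_frequency(history, days=7):.2f}")
    print(f"mean lead time (hours): {mean_lead_time_hours(history):.1f}")
```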
 

cross-posted from: https://group.lt/post/46053

A group of astronomers poring over data from the James Webb Space Telescope (JWST) has glimpsed light from ionized helium in a distant galaxy, which could indicate the presence of the universe’s very first generation of stars.

These long-sought, inaptly named “Population III” stars would have been ginormous balls of hydrogen and helium sculpted from the universe’s primordial gas. Theorists started imagining these first fireballs in the 1970s, hypothesizing that, after short lifetimes, they exploded as supernovas, forging heavier elements and spewing them into the cosmos. That star stuff later gave rise to Population II stars more abundant in heavy elements, then even richer Population I stars like our sun, as well as planets, asteroids, comets and eventually life itself.

About 400,000 years after the Big Bang, electrons, protons and neutrons settled down enough to combine into hydrogen and helium atoms. As the temperature kept dropping, dark matter gradually clumped up, pulling the atoms with it. Inside the clumps, hydrogen and helium were squashed by gravity, condensing into enormous balls of gas until, once the balls were dense enough, nuclear fusion suddenly ignited in their centers. The first stars were born.

Astronomer Walter Baade sorted stars in our galaxy into types I and II in 1944. The former includes our sun and other metal-rich stars; the latter contains older stars made of lighter elements. The idea of Population III stars entered the literature decades later... Their heat or explosions could have reionized the universe.

A color-composite NIRCam image of the RXJ2129 galaxy cluster.


 

cross-posted from c/softwareengineering@group.lt: https://group.lt/post/44632

This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.

When people like me talk about scaling... It's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technology available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?

...we face the "worst" kind of scaling issue, in my perception: the kind you don't see coming (e.g. by noticing the software getting slower day by day, or by watching the storage pool fill up). Instead, it appears out of the blue.

The hardest scaling issue is: scaling human power.

Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.

There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!

I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...

There are two primary blockers that prevent scaling human resources. The first one is trust: because we can't yet afford hiring employees who work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical. Unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...

TL;DR: Codeberg has sustainability issues with scaling because it is a nonprofit with very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work, so it needs more people volunteering, and more money.

 


cross-posted from: https://group.lt/post/30446

1652 contributors, who authored 30371 commits since the previous release.

NixOS is already known as the most up-to-date distribution, while also being the distribution with the most packages.

This release saw 16678 new packages and 14680 updated packages in nixpkgs. We also removed 2812 packages in an effort to keep the package set maintainable and secure. In addition to packages, the NixOS distribution also features modules and tests that make it what it is. This release brought 91 new modules and removed 20. In that process, we added 1322 options and removed 487.