this post was submitted on 13 Aug 2024
7 points (100.0% liked)

Arch Linux

My config for reflector is currently set as follows:

# Set the output path where the mirrorlist will be saved.
--save /etc/pacman.d/mirrorlist

# Select the transfer protocol.
--protocol https

# Use only the most recently synchronized mirrors.
--latest 200

# Sort the mirrors by download rate.
--sort rate

# Return, at most, the following number of mirrors.
--number 20

# Print extra info.
--verbose
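
For what it's worth, the same options can also be passed directly as flags to test a run by hand (a one-liner built from the values above):

reflector --save /etc/pacman.d/mirrorlist --protocol https --latest 200 --sort rate --number 20 --verbose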

I have Reflector set to run as a Systemd service, so it will run when my computer boots.
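In case it matters: this is the stock reflector.service from the Arch reflector package which, as I understand it, just passes the contents of /etc/xdg/reflector/reflector.conf to reflector once per boot when enabled:

# Assuming the unit bundled with the reflector package:
systemctl enable reflector.service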

The "issue" is that I update my system as soon as I boot. Since Reflector is sorting mirrors by their measured download rate, I wonder if downloading updates, or simply doing any action that downloads data, would interfere with those measurements and cause Reflector to choose mirrors that may not be the fastest. I could simply wait for Reflector to finish before using the computer, but it takes quite a while to sort through 200 mirrors.

Is this concern justified? If so, are there ways to mitigate it that don't require me to wait for Reflector to finish? I've thought about setting it up as a Pacman hook so that it runs after updating, but then that relies on me performing an update for the mirrorlist to be refreshed, and it still leaves the concern of other actions eating up network bandwidth and skewing the measurements.
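
For reference, the kind of hook I was imagining would look roughly like this (a sketch only; the file name is made up, and matching every upgraded package is just one possible trigger):

# /etc/pacman.d/hooks/reflector.hook (hypothetical name and location)
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Refreshing the mirrorlist with reflector...
When = PostTransaction
Depends = reflector
Exec = /usr/bin/reflector @/etc/xdg/reflector/reflector.conf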

[–] nous@programming.dev 6 points 3 months ago (1 children)

No, and IMO you should not. It causes extra stress on the mirrors. If everyone did it every day, that would be a significant load for very little gain on the end user's side. Mirror speeds don't change often enough to warrant worrying about always being on the absolute fastest.

Especially if you are updating in the background anyway, what does it matter if you end up on a slightly slower mirror for a bit?

[–] Kalcifer@sh.itjust.works 1 points 2 months ago (1 children)

If everyone did it every day, that would be a significant load

Given that I update daily, I feel that the quick connection to the server to test its bandwidth at boot is rather insignificant.


Mirror speeds don't change often enough to warrant worrying about always being on the absolute fastest.

Have there been any credible studies that have looked at the reliability of the mirrors? The reliability would give one an idea of how often the mirrorlist should be refreshed.


Especially if you are updating in the background anyway

You're updating in the background on Arch Linux?

[–] nous@programming.dev 3 points 2 months ago (1 children)

Given that I update daily, I feel that the quick connection to the server to test its bandwidth at boot is rather insignificant.

But it is not just a quick connection. Speed tests, in order to be accurate, need to download a reasonable amount of data from each server. This is why:

it takes quite a while to sort through 200 mirrors.

Have there been any credible studies that have looked at the reliability of the mirrors? The reliability would give one an idea on how often they should refresh their mirrors.

You don't need one. If a mirror becomes unreliable, you can run reflector again to fix the issue. There is no need to run it constantly, and you don't need to be on the absolute fastest mirror every day. You will never notice the difference between the fastest mirror yesterday and the fastest one today, assuming there are no major problems with it. And if there are, that is when you run reflector again.

And reflector already comes with a weekly timer and service, which is plenty often enough.
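
Enabling that is a one-liner (assuming the units shipped with the Arch reflector package):

systemctl enable --now reflector.timer    # triggers reflector.service weekly
systemctl list-timers 'reflector*'        # shows when the next run is scheduled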

[–] Kalcifer@sh.itjust.works 0 points 2 months ago

Speed tests, in order to be accurate, need to download a reasonable amount of data from each server.

How much data does Reflector download for each test?


This is why:

it takes quite a while to sort through 200 mirrors.

It could simply be that Reflector isn't overly efficient at handling back-to-back tests. Perhaps there is a substantial idle period between tests that eats up a large chunk of the total test time. Anecdotally, my network activity monitor shows behaviour that suggests this: very short spikes with comparatively long idle periods in between.
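
If that is what is happening, raising the parallelism of the rating step might shorten the run considerably; something along these lines (assuming reflector's --threads option behaves as its help text describes) is what I'd try:

reflector --save /etc/pacman.d/mirrorlist --protocol https --latest 200 --sort rate --number 20 --threads 8 --verbose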


You don't need one.

If one doesn't want to make arbitrary decisions, then yes, evidence would be required.


You will never notice the difference between the fastest one yesterday and the fastest one today

Lost time is still lost time. I'd prefer to saturate my connection. Anything less is an inefficiency. Small losses in time add up.