[-] Balinares@pawb.social 4 points 2 weeks ago

The default actually works pretty well these days.

Messing with the EFI partition, for instance by attempting to have two of those on separate disks, will probably cause you more pain than Windows will. As far as I understand, only one EFI partition can be configured in BIOS as the boot partition, so you will have to change the configuration in BIOS whenever you want to boot to the other OS.

Windows does have a history of changing the default EFI bootloader once in a while; however, your chosen bootloader is still there, just not marked as the default anymore. A Windows app like EasyUEFI will let you change the default back.
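You can also fix it from the Linux side. Here's a minimal sketch, assuming efibootmgr is installed and you run it as root; the entry IDs are hypothetical, so check your own `efibootmgr -v` output first:

```python
#!/usr/bin/env python3
"""Sketch: restore the UEFI BootOrder from Linux after Windows reshuffles it.

Assumes efibootmgr is installed and this runs as root. The entry IDs below
are hypothetical placeholders -- look at your own `efibootmgr -v` output.
"""
import subprocess

def current_entries() -> str:
    # `efibootmgr` with no arguments prints BootOrder and every Boot#### entry.
    return subprocess.run(["efibootmgr"], capture_output=True,
                          text=True, check=True).stdout

def restore_boot_order(order: list[str]) -> None:
    # `efibootmgr -o` rewrites the BootOrder NVRAM variable, e.g. "0001,0000".
    subprocess.run(["efibootmgr", "-o", ",".join(order)], check=True)

if __name__ == "__main__":
    print(current_entries())
    # Put the Linux bootloader (hypothetically entry 0001) back in front of
    # the Windows Boot Manager (hypothetically entry 0000).
    restore_boot_order(["0001", "0000"])
```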

[-] Balinares@pawb.social 9 points 2 weeks ago

The ONE time in half a decade I take a trip to Seattle...

"Possible cyberattack" plus "no threat actors or ransomware group has taken responsibility" sounds to me like someone fucked up and is timid about owning up.

[-] Balinares@pawb.social 8 points 2 weeks ago

She's pretty and deserves neck scritches. :) Also needs to see a farrier.

[-] Balinares@pawb.social 20 points 3 weeks ago

Windows 98 really sucked and running Unix at home became an option.

[-] Balinares@pawb.social 17 points 1 month ago

Was this a mistake?

Clarifying: are you asking if downloading the Proton Mail app through the Google Play Store gives Google access to your Proton account? If so, the answer is no.

[-] Balinares@pawb.social 20 points 1 month ago

All labels are imperfect, I guess. That's the nature of labels: a shorthand for a complex reality.

I don't know if the "trans" label is or isn't a good shorthand for the complex reality of your identity. But the important thing is: your identity is valid and yours, regardless of what labels you stick on it.

If you feel that you are a woman, be that partially or completely, then congratulations, girl, there you go. Or maybe what you feel like switches back and forth depending on your mood, or maybe you exist somewhere in the middle. That's valid too. There are other labels worth exploring in that space: non-binary, genderfluid... I suppose the only really useful thing here is to work out which ones resonate with you as a suitable shorthand for who you are.

Oh and who you are attracted to is irrelevant. Lots of trans gals are lesbians. Doesn't make them any less trans.

[-] Balinares@pawb.social 10 points 1 month ago

Firefox's stance on privacy, like Apple's, is to some extent branding. Arguably it always was. You should still use Firefox (or any other third party browser) if it works for you. Ecosystem diversity matters.

[-] Balinares@pawb.social 13 points 1 month ago

They didn't drop the don't be evil thing. It's still right there in the code of conduct where it always was; they just moved it to the end of the document, so it's the last thing that stays with you. See for yourself: https://abc.xyz/investor/google-code-of-conduct/

The supposed removal is a perfect example of the outrage-bait headlines I'm discussing in another comment.

[-] Balinares@pawb.social 5 points 1 month ago

It's not the company it once was, but there are also a lot of outrage-bait headlines about it that don't hold up well to scrutiny.

For instance, there have been a lot of Lemmy posts about Chrome supposedly removing the APIs used by adblockers. I figured I'd validate that on my own by switching to the version of uBlock that is based on the new API (Manifest V3). Well... As it turns out, it works fine. It's also faster.

Mind you, figuring out the actual facts behind each post gets exhausting, and people just shutting down and avoiding the problem space entirely makes some sort of sense. That, and it is healthy for an ecosystem to have alternatives, so I'd keep encouraging the use of Firefox and such on that basis alone.

[-] Balinares@pawb.social 13 points 1 month ago

This is actually an excellent question.

And for all the discussions on the topic in the last 24h, the answer is: until a postmortem is published, we don't actually know.

There are a lot of possible explanations for the observed events. Of course, one simple and very easy to believe explanation would be that the software quality processes and reliability engineering at CrowdStrike are simply below industry standards -- if we're going to be speculating for entertainment purposes, you can in fact imagine them to be as comically bad as you please; no one can stop you.

But as a general rule of thumb, I'd be leery of simple and easy to believe explanations. Of all the (non-CrowdStrike!) headline-making Internet infrastructure outages I've been personally privy to, and that were speculated about in places like Reddit or Lemmy, not one of the commenter speculations came close to the actual, and often fantastically complex, chain of events involved in the outage. (Which, for mysterious reasons, did not seem to keep the commenters from speaking with unwavering confidence.)

Regarding testing: testing buys you a certain necessary degree of confidence in the robustness of the software. But this degree of confidence will never be 100%, because in all sufficiently complex systems there will be unknown unknowns. Even if your test coverage is 100% -- every single instruction of the code is exercised by at least one test -- you can't be certain that every test accurately models the production environments that the software will be encountering. Furthermore, even exercising every single instruction is not sufficient protection on its own: the code might, for instance, fail in rare circumstances not covered by the test inputs.
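To make that concrete, here's a toy illustration (not CrowdStrike's code, obviously): a function with 100% line coverage that still ships a latent bug, because the one test never feeds it the kind of input production will.

```python
# Minimal illustration: 100% line coverage, yet a latent fault.
# The single test exercises every line of parse_version, but never with the
# sort of input that shows up in production, so the bug ships undetected.

def parse_version(raw: str) -> tuple[int, int]:
    """Parse 'major.minor' version strings."""
    major, minor = raw.split(".")   # crashes on '7' or '7.1.3' style inputs
    return int(major), int(minor)

def test_parse_version():
    # Every line above is executed by this test -> coverage reports 100%.
    assert parse_version("7.1") == (7, 1)

if __name__ == "__main__":
    test_parse_version()            # passes
    parse_version("7.1.3")          # ValueError: too many values to unpack
```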

For these reasons, one common best practice is to assume that the software will sooner or later ship with an undetected fault, and to therefore only deploy updates -- both of software and of configuration data -- in a staggered manner. The process looks something like this: a small subset of endpoints are selected for the update, the update is left to run in these endpoints for a certain amount of time, and the selected endpoints' metrics are then assessed for unexpected behavior. Then you repeat this process for a larger subset of endpoints, and so on until the update has been deployed globally. The early subsets are sometimes called "canary", as in the expression "canary in a coal mine".
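In code, the shape of that process looks roughly like the sketch below. The function names, wave sizes, and soak time are hypothetical stand-ins, not any vendor's real API; the point is the loop, not the details.

```python
# Toy sketch of a staggered ("canary") rollout loop. deploy_to, healthy and
# rollback are hypothetical stubs so the sketch is self-contained.
import time

def staggered_rollout(endpoints: list[str], update: str,
                      waves=(0.01, 0.05, 0.25, 1.0),
                      soak_seconds=3600) -> bool:
    done = 0
    for fraction in waves:
        target = max(1, int(len(endpoints) * fraction))
        wave = endpoints[done:target]              # next batch of endpoints
        for host in wave:
            deploy_to(host, update)                # hypothetical deployment call
        time.sleep(soak_seconds)                   # let the update "soak"
        if not all(healthy(host) for host in endpoints[:target]):
            rollback(endpoints[:target], update)   # hypothetical rollback
            return False                           # stop before the blast radius grows
        done = target
    return True                                    # reached 100% of the fleet

# Hypothetical helpers, stubbed out so the sketch runs on its own:
def deploy_to(host, update): pass
def healthy(host): return True
def rollback(hosts, update): pass
```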

Why such a staggered deployment did not appear to occur in the CrowdStrike outage is the unanswered question I'm most curious about. But, to give you an idea of the sort of stuff that may happen in general, here is a selection of plausible scenarios, some of which have been known to occur in the wild in some shape or form:

  • The update is considered low-risk (for instance, it's a minor configuration change without any code change) and there's a pressing reason to expedite the deployment, for instance if it addresses a zero-day vulnerability under active exploitation by adversaries.
  • The update activates a feature that an important customer wants now, the customer phoned a VP to express such, and the VP then asks the engineers, arbitrarily loudly, to expedite the deployment.
  • The staggered deployment did in fact occur, but the issue takes the form of what is colloquially called a time bomb, where it is only triggered later on by a change in the state of production environments, such as, typically, the passage of time. Time bomb issues are the nightmare of reliability engineers, and difficult to defend against. They are also, thankfully, fairly rare.
  • A chain of events resulting in a misconfiguration where all the endpoints, instead of only those selected as canaries, pull the update.
  • Reliability engineering not being up to industry standards.

Of course, not all of the above fit the currently known (or, really, believed-known) details of the CrowdStrike outage. It is, in fact, unlikely that the chain of events that resulted in the CrowdStrike outage will be found in a random comment on Reddit or Lemmy. But hopefully this sheds a small amount of light on your excellent question.

[-] Balinares@pawb.social 1 points 1 month ago

One funny thing about humans is that they aren't just gloriously fallible: they also get quite upset when that's pointed out. :)

Unfortunately, that's also how you end up with blameful company cultures that actively make reliability worse, because then your humans make just as many mistakes, but they hide them -- and you never get a chance to evolve your systems with the safeguards that would have prevented them.

