Seven years later, Kyle's argument is that AirSpace has turned into what he now calls Filterworld, a term he uses to describe how algorithmic recommendations have become one of the most dominant forces in culture and, as a result, have pushed society to converge on a kind of soulless sameness in its tastes.

[–] souperk@reddthat.com 3 points 8 months ago* (last edited 8 months ago) (2 children)

IMO it's never about the tool, but who controls it. For example, nuclear energy is a neutral thing on its own: when used to generate power it's (arguably) a net positive; when used for bombing it's a net negative.

The same goes for algorithms: when they are used to save lives at hospitals it's a net positive; when used to harvest people's attention it becomes a net negative.

(For anyone interested, I have MAB (multi-armed bandit) algorithms in mind; they can be used to prioritize patients at hospitals or to make recommendations on social media. You can guess which application is more commonly used, better researched, and better funded. A sketch of one follows below.)
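
For the curious, here is a minimal sketch of the kind of MAB algorithm meant here: epsilon-greedy, the textbook strategy. All names and rewards are illustrative; the point is that the same instructions serve both applications, and only the definition of "reward" changes.

```python
import random

class EpsilonGreedyBandit:
    """Textbook epsilon-greedy multi-armed bandit (illustrative sketch).

    An "arm" could be a triage protocol for the next patient or a post
    to show the next user -- the algorithm itself doesn't care.
    """

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # times each arm was tried
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        # Explore a random arm with probability epsilon...
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        # ...otherwise exploit the best-known arm.
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incrementally update the running mean reward for this arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In a hospital trial the reward might be a recovery outcome; in a feed it's minutes of attention captured. Same code, very different incentives.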

[–] pelespirit@sh.itjust.works 1 points 8 months ago

> IMO it’s never about the tool, but who controls it

I 100% agree. It's extremely powerful and covert, though, and hospitals could be using it for both good and bad as well.

[–] rrwo@floss.social 0 points 8 months ago (1 children)

@souperk @pelespirit

> For example, nuclear energy is a neutral thing on its own, when used to generate power it’s (arguably) a net positive...

It's more complicated than that.

Mining uranium has side effects, usually borne by poorer communities.

The fuel has to be handled safely, as does the waste, which has to be stored safely for thousands of years.

Nuclear plants have to be designed and built well.

Even the most benign democracies have made a mess of those issues.

1/n

[–] rrwo@floss.social 0 points 8 months ago (1 children)

@souperk @pelespirit

> The same goes for algorithms, when they are used to save lives at hospitals it’s a net positive

Again, more complicated.

Are the algorithms mathematically sound, or just AI/machine learning magic fairy dust?

Do the algorithms have implicit biases against poor people, or those with darker skin or who live in certain postcodes?

2/n

[–] souperk@reddthat.com 1 points 8 months ago* (last edited 8 months ago)

> Again, more complicated.

It doesn't have to be.

> Are the algorithms mathematically sound, or just AI/machine learning magic fairy dust?

MAB algorithms lie in the middle. They are a mathematically sound way to explore the unknown and make reasonable decisions given whatever context is available.

There have been a few hospital trials with success, but progress is slow and funding is low. There are a few really interesting papers if you are interested in reading more.
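
To make "mathematically sound" concrete, here is a sketch of UCB1, a classic MAB rule with a provable regret bound (a guarantee on how much worse it does than always picking the best arm). All names are illustrative:

```python
import math

def ucb1_select(counts, values, total_pulls):
    """Pick an arm by Upper Confidence Bound (UCB1) -- illustrative sketch.

    counts[a] is how many times arm a has been tried, values[a] its
    running mean reward. Each arm's score is its mean reward plus an
    uncertainty bonus that shrinks as the arm is tried more, so the
    math itself drives exploration -- no magic fairy dust needed.
    """
    # Try every arm at least once before comparing scores.
    for arm, c in enumerate(counts):
        if c == 0:
            return arm

    def score(arm):
        bonus = math.sqrt(2 * math.log(total_pulls) / counts[arm])
        return values[arm] + bonus

    return max(range(len(counts)), key=score)
```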

> Do the algorithms have implicit biases against poor people, or those with darker skin or who live in certain postcodes?

In a sense, it's no different from laws that discriminate against people of color or other marginalized communities. The fact that a bunch of super-privileged lawmakers create laws that disproportionately harm us does not mean that the concept of law is flawed.

You've got to ask yourself why the algorithm was given that information in the first place and, more importantly, who gave it.

What we call an algorithm is actually two things: a set of instructions (the actual algorithm) and a set of parameters. The instructions explain how to use those parameters to make a decision. The parameters may or may not be biased; it all depends on the process used to generate them.
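
A toy illustration of that split (all names and numbers invented): the instructions below never change, but swapping one set of parameters for another changes who gets approved.

```python
def decide(applicant, weights, threshold):
    # The "instructions": a fixed, neutral scoring rule.
    score = sum(weights[k] * applicant[k] for k in weights)
    return score >= threshold

# The "parameters": this is where the bias lives, not in decide().
fair_weights   = {"income": 1.0, "debt": -1.0}
biased_weights = {"income": 1.0, "debt": -1.0, "postcode_risk": -5.0}

applicant = {"income": 3.0, "debt": 1.0, "postcode_risk": 1.0}
print(decide(applicant, fair_weights, threshold=1.5))    # True
print(decide(applicant, biased_weights, threshold=1.5))  # False
```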

AI in particular uses a process called training, in which people make decisions and another algorithm adjusts the parameters so those decisions can be generalized and repeated by the AI. When biased people make biased decisions, they are going to train an AI to make biased decisions.
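
A minimal sketch of that training loop (purely illustrative data): a perceptron-style trainer nudges the parameters until the model reproduces past human decisions. If those decisions rejected people by postcode, the learned parameters encode that, and nobody ever had to write the bias down.

```python
def train(examples, labels, lr=0.1, epochs=100):
    """Adjust parameters until the model repeats the given decisions."""
    w = [0.0] * len(examples[0])
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(len(w)):
                w[i] += lr * (y - pred) * x[i]  # nudge toward the label
    return w

# Features: [is_qualified, lives_in_redlined_postcode]
examples = [[1, 0], [1, 1], [0, 0], [1, 1]]
labels   = [1, 0, 0, 0]  # equally qualified people rejected by postcode
print(train(examples, labels))  # the postcode weight comes out negative
```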

Unfortunately, that's our reality: biased people make biased decisions, and as a result we have biased laws and biased algorithms.

By the way, this is what the author calls an "algorithm cleanse", and it's bureaucracy supercharged. Why hire someone to reject applicants of color when you can build an algorithm to do that? Making a legal case against that is much harder, and the legal system isn't ready to understand the nuances of such a case.

However, in contrast to laws, we marginalized people can create our own "algorithms" that are, to the best of our effort, not biased. The fediverse is living proof of this. Why fight the system when we can make our own?