this post was submitted on 06 Nov 2023
123 points (100.0% liked)

World News

22058 readers

Breaking news from around the world.

News that is American but has an international facet may also be posted here.


Guidelines for submissions:

These guidelines will be enforced on a know-it-when-I-see-it basis.


For US News, see the US News community.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago
[–] GunnarRunnar@beehaw.org 20 points 1 year ago (4 children)

Why does it matter what the excuse is?

You shouldn't get a stereotype (or in this case I suppose propaganda?) when you give a neutral prompt.

[–] DarkGamer@kbin.social 10 points 1 year ago* (last edited 1 year ago) (3 children)

You shouldn’t get a stereotype (or in this case I suppose propaganda?) when you give a neutral prompt.

What I'm hearing is, "AI art shouldn't reflect reality." If this agent is repeating propaganda, it's propaganda that Palestinian kindergartens have been creating and putting out there on their own:

A West Bank kindergarten [Al-Tofula Kindergarten] has published videos showing children pretending to perform military drills with toy guns, clashing with and killing Israeli soldiers, and holding a mock funeral for a child who is killed and becomes a “martyr.” source

At the graduation ceremony of the Al-Hoda kindergarten in Gaza, pre-schoolers carrying mock guns and rifles simulated Islamic Jihad militants storming an Israeli building on "Al-Quds Street," capturing a child dressed in stereotypical garb as an Orthodox Jew and killing an "Israeli soldier." To the sounds of loud explosions and gunfire, the children, dressed in uniforms of the Islamic Jihad’s Al-Quds Brigades, attacked the building, placing a sign reading "Israel has fallen" in Hebrew and Arabic on the back of the "soldier," who lies prone on the ground, and leaving the stage with their "hostage." source

[–] GunnarRunnar@beehaw.org 9 points 1 year ago

WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’

By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’

So what reality is this model reflecting then?

[–] Pips@lemmy.sdf.org 7 points 1 year ago (2 children)

If you're going to make that claim, perhaps cite a source that isn't run by former Israeli intelligence, which creates a lot of propaganda and has been doing so for decades.

[–] DarkGamer@kbin.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

I don't trust MEMRI translations, but there is no translation needed to understand what is happening in the above footage. I'm interested in any sources that dispute the authenticity of the above, which your link does not. If you provide a credible one I will edit my post accordingly. It seems to me that this is very real.

[–] Pips@lemmy.sdf.org 4 points 1 year ago* (last edited 1 year ago) (1 children)

That's fair. Without getting too in the weeds on the issue, apparently the video is authentic, and it's something Israelis do as well, so it isn't really telling about either side of the conflict except to note that extremists will use children to push their views anywhere.

[–] DarkGamer@kbin.social 3 points 1 year ago* (last edited 1 year ago)

I wasn't aware of that, thanks for the link. It would be interesting to know how prevalent indoctrination/militarization of youth is in each of these nations. It can be hard to accurately judge magnitude in this conflict; it is so heavily propagandized.

[–] PerogiBoi@lemmy.ca 2 points 1 year ago (2 children)

There is absolutely no amount of data that could convince you otherwise. You’ve made it very clear you’ve made up your mind.

[–] DarkGamer@kbin.social 4 points 1 year ago* (last edited 1 year ago)

Maybe try presenting some rather than complaining about what you imagine I'd do, random internet stranger.

[–] Pips@lemmy.sdf.org 1 points 1 year ago

Oh is that why I followed up by saying the video is probably authentic?

[–] kbal@fedia.io 5 points 1 year ago

Somehow I get the feeling that equating "reality" with "propaganda created by kindergartens" is the rhetorical equivalent of dividing by zero.

[–] jarfil@beehaw.org 2 points 1 year ago

You shouldn't get a stereotype [...] when you give a neutral prompt.

Actually... you kind of should. A neutral prompt should provide the most commonly appearing match from the training set... which is basically what stereotypes are; an abstraction from the most commonly appearing match from a person's experience.
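
That "most commonly appearing match" idea can be sketched in a few lines. This is a toy illustration with made-up data, not how a diffusion model actually works: real models sample from a learned distribution, but an under-specified prompt still tends to land near the mode of the training data, which is exactly what picking the most frequent association does here.

```python
from collections import Counter

# Hypothetical toy "training set" of (prompt tag, image description)
# pairs. A real model trains on billions of caption/image pairs.
training_set = [
    ("beach", "sand and waves"),
    ("beach", "sand and waves"),
    ("beach", "sand and waves"),
    ("beach", "rocky cove"),
]

def neutral_prompt_match(tag, data):
    """Return the most frequent association for a tag -- a crude
    stand-in for what a model tends to emit given a neutral prompt."""
    matches = Counter(desc for t, desc in data if t == tag)
    return matches.most_common(1)[0][0]

print(neutral_prompt_match("beach", training_set))  # -> sand and waves
```

The "stereotype" falls out of the frequency count alone: no one wrote a rule saying beaches have waves, it's just the dominant pattern in the data.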

[–] sqgl@beehaw.org 2 points 1 year ago

Should, would, could. AI is trained on what it scrapes off the internet. It is only feeding the Augmented Idiocy which is already a problem.

[–] HappyMeatbag@beehaw.org 1 points 1 year ago (1 children)

To me, it should only “matter” for technical reasons - to help find the root of the problem and fix it at the source. If your roof is leaking, then fix the roof. Don’t become an expert on where to place the buckets.

You’re right, though. It doesn’t matter in terms of excusing or justifying anything. It shouldn’t have been allowed to happen in the first place.

[–] GunnarRunnar@beehaw.org 2 points 1 year ago (1 children)

I do agree that technical mistakes are interesting but with AI the answer seems to always be creator bias. Whether it's incomplete training sets or (one-sidedly) moderated results, it doesn't really matter. It pushes the narrative to certain direction, and people trust AIs to be impartial because they presume it's just a machine that interprets reality when it never is.
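
The one-sided moderation point can be made concrete with a toy sketch (entirely hypothetical data and filter names): if a safety filter is applied to outputs for one tag but not another, two tags backed by identical data produce different "neutral" results.

```python
from collections import Counter

# Toy corpus: tag -> scraped descriptions. Note both tags have
# IDENTICAL underlying data in this hypothetical example.
corpus = {
    "A": ["armed figure"] * 3 + ["street scene"] * 2,
    "B": ["armed figure"] * 3 + ["street scene"] * 2,
}

def moderate(descriptions, banned, apply_filter):
    """One-sided moderation: the filter only runs for some tags."""
    if not apply_filter:
        return descriptions
    return [d for d in descriptions if d not in banned]

def top_result(tag, filtered_tags):
    """Most frequent description after (possibly skipped) moderation."""
    cleaned = moderate(corpus[tag], banned={"armed figure"},
                       apply_filter=tag in filtered_tags)
    return Counter(cleaned).most_common(1)[0][0]

# Same data, different moderation -> different "neutral" outputs.
print(top_result("A", filtered_tags={"A"}))  # -> street scene
print(top_result("B", filtered_tags={"A"}))  # -> armed figure
```

The asymmetry in the output comes entirely from the human choice of where to apply the filter, which is the "collection of choices made by people" the thread is arguing about.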

[–] jarfil@beehaw.org 2 points 1 year ago (1 children)

it's just a machine that interprets reality

...as seen by the machine.

It's amazing how easily people seem to forget that last part; they wouldn't trust a person to be perfectly impartial, but somehow they expect an AI to be.

[–] GunnarRunnar@beehaw.org 3 points 1 year ago

It's amazing how easily people seem to forget that machines use the tools their creators provide. You can't trust AI to be impartial, because it never is; it's a collection of choices made by people.

This is such a bore, having this same conversation over and over. The same thing happened with NFTs and whatever else is at the height of its tech hype cycle. Don't buy into the hype; recognize both AI's potential and its shortcomings.