this post was submitted on 04 Aug 2023

Technology

cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

[–] lloram239@feddit.de 10 points 1 year ago

That drivel reads like something a chatbot would write, or maybe not, since a chatbot would probably be more informative and creative. It just repeats the same old nonsense that has been through the press a dozen times and is seriously lacking in substance.

the harms of generative AI, including racism, sexism

Has this person even bothered to touch any of the popular LLMs? They are all censored to the nth degree. Half the time they'll refuse to summarize Star Wars. Whole groups of topics are completely impossible to discuss with an LLM. Yes, you can run a local LLM with some of the censorship removed, or custom-train one, but that's a lot of extra effort. The models by default are extremely neutered.

For example, when a lawyer requested precedent case law, the tool made up cases out of whole cloth.

Lawyer uses a tool for a task it was never designed for and is, by design, fundamentally unable to be good at. Surprised Pikachu face.

In reality, these systems do not understand anything.

Utterly baseless claims without even an example are my favorite.

In another act of resistance, artists and computer scientists created Glaze,

And promotes snake oil.