This post was submitted on 29 Jan 2024

 

Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.

Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.

In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.

(page 2) 29 comments
[–] intensely_human@lemm.ee 2 points 9 months ago

Yup. Same as the way the rest of us use and learn from the internet. We basically wouldn’t have the internet as we know it if it weren’t 99% free content.

[–] EmergMemeHologram@startrek.website 1 points 9 months ago* (last edited 9 months ago)

Google and Amazon both have massive corpuses of this data that they would allow only themselves to use.

Anthropic isn’t saying this to help content creators; they’re saying it to kill OpenAI so they don’t have to actually compete.

[–] lvxferre@mander.xyz 1 points 9 months ago (2 children)

Most things that I could talk about were already addressed by other users (especially @OttoVonNoob@lemmy.ca), so I'll address a specific point - better models would skip this issue altogether.

The current models are extremely inefficient in their usage of training data. LLMs are a good example; Claude v2.1 was allegedly trained on hundreds of billions of words. Meanwhile, it's claimed that a 4yo child has heard somewhere between 13 million and 45 million words over their still-short life. That's roughly four orders of magnitude of difference, so even if someone claims that those bots are as smart as a 4yo*, they're still chewing through the training data without using it efficiently.
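
As a rough sanity check on that "four orders of magnitude" figure, here's a minimal back-of-the-envelope sketch. The exact numbers are assumptions standing in for the rough estimates quoted above (4×10^11 words for "hundreds of billions", and the midpoint of the 13-45 million range for the child):

```python
import math

# Assumed stand-ins for the rough figures quoted in the comment above
llm_training_words = 400e9    # "hundreds of billions" of words
child_words_by_age_4 = 30e6   # midpoint of the 13-45 million word range

# Ratio between the two, and its order of magnitude
ratio = llm_training_words / child_words_by_age_4
print(f"ratio: {ratio:,.0f}  (~10^{math.log10(ratio):.1f})")
# -> ratio: 13,333  (~10^4.1), i.e. roughly four orders of magnitude
```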

Once this is solved, the corpus size will get way, way smaller. Then it would be rather feasible to train those models without offending the precious desire for greed of the American media mafia, in a way that still fulfils the entitlement of the GAFAM mafia.

*I seriously doubt that, but I can't be arsed to argue this here - it's a drop in a bucket.

[–] argo_yamato@lemm.ee 1 points 9 months ago* (last edited 9 months ago)

Didn't read the article but boo-fucking-hoo. Pay the content creators.
