Which of the following sounds more reasonable?

  • I shouldn't have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn't have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that's blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of technological development.

[–] aezart@lemmy.world 2 points 1 year ago (1 children)

If an LLM was trained on a single page of GPL code or a single piece of CC-BY art, the entire set of model weights and any outputs from the model must be licensed the same way. Otherwise this whole thing is just blatant license laundering.

[–] paperbenni@lemmy.world 2 points 1 year ago

This depends on how transformative the act of encoding the data in an LLM is. If you have overfitting out the ass and the model can recite its training material verbatim then it's an illegal copy of the training material. If the model can only output content that would be considered transformative if a human with knowledge of the training data created it, then so is the model.
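
For illustration, here's a toy sketch of that overfitting test: flag an output as memorized when it shares a long verbatim word span with the training text. The 8-word threshold and all the strings are invented for the example; real memorization audits are far more involved.

```python
# Toy memorization check: does the output reproduce a long verbatim
# span of the training text? (Illustration only, not a real audit.)

def ngrams(text: str, n: int) -> set[str]:
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_memorized(training_text: str, output: str, n: int = 8) -> bool:
    """True if the output shares any n-word span with the training text."""
    return bool(ngrams(training_text, n) & ngrams(output, n))

corpus = "the quick brown fox jumps over the lazy dog near the riverbank"
recital = "as I recall the quick brown fox jumps over the lazy dog near here"
paraphrase = "a fast auburn fox leapt above a sleepy hound by the river"

print(looks_memorized(corpus, recital))     # True  -> verbatim span, "illegal copy"
print(looks_memorized(corpus, paraphrase))  # False -> arguably transformative
```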

[–] eerongal@ttrpg.network 2 points 1 year ago (1 children)

I'm not sure what you're trying to say here; LLMs are absolutely under the umbrella of AI, they are 100% a form of AI. They are not AGI/strong AI, but they are a form of AI nonetheless. There's no "reframing" necessary.

No matter how you frame it, though, there's always going to be a battle between the entities that want to use a large amount of data for profit (corporations) and the people who produce said content.

[–] Silinde@lemmy.world 1 points 1 year ago (1 children)

True, and this is the annoying thing about people unqualified to talk about AI giving their opinions online. People not involved in the industry hear "AI" and expect HAL-9000 or Ava from Ex Machina, rather than the software the weather service uses to predict if it will rain tomorrow, or the models your doctor uses to help determine your risk of heart disease.

This is compounded further when someone makes a video simplifying what an LLM is and mentioning that the latest models use it, which leads to the chimes of "bUt iT'S jUsT aN Llm BrO iTs nOt AI" and "ItS jUsT a LOaD oF DaTa aND aLGorItHMs, tHaTs NoT AI". A little bit of knowledge is a dangerous thing.

[–] jumperalex@lemmy.world 1 points 1 year ago

or that people are only exposed to trivial/childish publicly available examples.

[–] pensivepangolin@lemmy.world 2 points 1 year ago (5 children)

I think it’s the same reason the CEOs of these corporations are clamoring about their own products being doomsday devices: it gives them massive power over crafting regulatory policy, thus letting them make sure it’s favorable to their business interests.

Even more frustrating when you realize, and feel free to correct me if I’m wrong, these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data to throw at them.

[–] assassin_aragorn@lemmy.world 2 points 1 year ago (3 children)

The funniest thing I've seen on this is OpenAI's CEO, Sam Altman, talking about how he's a bit afraid of what they've created and how it needs limitations -- and then, when the EU begins to look at regulations, he immediately rejects the concept, to the point of threatening to leave the European market. It's incredibly transparent what they're doing.

Unfortunately I don't know enough about the technology to say if the algorithms and concepts themselves are novel, but without a doubt they couldn't exist without modern computing power.

[–] FancyGUI@lemmy.fancywhale.ca 2 points 1 year ago* (last edited 1 year ago)

I can tell you for a fact that there's nothing fundamentally new going on -- only the MASSIVE investment from Microsoft that lets them train on an insane amount of data. I'm no "expert" per se, but I've been studying and working with AI for over a decade, so feel free to judge my reply as you please.

[–] Peruvian_Skies@kbin.social 2 points 1 year ago (1 children)

The concepts themselves are some 30 years old, but storage capacity and processing speed have only recently reached a point where generative AI outperforms competing solutions.

But regarding the regulation thing, I don't know what was said or proposed, and this is just me playing devil's advocate: but could it be that the CEO simply doesn't agree with the specifics of the proposed regulations while still believing that some other, different kind of regulation should exist?

[–] rainh@kbin.social 4 points 1 year ago (1 children)

Certainly could be, but probably an optimistic take. Most likely they're just trying to do what corporations have been doing for ages, which is to weaponize government policy to prevent competition. They don't want restrictions that will materially impact their product, they want restrictions that will materially impact startups to make it more difficult for them to intrude on the established space.

[–] jumperalex@lemmy.world 2 points 1 year ago

I think if you fed your response into ChatGPT and asked it to summarize in two words it would return,

"Regulatory Capture"

[–] MxM111@kbin.social 0 points 1 year ago (1 children)

And what are they doing? As a reminder, OpenAI is a non-profit.

[–] Starfarer@kbin.social 2 points 1 year ago (1 children)

I thought they moved to for-profit back in 2019?

[–] MxM111@kbin.social -2 points 1 year ago (1 children)
[–] BetaDoggo_@lemmy.world 1 points 1 year ago

They're a non-profit managed by a for-profit, which has received most of its funding from another for-profit.

[–] ywein@lemmy.ml 2 points 1 year ago

LLMs are pretty novel. They were made possible by the invention of the Transformer architecture, which operates significantly differently from, say, an RNN.
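
For anyone curious, a toy numpy sketch of that structural difference (random untrained weights and toy sizes, all invented for illustration): an RNN pushes tokens through a sequential loop where each step only sees a compressed history, while Transformer self-attention lets every token score every other token in one parallel matrix operation.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8                     # 5 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d))     # toy token embeddings

# RNN-style: a sequential loop; each hidden state depends on the
# previous one, so long-range information survives only via the chain.
W_h = rng.normal(size=(d, d)) * 0.1
W_x = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(seq_len):
    h = np.tanh(W_h @ h + W_x @ x[t])  # token t sees only compressed history

# Transformer-style self-attention: every token scores every other
# token directly, in one parallel matrix operation.
W_q = rng.normal(size=(d, d)) * 0.1
W_k = rng.normal(size=(d, d)) * 0.1
W_v = rng.normal(size=(d, d)) * 0.1
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = (Q @ K.T) / np.sqrt(d)                 # (5, 5) pairwise scores
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
out = weights @ V                               # each row mixes ALL tokens at once

print(h.shape, out.shape)             # (8,) vs (5, 8)
```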

[–] Phantom_Engineer@lemmy.ml 1 points 1 year ago

The fear mongering is pretty ridiculous.

"AI could DESTROY HUMANITY. It's like the ATOMIC BOMB! Look at it's RAW POWER!"

AI generates an image of cats playing canasta.

"By God...."

[–] assassinatedbyCIA@lemmy.world 1 points 1 year ago (1 children)

It also plays into the hype cycle they’re trying to create. Saying you’ve made an AI is more likely to capture the attention of the masses than saying you have an LLM. Ditto that point for the existential doomerism the CEOs have. Saying your tech is so powerful that it might lead to humanity’s extinction does wonders in building hype.

[–] pensivepangolin@lemmy.world 1 points 1 year ago

Agreed. And all you really need to do is browse the headlines from even respectable news outlets to see how well it’s working. It’s article after article parroting whatever claims these CEOs make at face value, at least 50% of the time. It’s mind-numbing.

[–] eerongal@ttrpg.network 0 points 1 year ago (3 children)

> Even more frustrating when you realize, and feel free to correct me if I’m wrong, these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data to throw at them.

This is 100% true. LLMs, neural networks, Markov chains, gradient descent, and so on down the line are nothing particularly new. They've collectively been studied academically for 30+ years. It's only recently that we've been able to throw huge amounts of data, computing capacity, and tuning time at said models to achieve results that were unthinkable 10-ish years ago.

There have been efficiencies, breakthroughs, tweaks, and changes over this time too, but that's to be expected. Largely, though, it's the sheer size/scale that's only recently become achievable.
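
To make the "nothing new" point concrete, here's gradient descent in toy form, fitting a single weight; the data and learning rate are invented for the example, but the update rule is the same decades-old idea whether you're tuning one parameter or hundreds of billions.

```python
# Gradient descent on a one-parameter model: the core idea predates
# the current boom by decades; only the scale is new.

DATA = [(1, 3), (2, 6), (3, 9)]  # toy points on the line y = 3x

def loss(w):
    """Mean squared error of the model y = w * x on the toy data."""
    return sum((w * x - y) ** 2 for x, y in DATA) / len(DATA)

def grad(w):
    """Analytic derivative of the loss above with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in DATA) / len(DATA)

w, lr = 0.0, 0.05
for step in range(100):
    w -= lr * grad(w)   # the same update rule at any parameter count

print(round(w, 4), round(loss(w), 6))   # converges to ~3.0, loss ~0
```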

[–] pensivepangolin@lemmy.world 1 points 1 year ago

Okay, I’m glad I’m not too far off the mark then (I’m not an AI expert/it’s not my field of study).

I think this also points to/is a great example of another worrying trend: the consolidation of computing power in the hands of a few large companies. Without even factoring in the development of true AI/whether that can or will happen anytime soon, the LLMs really show off the massive scale of both computational power consolidation and data harvesting by only a very few entities. I’m guessing I’m not alone here in finding that increasingly concerning, particularly since a lot of development is driving towards surveillance applications.

[–] jumperalex@lemmy.world 0 points 1 year ago

By that logic there was nothing novel about solid-state transistors, since they just did the same thing as vacuum tubes; no innovation there, I guess. No new ideas came from finally having a way to pack cooler, less power-hungry, smaller components together.

[–] FunnyUsername@lemmy.world -2 points 1 year ago (1 children)

We all remember SmarterChild... right?

[–] MercuryUprising@lemmy.world 1 points 1 year ago

I remember Tay

[–] itsnotlupus@lemmy.world 1 points 1 year ago (1 children)

I'll note that there are plenty of models out there that aren't LLMs and that are also being trained on large datasets gathered from public sources.

Image generation models, music generation models, etc.
Heck, it doesn't even need to be about generation. Music recognition and image recognition models can also be trained on the same sorts of datasets, and arguably come with similar IP rights questions.

It's definitely a broader topic than just LLMs, and attempting to enumerate exhaustively the flavors of AIs/models/whatever that should be part of this discussion is fairly futile given the fast evolving nature of the field.

[–] themarty27@lemmy.sdf.org 0 points 1 year ago* (last edited 1 year ago) (2 children)

Still, all those models are, even conceptually, far removed from AI. They would most properly be called Machine Learning Models (MLMs).

[–] itsnotlupus@lemmy.world 2 points 1 year ago

The term AI was coined many decades ago to encompass a broad set of difficult problems, many of which have become less difficult over time.

There's a natural temptation to remove solved problems from the set of AI problems, so playing chess is no longer AI, diagnosing diseases through a set of expert system rules is no longer AI, processing natural language is no longer AI, and maybe training and using large models is no longer AI nowadays.

Maybe we do this because we view intelligence as a fundamentally magical property, and anything that has been fully described has necessarily lost all its magic in the process.
But that means that "AI" can never be used to label anything that actually exists, only to gesture broadly at the horizon of what might come.
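
A toy example of one of those "solved, so no longer AI" systems: a rule-based diagnoser in the 1980s expert-system style. The rules and symptoms here are invented for illustration (and obviously not medical advice).

```python
# A toy rule-based expert system -- unambiguously "AI" in the 1980s,
# mundane today. Each rule fires when all its conditions are present.

RULES = [
    ({"fever", "cough", "fatigue"}, "possible flu"),
    ({"sneezing", "runny nose"}, "possible common cold"),
    ({"fever", "stiff neck"}, "urgent: see a doctor"),
]

def diagnose(symptoms: set[str]) -> list[str]:
    """Return the verdict of every rule whose conditions are all met."""
    return [verdict for conditions, verdict in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "fatigue", "headache"}))
# ['possible flu']
```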

[–] gammasfor@sh.itjust.works -1 points 1 year ago

They would, but that doesn't sound as sexy to investors.

That's what it all comes down to when businesses use words like AI, big data, blockchain, etc. It's not about whether it's an accurate descriptor; it's about tricking dumb millionaires into throwing money at them.

[–] Iceblade02@lemmy.world 1 points 1 year ago (1 children)

IMO content created by either AI or LLMs should carry a special license and be considered AI public domain (unless the producers can prove they own all the content the AI was trained on). Commercial content made from material marked with this license would be subject to a flat % tax applied to the product price, earmarked for a fund that distributes to human creators (coders, writers, musicians, etc.).
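
As a rough sketch of the mechanics (the 3% rate, function name, and prices are all invented for illustration):

```python
# Back-of-the-envelope for the proposed levy: a product built on
# "AI public domain" content pays a flat cut into a creators' fund.

LEVY_RATE = 0.03  # hypothetical flat 3% rate

def creator_fund_cut(product_price: float, uses_ai_pd_content: bool) -> float:
    """Amount of the sale price earmarked for the human-creators fund."""
    return product_price * LEVY_RATE if uses_ai_pd_content else 0.0

print(creator_fund_cut(49.99, True))    # ~1.50 into the fund
print(creator_fund_cut(49.99, False))   # 0.0 -- no AI-PD content used
```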

[–] kklusz@lemmy.world 2 points 1 year ago (1 children)

What about LLM generated content that was then edited by a human? Surely authors shouldn't lose copyright over an entire book just because they enlisted the help of LLMs for the first draft.

[–] Cethin@lemmy.zip 1 points 1 year ago

If you take open source code licensed under the GNU GPL and modify it, it retains the GNU GPL license. It's like saying it's fine to take a book, just change some words, and claim it's totally not plagiarism.

[–] Fylkir@lemmy.sdf.org 1 points 1 year ago (1 children)

I see it like this:

Our legal system has the concept of mechanical licensing. If your song exists, someone can demand the right to cover it, and the law will favor them. The output of an LLM has less to do with your art than a cover of your song does.

There are plenty of cases of a cover eclipsing the original version of a song in popularity, and yet I have never met a single person who argues that we should get rid of the right to cover a song.

[–] nosycat@forum.fail 3 points 1 year ago

Sure, you have the legal right to cover someone else's song without asking permission first, but you still have to pay them royalties afterwards, at fair market rates.

[–] BURN@lemmy.world 1 points 1 year ago

AI has been a blanket term for Machine Learning, LLMs, Decision Trees and every other form of “intelligence”.

Unfortunately I think that genie is out of the bottle and it’s never going back in.

[–] Lmaydev@programming.dev 1 points 1 year ago* (last edited 1 year ago)

They are 100% AI. It's an umbrella term. Simple pathing algorithms in games are also AI.
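
Case in point, a complete "game AI" in a few lines: breadth-first pathfinding on a toy grid, the kind of thing that's carried the AI label in games for decades (the grid is made up for the example).

```python
from collections import deque

# Minimal breadth-first pathfinding on a grid. 1 = wall, 0 = walkable.
GRID = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]

def shortest_path(start, goal):
    """BFS from start to goal; returns the shortest path or None."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

print(shortest_path((0, 0), (2, 3)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]
```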

[–] Zeth0s@lemmy.world 1 points 1 year ago

That's absolutely not correct. AI is a field of computer science/scientific computing built on the idea that some capabilities of biological intelligences could be simulated or even reproduced "in silicon", i.e. by using computers.

Nowadays it's an extremely broad term that covers a lot of computational methodologies. LLMs in particular are an evolution of methods born to simulate and act like human neural networks. Nowadays they work very differently, but they still provide great insight into how an "artificial" intelligence can be built. They are only one small corner of what will be a real general artificial intelligence, and a small step in that direction.

AI as a name is absolutely unrelated to how programs based on these methodologies are built.

Human intelligences are in charge of the copyright part. AI and copyright are orthogonal; it's people who can't tell the two apart who keep talking about AI.

There is AI, and there is copyright. It is time for all of us to properly frame the discussion as "a copyright discussion related to 's product".

[–] Chocrates@lemmy.world 1 points 1 year ago (1 children)

Both sound the same to me, IMO: private companies scraping ostensibly public data to sell it. No matter how you word it, they're trying to monetize stuff that's out in the open.

[–] Dran_Arcana@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

I don't see why a single human should be able to profit off learning from others but a group of humans doing it for a company cannot. This is just how humanity advances at whatever scale.

[–] Chocrates@lemmy.world 1 points 1 year ago

I had a comment about the morality of it at first, but I pulled it out. This is not an easy question to answer. Corporations gatekeeping knowledge seems weird and dystopian, but the knowledge is out there and they're just making connections within it. It also touches on copyright and fair use.

[–] baduhai@sopuli.xyz 0 points 1 year ago (2 children)

> I use to tune my LLM model

Large Language Model model

[–] Geek_King@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Automated Teller Machine Machine, Personal Identification Number Number, Network Interface Card Card

This has been a problem for as long as acronyms have existed (and yes it bothers me too).

[–] some_guy@lemmy.sdf.org 0 points 1 year ago (1 children)

Automated Teller Machine machine.

[–] dragontamer@lemmy.world 1 points 1 year ago

Chai tea? Chai means tea, bro. Do you want coffee coffee with your cream cream?

[–] lolpostslol@kbin.social -1 points 1 year ago

It’s just a happy coincidence for them, they call it AI because calling it “a search engine that steals stuff instead of linking to it and blends different sources together to look smarter” wouldn’t be as interesting to clueless financial markets people
