this post was submitted on 13 Nov 2024
592 points (95.5% liked)

Technology

[–] TankovayaDiviziya@lemmy.world 5 points 1 hour ago

Short the AI stocks before they crash!

[–] masquenox@lemmy.world 29 points 12 hours ago (1 children)
[–] UnderpantsWeevil@lemmy.world 10 points 11 hours ago (1 children)

I've been hearing about the imminent crash for the last two years. New money keeps getting injected into the system. The bubble can't deflate while both the public and private sector have an unlimited lung capacity to keep puffing into it. FFS, bitcoin is on a tear right now, just because Trump won the election.

This bullshit isn't going away. It's only going to get forced down our throats harder and harder, until we swallow or choke on it.

[–] thatKamGuy@sh.itjust.works 1 points 1 hour ago

With the right level of Government support, bubbles can seemingly go on for literal decades. Case in point, Australian housing since the late 90s has been on an uninterrupted tear (yes, even in ‘08 and ‘20).

[–] recapitated@lemmy.world 14 points 13 hours ago

I think I've heard about enough of experts predicting the future lately.

[–] Blackmist@feddit.uk 36 points 16 hours ago (3 children)

Thank fuck. Can we have cheaper graphics cards again please?

I'm sure an RTX 4090 is very impressive, but it's not £1800 impressive.

[–] lorty@lemmy.ml 3 points 2 hours ago (1 children)

Just wait for the 5090 prices...

[–] Blackmist@feddit.uk 4 points 2 hours ago (1 children)

I just don't get why they're so desperate to cripple the low-end cards.

Like I'm sure the low RAM and speed is fine at 1080p, but my brother in Christ it is 2024. 4K displays have been standard for a decade. I'm not sure when PC gamers went from "behold thine might from thou potato boxes" to "I guess I'll play at 1080p with upscaling if I can have a nice reflection".

[–] lorty@lemmy.ml 1 points 1 hour ago

I think it's just an upselling strategy, although I agree I don't think it makes much sense. Budget gamers really should look to AMD these days, but unfortunately Nvidia's brand power is ridiculous.

[–] explodicle@sh.itjust.works 3 points 10 hours ago

Sorry, crypto is back in season.

[–] tempest@lemmy.ca 5 points 14 hours ago (4 children)

I swapped to AMD this generation and it's still expensive.

[–] nl4real@lemmy.world 7 points 12 hours ago

Fingers crossed.

[–] dog_@lemmy.world 6 points 13 hours ago
[–] theacharnian@lemmy.ca 42 points 19 hours ago (1 children)

It's so funny how all this is only a problem within a capitalist frame of reference.

[–] masquenox@lemmy.world 2 points 12 hours ago

What they call "AI" is only "intelligent" within a capitalist frame of reference, too.

[–] LovableSidekick@lemmy.world 16 points 16 hours ago* (last edited 16 hours ago)

Marcus is right: incremental improvements in AIs like ChatGPT will not lead to AGI and were never on that course to begin with. What LLMs do is fundamentally not "intelligence"; they just imitate human responses based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart, but not until a lot of smart speculators have gotten in and out and made a pile of money.
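
The "imitation" point can be made concrete with a toy next-token model. This is a minimal sketch, not how a real LLM works (an LLM uses a neural network, not a lookup table), but it shows the same principle at its simplest: plausible continuations fall out of frequency statistics over existing text, with no understanding involved. The corpus here is made up for illustration.

```python
from collections import defaultdict, Counter

# The entire "model" is a table of which word tends to follow which,
# counted from existing text -- pure imitation of the training data.
corpus = "the cat sat on the mat and the cat slept".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Emit the most frequent successor seen in the corpus.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- it follows "the" most often above
```

Scaling this idea up (longer contexts, learned weights instead of raw counts) produces fluent text, but the mechanism remains statistical continuation, which is the commenter's point.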

[–] randon31415@lemmy.world 28 points 19 hours ago (5 children)

The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.

[–] Yaky@slrpnk.net 20 points 15 hours ago (2 children)

Other than with language models, this has already happened. Take a look at apps such as Merlin Bird ID (identifies birds fairly well by sound and somewhat okay visually), WhoBird (identifies birds by sound), and Seek (visually identifies plants, fungi, insects, and animals). All of them work offline. IMO these are much better uses of ML than spammer-friendly text generation.

[–] mm_maybe@sh.itjust.works 1 points 20 minutes ago

Those are all classification problems, which are a fundamentally different kind of task with less open-ended solutions, so it's not surprising that they're easier to train and deploy.
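
A minimal sketch of why classification is the easier shape of problem: the output space is a small, fixed label set, so even a trivial nearest-centroid rule gives a usable answer, whereas generation has to rank an unbounded space of outputs. The two-feature "bird call" data below is entirely made up for illustration.

```python
import math

# Hypothetical (pitch, tempo) features for two bird species.
training = {
    "sparrow": [(0.2, 0.9), (0.3, 0.8)],
    "owl":     [(0.8, 0.1), (0.9, 0.2)],
}

# The whole "model" is one mean vector per class -- small enough
# to run offline on a phone or microcontroller.
centroids = {
    label: tuple(sum(v) / len(pts) for v in zip(*pts))
    for label, pts in training.items()
}

def classify(x):
    # The output can only ever be one of the known labels.
    return min(centroids, key=lambda label: math.dist(centroids[label], x))

print(classify((0.25, 0.85)))  # lands nearest the sparrow centroid
```

Real apps like the ones named above use neural feature extractors instead of hand-picked features, but the closed output space is what keeps them small and deployable.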

[–] rumba@lemmy.zip 10 points 17 hours ago

This has already started to happen. The new llama3.2 model is only 3.7GB and it's WAAAAY faster than anything else. It can throw a wall of text at you in just a couple of seconds. You're still not running it on $20 hardware, but you no longer need a 3090 to have something useful.
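
File sizes like that are roughly what quantization arithmetic predicts: a weights-only checkpoint is about (parameter count × bits per weight) / 8 bytes. A sketch, where the parameter count and bit widths are illustrative assumptions rather than official figures for any particular release:

```python
def model_size_gb(params_billions, bits_per_weight):
    """Approximate on-disk size of a weights-only checkpoint, in decimal GB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A ~3.2B-parameter model at different quantization levels (illustrative):
for bits in (16, 8, 4):
    print(f"{bits}-bit: {model_size_gb(3.2, bits):.1f} GB")
# 16-bit: 6.4 GB, 8-bit: 3.2 GB, 4-bit: 1.6 GB
```

This is why quantized small models fit on consumer GPUs and even phones: halving the bit width halves the download and the memory footprint, at some cost in output quality.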

[–] dustyData@lemmy.world 5 points 16 hours ago* (last edited 16 hours ago) (1 children)

Well, you see, that's the really hard part of LLMs. Getting good results is a direct function of the size of the model: the bigger the model, the more effective it can be at its task. However, there's something called the compute-efficient frontier (there's a technical but neatly explained video about it). Basically, you can't make a model more effective at its computations beyond that boundary for any given size. The only ways to make a model better are to make it larger (what most megacorps have been doing) or to radically change the algorithms and methods underlying the model. The latter has proven extraordinarily hard, mostly because understanding what is going on inside the model requires thinking in rather abstract and esoteric mathematical principles that bend your mind backwards.

You can compress an already-trained model to run on smaller hardware, but to train one you still need humongously large datasets and power-hungry processing. This is compounded by the fact that larger and larger models are ever more expensive while providing rapidly diminishing returns.

Oh, and we are quickly running out of quality usable data, so shoveling in more data after a certain point starts to actually produce worse results, unless you dedicate thousands of hours of human labor to producing, collecting, and cleaning new data. That's all before you even address data poisoning, where previously LLM-generated data is fed back in to train a model; it is very hard to prevent that from devolving into incoherence after a couple of generations.
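
The "only way up is bigger" point matches the shape of published scaling laws, where loss falls as a power law in parameters and data and flattens toward an irreducible floor. A sketch using a Chinchilla-style loss formula; the constants below are illustrative placeholders, not the published fit:

```python
def loss(n_params, n_tokens, E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    # Irreducible loss E plus power-law terms that shrink only as
    # parameter count (N) and training tokens (D) grow.
    return E + A / n_params**alpha + B / n_tokens**beta

small = loss(1e9, 1e11)   # 1B params trained on 100B tokens
big   = loss(1e10, 1e12)  # 10x the params AND 10x the data
print(small, big)  # bigger is better, but each step buys less
```

The diminishing returns are visible in the exponents: a 10x increase in compute shaves off an ever-smaller slice of loss while the floor `E` never moves, which is exactly the frontier the comment describes.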

[–] mm_maybe@sh.itjust.works 1 points 15 minutes ago

This is learning completely the wrong lesson. It has been well known, and very well demonstrated, for a long time that smaller models trained on better-curated data can outperform larger ones trained by brute-force "scaling". The idea that "bigger is better" needs to die, quickly, or else we're headed toward not only an AI winter but an even worse climate catastrophe, as the energy requirements of inference on huge models obliterate progress on decarbonization overall.

[–] Defaced@lemmy.world 16 points 20 hours ago

This is why you're seeing news articles from Sam Altman saying that AGI will blow past us without any societal impact. He's trying to lessen the blow of the bubble bursting for AI/ML.

[–] Mushroomm@sh.itjust.works 12 points 21 hours ago

It's been 5 minutes since the new thing did a new thing. Is it the end?

[–] CerealKiller01@lemmy.world 27 points 1 day ago (13 children)

Huh?

Smartphone improvements hit a rubber wall a few years ago (disregarding folding screens, which make up a small market share, the rate of improvement slowed drastically), and the industry is doing fine. It's not growing like it used to, but that just means people are keeping their smartphones for longer, not that people stopped using them.

Even if AI were to completely freeze right now, people will continue using it.

Why are people reacting like AI is going to get dropped?

[–] drake@lemmy.sdf.org 1 points 15 minutes ago

It’s absurdly unprofitable. OpenAI has billions of dollars in debt. It absolutely burns through energy and requires a lot of expensive hardware. People aren’t willing to pay enough for it to break even, let alone turn a profit.

[–] ClamDrinker@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago)

People differentiate AI (the technology) from AI (the product being peddled by big corporations) without making that nuance clear (or they mean just LLMs, or they aren't even aware the technology has grassroots adoption outside of those big corporations). It will take time, and the bubble bursting might very well be a good thing for the technology in the long run. If something is only known for its capitalist exploits, it'll continue to be seen unfavorably even once it's proven its value to those who care to look at it with an open mind. I read it mostly as those people rejoicing over those big corporations getting shafted for their greedy practices.

[–] finitebanjo@lemmy.world 14 points 18 hours ago* (last edited 18 hours ago)

People are dumping billions of dollars into it, mostly for power, but it cannot turn a profit.

So the companies who, for example, revived a nuclear power facility in order to feed their machine with ever diminishing returns of quality output are going to shut everything down at massive losses and countless hours of human work and lifespan thrown down the drain.

This will have quite a large economic impact, as many newly created jobs go up in smoke and businesses that structured themselves around the assumption of continued availability of high-end AI have to reorganize or go out of business.

Look up the Dot Com Bubble.

[–] theherk@lemmy.world 16 points 20 hours ago

Because in some eyes, infinite rapid growth is the only measure of success.
