OpticalMoose

joined 1 year ago
[–] OpticalMoose@discuss.tchncs.de 4 points 2 months ago

He was a great comedian, but his "I don't vote" shtick really fucked this country over. Moderate, reasonable people stayed home, thinking 'it doesn't matter', and conservative dickheads took over at the polls. 1996

[–] OpticalMoose@discuss.tchncs.de 24 points 5 months ago (2 children)

Thank you for that explanation. My regex-impaired ass thought he wanted to hurt generation[x|y|z].

I'm like "what'd we ever do to you?"

[–] OpticalMoose@discuss.tchncs.de 3 points 5 months ago

Switched from Kubuntu to Mint + KDE last week. Very happy indeed.

[–] OpticalMoose@discuss.tchncs.de 85 points 5 months ago (14 children)

At least the article points out that this is a Wall Street valuation, meaning it's meaningless in reality: the company doesn't have that much money, nor is it actually worth that much. In reality, Nvidia's tangible book value (plant, equipment, brands, logos, patents, etc.) is $37,436,000,000.

$37,436,000,000 / 29,600 employees = $1,264,729.73 per employee

Which isn't bad considering the median salary at Nvidia is $266,939 (up 17% from last year).

[–] OpticalMoose@discuss.tchncs.de 3 points 5 months ago

It sounds like the processor is the real limitation. Plenty of stuff from the Windows XP era and before ran in less than 512MB.

 

I think this family bloodline carries a gene that causes a terminal lack of self-awareness. They can't help saying things at the worst possible time, without regard to how ironic it is.

 


So here's the way I see it: with Data Center profits being the way they are, I don't think Nvidia's going to do us any favors with GPU pricing next generation. And apparently, the new rule is that Nvidia cards exist to bring AMD prices up.

So here's my plan, starting with my current system:

OS: Linux Mint 21.2 x86_64  
CPU: AMD Ryzen 7 5700G with Radeon Graphics (16) @ 4.673GHz  
GPU: NVIDIA GeForce RTX 3060 Lite Hash Rate  
GPU: AMD ATI 0b:00.0 Cezanne  
GPU: NVIDIA GeForce GTX 1080 Ti  
Memory: 4646MiB / 31374MiB

I think I'm better off just buying another 3060, or maybe a 4060 Ti 16GB. To be nitpicky, I can get three 3060s for the price of two 4060 Tis and get more VRAM plus a wider memory bus. The 4060 Ti is probably better in the long run; it's just so damn expensive for what you're actually getting. The 3060 really is the working man's compute card. It needs to be on an all-time-greats list.

My limitations are that I don't have room for full-length cards (a 1080 Ti, at 267mm, just barely fits), and I don't want the cursed power connector. Also, I don't really want to buy used because I've lost all faith in humanity and trust in my fellow man, but I realize that's more of a "me" problem.

Plus, I'm sure that used P40s and P100s are a great value as far as VRAM goes, but how long are they going to last? I've been using GPGPU since the early days of LuxRender OpenCL and Daz Studio Iray, so I know that sinking feeling when older CUDA versions get dropped from support and my GPU becomes a paperweight. Maxwell is already deprecated, so Pascal's days are definitely numbered.

On the CPU side, I'm upgrading to whatever they announce for Ryzen 9000, plus a ton of RAM. Hopefully they'll have some models without NPUs; I don't think I'll need one. As far as what I'm running, it's Ollama and Oobabooga, mostly models 32GB and lower. My goal is to run Mixtral 8x22B, but I'll probably have to run it at a lower quant, maybe one of the 40 or 50GB versions.
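
For reference, my napkin math for sizing quants (a rough sketch; the ~141B total parameter count for 8x22B and the bits-per-weight figures are approximations, and it ignores how the KV cache grows with context):

```python
# Rough napkin math for sizing quantized models - ballpark only.
# The ~141B parameter count for Mixtral 8x22B and the bits-per-weight values
# are approximations; real GGUF files add KV-cache and runtime overhead.
def est_gib(params_billion: float, bits_per_weight: float, overhead_gib: float = 2.0) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 + overhead_gib

for bpw in (2.5, 3.5, 4.5):  # roughly IQ2 / Q3_K / Q4_K territory
    print(f"Mixtral 8x22B (~141B params) @ {bpw} bpw ≈ {est_gib(141, bpw):.0f} GiB")
```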

My budget: Less than Threadripper level.

Thanks for listening to my insane ramblings. Any thoughts?

[–] OpticalMoose@discuss.tchncs.de 4 points 5 months ago

"Sir, they've given us a list of their demands, but I can't read this ... this chicken-scratch."

[–] OpticalMoose@discuss.tchncs.de 4 points 5 months ago

"As God is my witness..."

[–] OpticalMoose@discuss.tchncs.de 57 points 5 months ago (5 children)

When I was in Korea, I learned that chickens can (sort of) fly. They can flap their wings hard enough to get from the ground to a tree branch maybe 8 feet or so off the ground, and safely back down.

And I've heard chickens tasted better back in the old days. A bird that eats grubs, worms, grasshoppers, frogs, snakes, etc. tastes different from one that just eats chicken feed all day.

[–] OpticalMoose@discuss.tchncs.de 4 points 5 months ago (1 children)

Renegade Cut had a pretty good video about her. https://piped.video/watch?v=HC4K1mx0SPQ. I was mostly wrong about her. I didn't realize they became friends later. That's a pretty big arc for only one season.

[–] OpticalMoose@discuss.tchncs.de 75 points 5 months ago (1 children)

Maybe we don't live in the worst possible universe. Madonna and Will Smith in the Matrix, everybody using the Hulk Hogan Grill, Stallone as Axel Foley, OJ as the Terminator. I guess I'm ok with where we are now.

[–] OpticalMoose@discuss.tchncs.de 12 points 5 months ago

Awesome. I'd heard that Pat was one of Redd's old friends from the "Chitlin' Circuit" era of comedy, but I've never actually seen him do standup.

[–] OpticalMoose@discuss.tchncs.de 5 points 5 months ago

Probably better to ask on !localllama@sh.itjust.works. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.

The only issue is that you asked for a smart model, which usually means a larger one, and the RAG portion consumes even more memory, which may be more than a typical laptop can handle. Smaller models also have a higher tendency to hallucinate, i.e. produce incorrect answers.

Short answer - yes, you can do it. It's just a matter of how much RAM you have available and how long you're willing to wait for an answer.
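
If it helps, here's a minimal sketch of the RAG idea against Ollama's local HTTP API (the model names, the toy document list, and the single-document retrieval are just placeholders; a real setup would chunk your files and use a proper vector store):

```python
# Minimal RAG sketch against a local Ollama server (default port).
# Assumes an embedding model (e.g. nomic-embed-text) and a chat model
# (e.g. mistral) are already pulled; docs below are placeholders.
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    # /api/embeddings returns one embedding vector for the prompt
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return r.json()["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

docs = ["Notes about project A ...", "Meeting summary from March ...", "Hardware inventory ..."]
doc_vecs = [embed(d) for d in docs]

def ask(question: str) -> str:
    qv = embed(question)
    # retrieve the single most similar document and stuff it into the prompt
    best = max(range(len(docs)), key=lambda i: cosine(qv, doc_vecs[i]))
    prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}"
    r = requests.post(f"{OLLAMA}/api/generate",
                      json={"model": "mistral", "prompt": prompt, "stream": False})
    return r.json()["response"]

print(ask("What hardware do we have?"))
```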

 

I had to take my GPU out to do some troubleshooting, so I figured why not try some games on the old Ryzen 5700G. Ray-traced Quake wasn't exactly playable at 3 fps, but I'm impressed that it could load and display correctly.

Other games I tried: Portal RTX wouldn't start at all. Spider-Man Remastered did start, but I couldn't get past the load menu (not related to the Ryzen APU). Most of my library is 10+ years old, so pretty much everything else runs fine on the APU.

 

Only hires the best people.

 

A place for everything and everything in its place.

 

It's the first of four dams to be removed along the Klamath River by the end of 2024. The upper basin hasn't had salmon in over 100 years, and scientists are releasing some there as a test run.

 

The Trump campaign had a seven-state scheme to subvert the Electoral College process after he lost the 2020 election.
State prosecutors in Michigan, Georgia and Nevada have now charged at least some of the fake electors in their states. Investigations are still underway in Arizona and Wisconsin. Prosecutors in New Mexico and Pennsylvania have declined to bring charges.

If you're going to commit election fraud, I guess NM and PA are the states to do it in. You'll probably get caught, but you won't be prosecuted. Might as well try - you miss all the shots you don't take.

 

Apparently, it's rare for lawyers to draw objections during their opening statements, but it happened twice for Trump's team today and the judge sustained both objections.

MSNBC Commentary: https://www.youtube.com/watch?v=X-XetPGnx0M

 

Another article lists some more of his managed (but not owned) properties that have taken his name down: https://www.msn.com/en-us/news/other/in-a-luxe-n-y-condo-residents-battle-over-dumping-the-trump-name/ar-AA1nivX1

 

Trump’s leadership PAC spent all the money it took in last month on his legal bills.

 

Hartford is credited as creator of Dolphin-Mistral, Dolphin-Mixtral and lots of other stuff.

He's done a huge amount of work on uncensored models.

 

This is an interesting demo, but it has some drawbacks I can already see:

  • It's Windows-only (maybe Win11-only; the documentation isn't clear)
  • It only works with RTX 30 series and up
  • It's closed source, so you have no idea if they're uploading your data somewhere

The concept is great, having an LLM to sort through your local files and help you find stuff, but it seems really limited.

I think you could get the same functionality (and more) by writing an API for text-gen-webui.
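
Something along these lines would be a starting point (a rough sketch, assuming text-gen-webui is running with its OpenAI-compatible API enabled on the default port; the folder path, glob pattern, and model setup are placeholders):

```python
# Sketch: feed local text files to text-generation-webui's OpenAI-compatible
# endpoint (started with --api, default port 5000). Paths and the naive
# "retrieval" (just concatenating file snippets) are placeholders.
import pathlib
import requests

API = "http://127.0.0.1:5000/v1/chat/completions"

def ask_about_files(question: str, folder: str = "./notes") -> str:
    # naive context building: first chunk of every .txt file in the folder
    context = "\n\n".join(
        p.read_text(errors="ignore")[:2000]
        for p in pathlib.Path(folder).glob("*.txt")
    )
    payload = {
        "messages": [
            {"role": "system", "content": "Answer using only the provided files."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
        "max_tokens": 300,
    }
    r = requests.post(API, json=payload)
    return r.json()["choices"][0]["message"]["content"]

print(ask_about_files("Which file mentions the GPU upgrade?"))
```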

more info here: https://videocardz.com/newz/nvidia-unveils-chat-with-rtx-ai-chatbot-powered-locally-by-geforce-rtx-30-40-gpus
