piggy

joined 4 days ago
[–] piggy@hexbear.net 2 points 50 minutes ago* (last edited 46 minutes ago)

As a sidenote "putting things in boxes" is the very thing that itself upholds bourgeois democracies and national boundaries as well.

I mean at this raw of an argument you might as well argue for Lysenkoism because unlike Darwinism/Mendelian selection it doesn't "put things in boxes". In practice things are put in boxes all the time, it's how most systems work. The reality is that as communists we need to mediate the negative effects of the fact that things are in boxes, not ignore the reality that things are in boxes.

The failure of capitalism is that its systems of meaning-making converge into the arbitrage of things in boxes. At the end of the day this is actually the most difficult part of building communism; the Soviet Union throughout its history still fell ill with the "things in boxes" disease. It's how you get addicted to slave labor, it's how you make political missteps because it's so easy to put people in a "kulak" box that doesn't even mean anything anymore, it's how you start disagreements with other communist nations because you really insist that they should put certain things into a certain box.

[–] piggy@hexbear.net 5 points 1 hour ago* (last edited 1 hour ago)

Geopolitical power comes mainly from 3 things: resources, technology, and controlling your "excess" population (i.e. the people that do the "worst" work). Historically borders have been an effective means to more-or-less control all 3.

Controlling your own borders is really child's play; controlling other people's borders is where the fun really starts. Sykes-Picot for example ensured that the Middle East would fight over resources (water, arable land) and over who the "excess" population should be, by drawing borders in creative ways that prevented the reformation of the Ottoman Empire after its defeat.

[–] piggy@hexbear.net 3 points 1 hour ago* (last edited 1 hour ago)

It doesn't work in the average case. I've seen this tactic from the company that I work for and multiple companies I have contacts at. Bosses think they can simply use "AI" to fix their hollowed out documentation, on-boarding, employee education systems by pushing a bunch of half correct, barely legible "documentation" through an LLM.

It just spits out garbage for 90% of people doing this. It's a garbage-in, garbage-out process. In order for it to even be useful you need a specific setup (RAG, retrieval-augmented generation, where the LLM is wired to a search index over your documents) and for your documentation to be high quality.

Here's an example project: https://github.com/snexus/llm-search

The demo works well because it uses a well documented open source library. It's also not a guarantee that it won't hallucinate or get mixed up. A RAG works simply by priming the generator with "context" related to your query; if your model weights are strong enough, your context won't outweigh the allure of statistical hallucination.
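The retrieval half of that "priming with context" step can be sketched in a few lines. This is a minimal illustration, not how llm-search is implemented: a bag-of-words cosine similarity stands in for a real embedding model, and the documents and query are made up.

```python
# Minimal sketch of the RAG idea: retrieve the documents most similar to the
# query, then prepend them to the prompt so the generator answers from them.
# Bag-of-words cosine similarity here is a stand-in for a real embedding model.
import math
from collections import Counter

def vectorize(text):
    # Crude term-frequency vector: word -> count.
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    # Rank all documents by similarity to the query, keep the top k.
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # The retrieved passages become the "context" that primes the generator.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The search index is rebuilt nightly by the cron job.",
    "Payroll questions go to the HR portal.",
    "To rebuild the search index manually, run make reindex.",
]
print(build_prompt("How do I rebuild the search index?", docs))
```

The quality problem is visible even here: if the docs are wrong or vague, the context is wrong or vague, and the generator has nothing better to lean on.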

[–] piggy@hexbear.net 24 points 1 hour ago (3 children)

"State Beverage" is midcentury marketing brain worms for large agribusinesses. It might be quaint but they sold a shit ton through idiotic reflexive reactionary nationalism.

[–] piggy@hexbear.net 13 points 2 hours ago

By "retire" I mean, when I have aged out of software and I can just burn all my bridges.

[–] piggy@hexbear.net 16 points 2 hours ago* (last edited 2 hours ago) (5 children)

Haha..... Boy do I have stories..... I worked in a terrible evil company (aren't they all but this one was a bit egregious).

The CEO was an absolute moron whose only skills were being a contracts guy and a money-raising guy. We had an internal app for employees to do their work on in the field. He was adamant about getting it in the app store after he took some meeting with another moron. We kept telling him there's no point, and that it's a shit ton of work because we have to get the app up to Apple's standards. He wouldn't take no for an answer. So we allocated the resources to go ahead, and some other projects got pushed way back for this.

A month goes by and we have another meeting, and he asks why X isn't done. We told him we had to deprioritize X to get the app in the app store. He asks who decided that. We tell him that he did. You know how a normal person would be a bit ashamed of this, right? Well, guess what: he just had a little tantrum and still blamed everyone but himself.

Same guy fired a dude (VP level) because his nepo hire had it out for him. That dude had documented all his work out in the open, and when that section of the business collapsed a day later they had to hire him back as a contractor. The CEO still didn't trust him, kept trusting his nepo hire, and never saw that his own decision making was the inefficiency.

When I retire I swear to god I'm going to write "this is how capitalism actually works" books about my experiences working with these people.

[–] piggy@hexbear.net 8 points 2 hours ago* (last edited 2 hours ago)

I ended up at Whole Foods the other day because I didn't want to drive across town, and capitalism is also when there's no organic milk. Seriously, they had zero fresh dairy products and signs about how hard it is to get organic dairy nowadays, but no non-organic dairy either, cause that's povo filth.

[–] piggy@hexbear.net 5 points 2 hours ago* (last edited 2 hours ago)

Name a better duo than upper class British people and whining about "boarders" during economic downturns.

This is such a trope it's all over their literature. That and having to sell your land for a sub-development you can see from your estate (good heavens). The classic aristocratic complaints of "capitalism is coming for me and only me specifically and I have it worse than anyone else".

[–] piggy@hexbear.net 14 points 2 hours ago* (last edited 2 hours ago)

I'm confident a lot of startups will spring out of the ground that will be developing DeepSeek wrappers and offering the same service as your OpenAIs

This is true. But I don't think OpenAI is even cornering the tech market, really. The company I work for makes a lot of content for various things; a lot of our engineers are tech fetishists and a lot of our executives are IP-protectionist obsessives. We are banned from using publicly available AI offerings. We don't contract with OpenAI, but we do contract with Maia for creating models (because their offering specifically talks through the "steal your IP" problems). So OpenAI itself is not actually in many of these spaces.

But yeah, your average chat-girlfriend startup is going to remove the ChatGPT albatross from its neck, given its engineers/founders are just headline guys. A lot of this ecosystem is really the "Uber but for X" style guys.

[–] piggy@hexbear.net 36 points 3 hours ago* (last edited 3 hours ago) (11 children)

I agree with the majority of your comment.

no one is gonna pay thousands of dollars for a corporate LLM that's only 10% better than the free one.

This is simply not true in how businesses actually work. It certainly limits your customer base organically, but there are plenty of businesses who in "tech terms" overpay for things that are even free, because of things like liability and corruption. Enterprise sales is completely perverse in its logic and economics. In fact most open source giants (e.g. Red Hat) exist because corps do in fact overpay for free things for various reasons.

[–] piggy@hexbear.net 13 points 3 hours ago

Closed-source models being unoptimized is a way to mislead competitors and to shift the competition from one of technological prowess to one of courting investment.

[–] piggy@hexbear.net 11 points 3 hours ago* (last edited 3 hours ago) (2 children)

So LLMs, the "AI" that everyone is typically talking about, are really good at one statistical thing:

"CLASSIFYING"

What is "CLASSIFYING" you ask? Well, it's basically attempting to take data and put it into specific boxes. If you wanted to classify all the dogs, you could classify them based on breed for example. LLMs classify better than anything we've ever made, they adapt very well to new scenarios, and they create emergent classifications of the data fed to them.
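The "boxes" idea can be shown with a toy nearest-centroid text classifier. This is only an illustration of the task's shape; real LLMs do it with learned embeddings over billions of parameters, not raw word counts, and the labels and training snippets below are made up.

```python
# Toy "putting things in boxes": score a text against each box's word
# profile (centroid) and pick the box with the most overlap.
from collections import Counter

TRAINING = {
    "dog": ["loyal barking fetch leash", "puppy barking tail fetch"],
    "cat": ["purring whiskers litter nap", "kitten purring claws nap"],
}

def centroid(texts):
    # Merge all example texts for a label into one word-count profile.
    c = Counter()
    for t in texts:
        c.update(t.split())
    return c

CENTROIDS = {label: centroid(texts) for label, texts in TRAINING.items()}

def classify(text):
    words = text.split()
    # Score = how many of the text's words the box's profile accounts for.
    def overlap(profile):
        return sum(profile[w] for w in words)
    return max(CENTROIDS, key=lambda label: overlap(CENTROIDS[label]))

print(classify("I heard barking near the leash"))  # → dog
print(classify("the kitten was purring"))          # → cat
```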

However they are not good at basically anything else. The "generation" that these LLMs do is built on the classifier and the model, which basically generates responses based on what the statistically most likely next word is. So for example, if you fed an LLM the entirety of Shakespeare and only Shakespeare, and you gave it "Two households both alike" as a prompt, it may well spit out the rest of Romeo and Juliet.
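The "statistically what the next word is" mechanism can be shown with a toy bigram model. This is a drastic simplification (real LLMs predict over subword tokens with deep networks, not word-pair counts), but the regurgitation effect described above falls out directly:

```python
# Toy next-word predictor: count which word follows which in the training
# text, then greedily emit the most likely next word at each step. Trained
# on only one text, it can reproduce that text from a matching prompt.
from collections import defaultdict, Counter

corpus = ("two households both alike in dignity "
          "in fair verona where we lay our scene").split()

# bigrams[prev] counts how often each word follows `prev`.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt, n=6):
    words = prompt.split()
    for _ in range(n):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # last word never seen mid-corpus: nothing to predict
        words.append(candidates.most_common(1)[0][0])  # greedy argmax
    return " ".join(words)

print(generate("in fair"))  # → in fair verona where we lay our scene
```

Because the model has seen nothing else, the prompt "in fair" deterministically replays the rest of the line, which is the single-source memorization effect described above.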

However this means LLMs are not good at the following:

  • discerning truth from fiction
  • following technical processes (like counting r's in strawberry)
  • having "human like" understandings of the connections between concepts (think of the "is soup a salad" type memes)

So… is what I said above really just how AI is being used in the US, and is that the reason for the huge bubble in asset values of companies like Nvidia and Microsoft?

Don't get me wrong, yes, this is a solution in search of a problem. But the real reason there is a bubble in the US for these things is that companies are making that bubble on purpose. The reason isn't even rooted in any economic reality; it's rooted in protectionism. If it takes a small lake of water and 10 data centers to run ChatGPT, it's unlikely you will lose your competitive edge, because you are misleading your competition. If every year you need more and more compute to run the models, it concentrates who can run them and who ultimately controls them. This is what the market has been doing for about 3 years now. This is what DeepSeek has undone.

The similarities to Bitcoin and crypto bubbles are very obvious, in the sense that the mining network is controlled by whoever has the most compute. Ethereum specifically decided to cut out the "middle man" of who owns compute, and basically says whoever pays the most into the network's central bank controls the network.

This is what "tech as assets" means practically: inflate your asset as much as possible regardless of its technical usefulness.

139
submitted 3 days ago* (last edited 3 days ago) by piggy@hexbear.net to c/slop@hexbear.net
 

uneasy anime battle music plays Lightning crackles around AOC as she writes an epic blue sky post and closes her laptop.

She goes to work the next day and lets a group of the most evil Amerikkkans be racist against the only Muslim and Palestinian women in Congress.

Later in the Rotunda...

uneasy anime battle music plays AOC locks eyes with Lauren Boebert and squints as they face off.

"You're so fucking lucky that you didn't hurt my friends."
