[-] chaos@beehaw.org 10 points 1 week ago

Archive Team often uses the Internet Archive to share the things they save, and obviously they have a shared goal of saving a copy of everything ever made, but they aren't the same people. The Archive Team is a vigilante white hat hacker group (well, maybe a little bit grey), and running a Warrior basically means you're volunteering to be part of their botnet. When a website is going to be shut down, they'll whip together a script and push it out to the botnet to try to grab as much of the dying site as they can, and when there's more downtime they have some other projects, like trying to brute force all those awful link shorteners so that when they inevitably die, people can still figure out where the links were supposed to point.
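If you're curious what the link shortener work looks like in practice, here's a very rough sketch of the idea in Python. The shortener URL and code length are made up, and the real Archive Team project spreads this across many Warriors with rate limiting and coordination rather than hammering away from one machine:

```python
# Rough sketch of the link-shortener idea: walk the space of short codes
# and record where each one redirects, so the mapping survives after the
# shortener shuts down. Purely illustrative.
import itertools
import string
import urllib.request

ALPHABET = string.ascii_lowercase + string.digits

def resolve(short_url: str) -> str | None:
    """Follow redirects and return the final destination URL, if any."""
    try:
        with urllib.request.urlopen(short_url, timeout=10) as resp:
            return resp.geturl()
    except Exception:
        return None  # dead code, rate limited, or nothing to follow

def walk(base_url: str, code_length: int = 2):
    """Yield (code, destination) pairs for every code of the given length."""
    for combo in itertools.product(ALPHABET, repeat=code_length):
        code = "".join(combo)
        destination = resolve(f"{base_url}/{code}")
        if destination:
            yield code, destination

if __name__ == "__main__":
    # Hypothetical shortener; a real run would also save results somewhere durable.
    for code, dest in walk("https://sho.rt", code_length=2):
        print(code, "->", dest)
```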

[-] chaos@beehaw.org 10 points 2 weeks ago

This, and see also "minmaxing," the process of optimizing something (usually your character in a game) to get minimum penalty and/or maximum benefit, usually ignoring anything like realism or storytelling and focusing entirely on the stats and numbers.

[-] chaos@beehaw.org 9 points 3 weeks ago

The .bin and .cue files are the parts of the actual game disc that you want. The .bin file contains almost all of the data, and the .cue file contains some extra information about the structure of the CD. All the rest is Internet Archive stuff (and an image of the game cover, of course).

To open it, you can convert it to a .iso disk image instead, which any Linux distribution can open as if it were a real CD. This blog post talks about how to do that. The last paragraph about mount can probably be replaced with just double-clicking the .iso file in the GUI, I would guess.
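If you'd rather see what the conversion actually does, here's a minimal Python sketch. It assumes the common case of a single-track MODE1/2352 data disc, where each 2352-byte raw sector in the .bin wraps 2048 bytes of actual data; the file names are placeholders, and real tools like bchunk also handle audio tracks and other modes:

```python
# Minimal sketch: turn a single-track MODE1/2352 .bin into a .iso by
# keeping only the 2048 bytes of user data in each 2352-byte raw sector
# (16-byte sync/header before it, 288 bytes of error correction after).
SECTOR_RAW = 2352
HEADER = 16
USER_DATA = 2048

def bin_to_iso(bin_path: str, iso_path: str) -> None:
    with open(bin_path, "rb") as src, open(iso_path, "wb") as dst:
        while True:
            sector = src.read(SECTOR_RAW)
            if not sector:
                break
            dst.write(sector[HEADER:HEADER + USER_DATA])

if __name__ == "__main__":
    bin_to_iso("game.bin", "game.iso")  # placeholder file names
```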

[-] chaos@beehaw.org 5 points 1 month ago

I know TiddlyWiki quite well but have only poked at Logseq, so maybe it's more similar than I think, but TiddlyWiki is almost entirely implemented in itself. There's a very small JavaScript core, but most of it is implemented as wiki objects (they call them "tiddlers," yes, really), and almost everything you interact with can be tweaked, overridden, or imitated. There's almost nothing that "the system" can do but you can't. It's idiosyncratic, kind of its own little universe of concepts to be learned, but if you do learn it, it's insanely flexible.

Dig deep enough, and you'll discover that it's not a weird little wiki — it's a tiny, self-contained object database and web frontend framework that they have used to make a weird little wiki, but you can use it for pretty much anything else you want, either on top of the wiki or tearing it down to build your own thing. I've used it to make a prediction tracker for a podcast I follow, I've made my own todo list app in it, and I made a Super Bowl prop bet game for friends to play that used to be spreadsheet-based. For me, it's the perfect "I just want to knock something together as a simple web app" tool.

And it has the fun party trick (this used to be the whole point of it, but I'd argue it has moved beyond that now) that your entire wiki can be exported to a single HTML file containing the entire fully functional app, even allowing people to make their own edits and save a new copy of the HTML file with the new contents. If running a small web server isn't an issue, that's the easiest way to do it, because saving is automatic and everything is centralized; otherwise, you need to jump through some hoops to get your web browser to allow writing to the HTML file on disk, or just save new copies every time.

[-] chaos@beehaw.org 5 points 1 month ago* (last edited 1 month ago)

If you run the Node.js version, that's all handled for you. It's only if you want to do the party trick of keeping it all in a single HTML file that you need to worry about a plugin or anything like that. And even then, the server version exports to a standalone HTML file with one or two clicks.

Edit to add: it's the only substantial Node package I've ever seen with zero dependencies. Very lightweight and simple to run.

[-] chaos@beehaw.org 11 points 1 month ago

No. The headsets are disabled when the play starts or when the play clock goes below 15 seconds.

[-] chaos@beehaw.org 7 points 2 months ago

Ooh, interesting. I'm kind of surprised to find that I do feel more comfortable with It/Its, actually, not so much because of the logical "promotion and demotion cancel out" aspect, but because it's two atypical constructions combined, and that almost pushes it out of intuitive meaning entirely for me. I know the context and convention for each one individually but nothing for both of them at the same time, so I think I'm more open to allowing a meaning to be defined that isn't hierarchical if It assures me that it isn't. (Pure grammar bonus points in that last sentence, where this type of capitalization happens to remove an ambiguity!) For He/Him and She/Her, though, I find it hard to set aside the established meaning, because it's in wide use and has been for quite some time. Maybe that's a rigidity that deserves to be bent; people push back on the more "out there" neopronouns for similar reasons. But I think it's likely that most people will instinctively react negatively when encountering this, and it's going to be difficult for what I have to imagine is a very small group of people to change the general understanding to something more acceptable.

[-] chaos@beehaw.org 24 points 2 months ago

Hmm... this makes me uncomfortable, and although I don't think it's internalized phobia or anything like that, I want to interrogate that discomfort to see if I can nail it down.

I do think it's difficult or maybe impossible to decouple this practice from indications of power for most people. The only instances of capitalized pronouns in common use that I've seen are the God and Jesus usage, and in some circles, capitalizing pronouns for a dominant in a role play context. "I" getting capitalized is also there, kind of, but that's not a power thing because it's not special, everyone is expected to use it as a language rule. I've also seen things like "oh, sure, that's what They want you to think" or, not quite a pronoun, something like "they want you to fear The Other," maybe less of a power thing but definitely a signal of additional weight and meaning above and beyond the word's usual sense.

I think this is the main source of my discomfort: this practice is currently used almost exclusively to signal at least "this word is being used in a special and important context, pay extra attention," and often goes as far as "I am explicitly signaling that the person being referred to is superior." I don't use He/Him pronouns for God or Jesus because I don't belong to those religions and don't see those entities that way, and I have a fundamental belief in the equality of all humans that makes me uncomfortable putting a person on a pedestal like that.

I feel uncomfortable about it/its pronouns as well for the same reason; I don't like the idea of dehumanizing or objectifying a person, but in that case I actually have some friends who use them. It's easier to take a "well, if it makes you happy, it's no harm to me" attitude if it's asking for a "demotion," so to speak, I think. The personal connection probably does help too; I don't know anyone who wants capitalized pronouns myself.

I've seen Dan Savage use capital pronouns to refer to dominants when answering letters, but that seems to me like Dan stepping into the letter writer's scene space and choosing to go along with the "rule" while he's there giving advice, kind of a "good houseguest" thing. I don't think that's something that the rest of us are obligated to do as a rule. I'd push back on a friend insisting that I refer to their dominant with capitalized pronouns, because whatever their relationship is with each other, their dom isn't my dom, and I didn't agree to that hierarchy, they did.

I think the other discomfort is more of a language and grammar thing, which obviously is less important than an actual person's comfort (see also the old "they is always plural" chestnut), so I'm not going to assert that this is a reason to disregard a person's wishes, and language rules are subject to change. But in general, capitalization is not all that significant in English, which we know because something written in all caps or in all lower case usually has no meaning removed. Words at the start of sentences, proper nouns, and "I" get capitalized, and that's mostly it. It's mostly about readability, because ALL CAPS DOESN'T HAVE AS MUCH CONTRAST, but when used sparingly as we usually do, important words stand out with a capital letter.

"Demanding" that a particular word be used to refer to yourself in the form of pronouns is in the same ballpark as choosing your own name, obviously completely reasonable and acceptable, but "demanding" that special language rules be used about yourself feels a step beyond that. I don't want to cross into "oh, so could you identify as an attack helicopter too" territory, but I do wonder about some of the boundaries on this. Lots of people habitually write in all lowercase; would it be disrespectful to say "oh yeah i saw larry at the empire state building and had a conversation with him" if Larry uses He/Him pronouns? Would Larry be upset about both the name and the pronouns, or just the pronouns?

I don't think most people would get up in arms about their proper name getting de-capitalized in that context, which seems like further evidence that capitalization isn't normally a meaningful aspect of the writing; it's a more mechanical and practical rule. So insisting that for certain people it does need to be made significant feels like more of an imposition to me, and comes right back to the "you need to treat Me as special and more important" feeling that I have.

[-] chaos@beehaw.org 2 points 4 months ago

OPML files really aren't much more than a list of the feeds you're subscribed to. Individual posts or articles aren't in there. I would expect that importing a second OPML file would just add more subscriptions, but it'd be up to the reader app to decide what it does.
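To make that concrete, here's a small sketch of reading an OPML file with Python's standard library and pulling out the feed URLs. The filename is a placeholder, and it assumes the usual convention of feeds being outline elements with an xmlUrl attribute:

```python
# Minimal sketch: list the feed subscriptions in an OPML file.
# OPML is just XML; each feed is an <outline> element whose xmlUrl
# attribute points at the RSS/Atom feed. No posts are stored here.
import xml.etree.ElementTree as ET

def list_feeds(opml_path: str) -> list[str]:
    tree = ET.parse(opml_path)
    return [
        node.attrib["xmlUrl"]
        for node in tree.iter("outline")
        if "xmlUrl" in node.attrib
    ]

if __name__ == "__main__":
    for url in list_feeds("subscriptions.opml"):  # placeholder filename
        print(url)
```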

[-] chaos@beehaw.org 2 points 8 months ago

I think that joke's been around for a while, but there is the Terry Pratchett line about how if you had a button with a sign next to it saying "pressing this button will end the world, do not touch," the ink wouldn't even have time to dry.

[-] chaos@beehaw.org 9 points 8 months ago

If you ask an LLM to help you with a legal brief, it'll come up with a bunch of stuff for you, and some of it might even be right. But it'll very likely do things like make up a case that doesn't exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you're going to have a bad time.

There's a reason LLMs make stuff up like that, and it's because they have been very, very narrowly trained when compared to a human. The training process is almost entirely about getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren't just associating the sounds they hear; they're also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental state of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.

LLMs aren't nearly at that level. That's not to say what they do isn't impressive, because it really is. They can also synthesize unrelated concepts together in a stunningly human way, even things that they've never been trained on specifically. They've picked up a lot of surprising nuance just from the text they've been fed, and it's convincing enough to think that something magical is going on. But ultimately, they've been optimized to predict words, and that's what they're good at, and although they've clearly developed some impressive skills to accomplish that task, it's not even close to human level. They spit out a bunch of nonsense when what they should be saying is "I have no idea how to write a legal document, you need a lawyer for that", but that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions and a desire to avoid causing harm, and they don't have that. And how could they? Their training didn't include any of that, it was mostly about words.

One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you're talking to. You say something to a person, and your ideas are broken down and manipulated in an abstract manner in their head, then turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit: by getting good at mimicking language, they are able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question "as an AI, do you want to take over the world?" is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren't just doing statistics, but you don't have to go very far along that spectrum before the answers start seeming thoughtful.
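For a concrete sense of what the purely statistical end of that spectrum looks like, here's a toy word-level predictor. It's nothing like a real transformer-based model, which works on subword tokens with billions of learned parameters, but "what word tends to follow this one" is the flavor of the underlying objective:

```python
# Toy illustration of "predicting what words follow what other words":
# count which word follows which, then predict the most common successor.
from collections import Counter, defaultdict

def train(text: str) -> dict[str, Counter]:
    counts: dict[str, Counter] = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts: dict[str, Counter], word: str) -> str | None:
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

if __name__ == "__main__":
    model = train("the cat sat on the mat and then the cat slept")
    print(predict(model, "the"))  # "cat": the most frequent follower of "the"
```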

[-] chaos@beehaw.org 5 points 8 months ago

In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

“We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

https://www.cnn.com/2024/01/08/tech/openai-responds-new-york-times-copyright-lawsuit/index.html

The thing is, it doesn't really matter if you have to "manipulate" ChatGPT into spitting out training material word-for-word; the fact that it's possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it's a lot weaker than the original argument, which was that nothing of the original material really remains after training, that it's all synthesized and blended with everything else to create something entirely new that doesn't replicate the original.

