[-] ConsciousCode@beehaw.org 3 points 10 months ago* (last edited 10 months ago)

This is a sane and measured response to a terrorist attack. /s Just do terrorism back 100-fold, I guess?

[-] ConsciousCode@beehaw.org 8 points 10 months ago* (last edited 10 months ago)

I think it's more a matter of evolution. We know humanoid bodies can do anything we can do, so we start with that and make incremental improvements from there. We already have plenty of other body shapes for robots (6-axis arms, SPOT, drones, etc), but none of them are general-purpose. Also, the robot shown in the article is pretty clearly not fully humanoid; it has weird insect-like legs, probably because that's easier to control and doubles as a vertical lift.

[-] ConsciousCode@beehaw.org 5 points 10 months ago

Network effects. The "we" you're referring to is maybe 100 million people at most; the vast majority don't have the technical know-how to switch, or the ability to articulate exactly why they feel miserable every time they log in for their daily fix.

[-] ConsciousCode@beehaw.org 3 points 10 months ago

Considering prior authorization is predicated on the fact that if they reject enough requests, some people inevitably won't fight back and the insurer won't have to pay out, I wouldn't be surprised if they use a barely-better-than-chance prediction as justification for denying coverage - if they even need an actual excuse to begin with.

[-] ConsciousCode@beehaw.org 15 points 10 months ago

For what it's worth, I don't think they're proposing it will "solve" climate change - no single thing can. It's millions of tiny (alleged) improvements like this that eventually add up to taking pressure off the environment. I see this kind of attitude a lot with things like paper straws or biodegradable packaging, as if the idea of a small but meaningful step in the right direction were laughable. It's fine to criticize them if the "improvement" is actually no better than the alternative, but I worry it sometimes comes across as though any improvement short of "solving" climate change isn't worthwhile.

[-] ConsciousCode@beehaw.org 3 points 11 months ago

If we had access to the original model, we could give it the same seed and prompt and get the exact image back. Or we could mandate techniques like statistical fingerprinting. Without the model, though, reliable detection is mathematically impossible, and it will only get harder as models improve over the coming years - and what do you do if they take a real image, compress it into an embedding, and then reassemble it?
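
To make the first point concrete, here's a rough sketch using the diffusers library; the model id, prompt, and seed are just illustrative, and exact reproduction also assumes the same scheduler, settings, and software stack:

```python
# Rough sketch of "same model + same seed + same prompt => same image".
# The model id, prompt, and seed below are only examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# A fixed-seed generator makes the sampling deterministic.
generator = torch.Generator("cpu").manual_seed(1234)
image = pipe("a photo of an astronaut riding a horse",
             generator=generator).images[0]
image.save("regenerated.png")

# Re-running with the identical seed, prompt, and settings regenerates the
# same image - which is what lets you verify provenance when you *do* have
# the original model.
```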

[-] ConsciousCode@beehaw.org 8 points 11 months ago

I respect your boldness in asking these questions, but I don't feel like I can adequately answer them. I wrote a six-paragraph essay, but after using GPT-4 as a sensitivity reader, I don't think I can post it without some kind of miscommunication or unintentional hurt. Instead, I'll answer the questions directly by presenting non-authoritative alternate viewpoints.

  1. No idea, maybe someone else knows
  2. That makes sense to me; I would think there'd be strong pressure to present fake content as real to avoid getting caught, but they're already in deep legal trouble anyway, and I'm sure they get off to it too. It's hard to know for sure because the stigma makes the data both biased and sparse - good luck getting anyone to volunteer that information.
  3. I consider pedophilia (ie the attraction) to be amoral but acting on it to be "evil", a la noncon, gore, necrophilia, etc. That's just consistent application of my principles, though, as I haven't humanized them enough to care whether pedophilia itself is illegal. I don't think violent video games are quite comparable, because humans normally abhor violence, so there's a degree of separation, whereas CP is inherently attractive to them. More research is needed, if we as a society care enough to do it.
  4. I don't quite agree; rights are hard won and easily lost, but we seem to gain them over time. Take trans rights to healthcare, for example: first it wasn't available to anyone, then it was available to everyone (trans or not), now we have reactionary denials of those rights, and soon we'll get those rights for real, like what happened with gay rights. Also, I don't see what rights are lost by arguing for the status quo that pedophilia remain criminalized. If MAPs are any indication, I'm not sure we're ready for that tightrope, and there are at least a dozen marginalized groups I'd rather see get rights first. Unlike with gay people, for instance, being "in the closet" is a net societal good here, because there's no valid way to present that publicly without harming children or eroding their protections.

[-] ConsciousCode@beehaw.org 3 points 11 months ago

The legality doesn't matter; what matters is that the sites will be flooded and could be taken down if they can't moderate fast enough. The only long-term viable solution is image classification, but that's a tall ask to build from scratch.
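
For illustration only, here's roughly what dropping an off-the-shelf classifier into a moderation queue could look like with the transformers pipeline API; the model id is a placeholder, not a real checkpoint, and the threshold is arbitrary:

```python
# Hypothetical sketch of automated image triage for a moderation queue.
from transformers import pipeline

MODEL_ID = "your-org/unsafe-image-classifier"  # placeholder, not a real model
classifier = pipeline("image-classification", model=MODEL_ID)

def should_quarantine(image_path: str, threshold: float = 0.8) -> bool:
    """Flag an upload for human review if the top 'unsafe' score is high."""
    scores = classifier(image_path)  # list of {"label": ..., "score": ...}
    unsafe = max((s["score"] for s in scores if s["label"] == "unsafe"),
                 default=0.0)
    return unsafe >= threshold
```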

[-] ConsciousCode@beehaw.org 2 points 11 months ago

I think a sizable portion of the Republican voter base is people whose families have voted Republican for generations and who haven't thought about it enough to break the habit - or even worse, people who value group cohesion over truth, because everyone around them is Republican and speaks in that rhetoric. Why else would they vote for a party so wildly against their own best interests?

[-] ConsciousCode@beehaw.org 2 points 11 months ago

You're right, apologies - I skimmed too hard.

[-] ConsciousCode@beehaw.org 1 points 11 months ago

There are a lot of papers that propose adding new tokens to elicit one behavior or another, though for some reason I haven't seen them catch on. A new token means adding a new trainable static vector, which would initially be nonsensical, and you'd want to retrain it on a comparably sized corpus. This is a bit speculative, but I think introducing a token totally orthogonal to the original domain (something like smell, which has no textual analog) would require compressing some of the existing dimensions to make room for that subspace; otherwise the model would have a form of synesthesia, relating the new token to its neighboring subspaces. If the new token is still within the original domain, though, you can get a good enough initial approximation from a linear combination of existing token embeddings - eg if a monkey-with-a-hat emoji comes out, initialize the new token as monkey emoji + hat emoji, then finetune.
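
As an illustration of that linear-combination initialization (my own sketch, not anything from the thread), here's roughly what it looks like with Hugging Face transformers and GPT-2; the token name and the "monkey"/"hat" pieces are made up:

```python
# Add a new token and seed its embedding with the average of related tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register the new token (a made-up "monkey with a hat" stand-in).
tokenizer.add_tokens(["<monkey_with_hat>"])
model.resize_token_embeddings(len(tokenizer))  # appends one untrained row

with torch.no_grad():
    emb = model.get_input_embeddings().weight        # (vocab_size, d_model)
    # Average the embeddings of the "monkey" and "hat" pieces as the
    # initial guess for the new token's vector.
    parts = tokenizer(" monkey hat", add_special_tokens=False)["input_ids"]
    emb[-1] = emb[parts].mean(dim=0)

# From here you'd finetune so the new token settles into its own meaning.
```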

The most extreme option: increase the embedding dimensionality so the original subspaces are unaffected and the new tokens take up the new dimensions. This is extreme because it means resizing every matrix in the model, which even for smaller models is many thousands of parameters, and performance would tank until it got a lot more retraining.
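
A toy sketch of why that's so invasive, with made-up sizes - widening just the embedding table is easy, but every other weight that touches the hidden dimension needs the same treatment:

```python
import torch
import torch.nn as nn

# Toy, made-up sizes purely to show the bookkeeping involved.
vocab_size, d_old, d_new = 50257, 768, 832

old_emb = nn.Embedding(vocab_size, d_old)

# Widen the embedding table: copy the old vectors into the first d_old
# dimensions and leave the extra dimensions zero-initialized.
new_emb = nn.Embedding(vocab_size, d_new)
with torch.no_grad():
    new_emb.weight.zero_()
    new_emb.weight[:, :d_old] = old_emb.weight

# The painful part: every matrix that consumes or produces d_old-sized
# vectors (attention projections, MLPs, layer norms, the LM head) needs the
# same surgery, and the model needs heavy retraining before performance
# recovers.
```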

(deleted original because I got token embeddings and the embedding dimensions mixed up, essentially assuming a new token would use the "extreme option").

Considering the potential of the fediverse, is there any version of that for search engines? Something to break up a major point of internet centralization, fragility, and inertia to change (eg Google will never, ever, offer IPFS searches). Not only would decentralization be inherently beneficial, it would mean we're no longer compelled to hand over private information to centralized unvetted corporations like Google, Microsoft, and DuckDuckGo.
