FatCrab

joined 1 year ago
[–] FatCrab@lemmy.one 1 points 1 week ago (3 children)
(1) her position wrt Israel and Palestine wasn't clear when she was nominated (though I don't think it was all that hard to anticipate, but here we are); (2) the upcoming vote isn't for her nomination to the Democratic ticket, is it?

No one is saying they don't wish the practical reality in which we live were better, but we are looking at two realistic choices right now. One choice will not only greatly worsen the situation and almost undoubtedly lead to more suffering and death in the Levant, it is also quite literally Netanyahu's preferred choice. The other has in the past, before soliciting as many US votes as possible, at least displayed a willingness to criticize the Israeli government and modulate US policy toward it. So I dunno what to tell you. At the end of the day, I'm pro-Palestinians-not-being-murdered, and could give a fuck about signaling on social media, so I make practical choices to facilitate my as-many-Palestinians-as-possible-not-being-murdered preference. Maybe you don't have that in common with me.

[–] FatCrab@lemmy.one 10 points 1 week ago

Moreover, because of the conservative-court-led overturning of Chevron, combined with bullshittery about standing, the FDA, like all other agencies, has at best questionable power.

[–] FatCrab@lemmy.one 3 points 1 week ago (5 children)

If you don't believe that strategic voting is critical to achieving what are inherently long-term goals, then we have little in common.

[–] FatCrab@lemmy.one 1 points 2 weeks ago (1 children)

AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow--and, because of the stakes at hand, they're not going anywhere.

The first is just straight regulatory. Regulators don't have a very good or very consistent working framework to apply to these technologies, but that's in part due to how vast the field is in terms of application. The second is somewhat related to the first but is also very market-driven: the explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don't just want predictions/detections--they want and need to understand why a model "thinks" what it does. Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard.
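
For a concrete (if toy) sense of what "explainability" tooling looks like, here's a minimal sketch of one common technique, gradient-based saliency, in PyTorch. The model and shapes are hypothetical stand-ins, and nothing here approaches a regulator-grade explanation method:

```python
import torch

def saliency_map(model: torch.nn.Module, scan: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude for the model's top predicted class:
    a crude 'what drove this prediction' heatmap."""
    model.eval()
    scan = scan.clone().requires_grad_(True)  # track gradients w.r.t. the input
    logits = model(scan.unsqueeze(0))         # add a batch dimension
    top = int(logits.argmax(dim=1))           # index of the top predicted class
    logits[0, top].backward()                 # d(top logit) / d(input pixels)
    return scan.grad.abs()                    # large values = influential pixels

# Toy stand-in for a real imaging classifier, just to show the call shape:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 2))
heatmap = saliency_map(model, torch.rand(3, 8, 8))  # same shape as the input
```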

I think it's an enormous oversimplification to say modern AI is just "fancy signal processing," unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, whether explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those "given" rules. What no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that it simply isn't necessary for an enormous portion of the value proposition of "AI" to be realized.

[–] FatCrab@lemmy.one 9 points 2 weeks ago

Summary judgment is not a thing separate from a lawsuit. It's literally a standard filing made in nearly every lawsuit (even if just as a Hail Mary). You referenced "beyond a reasonable doubt" earlier. That is also not the standard used in (US) civil cases--the standard is typically preponderance of the evidence.

I'm also not sure what you mean by "court approved documentation." Different jurisdictions approach contract law differently, but courts don't "approve" most contracts--parties allege there was a binding contractual agreement, present their evidence to the court, and a mix of judge and jury determines whether, under the jurisdiction's laws, an enforceable agreement was formed and how it can be enforced (i.e., are the obligations severable, what damages are available, etc.).

[–] FatCrab@lemmy.one 1 points 3 weeks ago

There's plenty you could do if no label is produced with a sufficiently high confidence. These are continuous systems, so the idea of "rerunning" the model isn't that crazy, but you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert the path, and I'm sure an actual domain and subject matter expert--or a whole team of them--could come up with plenty more.

But while we're on the topic, it's not really right to even label these scores "confidence" values--they're just output weightings associated with the respective labels. We've sort of decided they vaguely correspond to something approximating confidence, but they aren't based on a ground truth like I'm understanding your comment to imply--they derive entirely from the trained model weights and their confluence. Don't really have anywhere to go with that thought beyond the observation itself.
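
To make the fallback idea concrete, a hedged sketch of the dispatch I'm describing, with hypothetical names and an arbitrary threshold--a real perception stack is obviously vastly more complicated:

```python
from typing import List, Optional, Tuple

CONFIDENCE_FLOOR = 0.85  # hypothetical threshold; tuned per deployment in reality

def choose_action(detections: List[Tuple[str, float]]) -> str:
    """Pick a vehicle action from one frame's (label, score) detections."""
    best: Optional[Tuple[str, float]] = max(
        detections, key=lambda d: d[1], default=None
    )
    if best and best[1] >= CONFIDENCE_FLOOR:
        return f"act_on:{best[0]}"       # normal path: trust the detection
    # No label cleared the floor: degrade gracefully rather than guess.
    # A real stack might rerun on the next frame, shed speed, or pull over.
    return "slow_down_and_reevaluate"

print(choose_action([("pedestrian", 0.97)]))                     # -> act_on:pedestrian
print(choose_action([("pedestrian", 0.41), ("mailbox", 0.38)]))  # -> fallback
```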

[–] FatCrab@lemmy.one 2 points 3 weeks ago (2 children)

Are you under the impression that I think Tesla's approach to AI and computer vision is anything but fucking dumb? The person said something stupid and patently incorrect. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.

[–] FatCrab@lemmy.one 3 points 3 weeks ago (4 children)

All probabilistic models output a confidence value, and it's very common and basic practice to gate downstream processes around that value. This person just doesn't know what they're talking about. Though, that puts them on about the same footing as Elono when it comes to AI/ML.
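
For what that gating looks like in practice, here's a minimal sketch with hypothetical names and an arbitrary threshold: the downstream step simply refuses to act unless the top softmax score clears it.

```python
import numpy as np

def gated_prediction(logits: np.ndarray, threshold: float = 0.9):
    """Return the top class index, or None if its confidence is too low."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    top = int(probs.argmax())
    # Below threshold, downstream logic defers/falls back instead of acting.
    return top if probs[top] >= threshold else None

print(gated_prediction(np.array([4.0, 0.1, 0.1])))  # confident  -> 0
print(gated_prediction(np.array([1.0, 0.9, 0.8])))  # ambiguous -> None
```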

[–] FatCrab@lemmy.one 4 points 3 weeks ago (1 children)

Maybe I'm wrong, and definitely correct me if so, but I thought the Houthis formed well before the Saudi-led effective genocide in Yemen. In fact, isn't the current conflict the result of the Houthis basically couping the preceding government? If that's the case, it doesn't make much sense to characterize them as a resistance or reactive force against anything external.

[–] FatCrab@lemmy.one 4 points 4 weeks ago (1 children)

What? There have been significantly more policy and bill proposals targeting, among others, Black men from the Democratic Party than from the Republican Party (at least in a positive way). This is kind of a wild assertion. Most of the street interviews and the like that I've seen have basically boiled down to misogyny.

[–] FatCrab@lemmy.one 3 points 1 month ago

I've worked on processing submissions for this project. Honestly, it probably ends up just costing them more to do this program, which is mostly just a paid PR activity. The overwhelming majority of submissions, and I mean like 99%, are either not prior art in the sense of patent law or were already retrieved by the law firm on the case.

[–] FatCrab@lemmy.one 2 points 1 month ago (1 children)

I agree. I think AI-generated material effectively entering the public domain, in combination with the reporting/marking laws coming online, is an effective incentive for large corporate actors who don't like releasing things from their own control to keep a lot of material human-made.

What I'd like to see in addition to this is a requirement that content-producing models all be open source as well. Note, I don't think we need weird new IP rights that are effectively a "right to learn from" or the like.
