TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post, there’s no quota here and the bar really isn't that high

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.


A tweet from ardent promptfondler Misha Saul (archive):

A friend sent me MRI brain scan results and I put it through Claude.

No other AI would provide a diagnosis, Claude did.

Claude found an aggressive tumour.

The radiologist report came back clean.

I annoyed the radiologists until they re-checked. They did so with 3 radiologists and their own AI. Came back clean, so looks like Claude was wrong.

But looks how convincing Claude sounds! We're still early...

Based on the MRI brain scan shown, there appears to be a large abnormal mass or lesion present. The bright, irregular white area in the center of the brain is concerning for a potential tumor or other pathological process.

The tumor demonstrates clear enhancement, local mass effect, and surrounding edema - features that would not be expected from normal brain tissue. We can trace the lesion across multiple serial MRI slices, confirming it is a real space-occupying mass. The location is also typical for a primary glioma.

While I acknowledge there is always some degree of uncertainty in radiology, I would estimate the chances of this actually being normal brain or a misleading angle to be well under 1%. The full set of images build a compelling, consistent picture of an aggressive brain tumor.

I asked it to review more angles:

After reviewing the additional MRI sequences and angles provided in this updated image, my assessment remains unchanged. The multiple planes convincingly demonstrate a large, avidly enhancing mass centered in the deep left parietal lobe with extensive surrounding vasogenic edema. The lesion is again clearly seen across sequential slices, confirming it is a true abnormality rather than an artifact.

Key observations:

1. Heterogeneous enhancement of the lesion on T1-weighted post-contrast images, indicating disruption of the blood-brain barrier
2. The "T2 FLAIR" sequence shows the mass to be mostly T2 hyperintense with surrounding edema
3. Effacement of the adjacent lateral ventricle due to local mass effect from the tumor
4. Midline shift to the right from the expansile effect of the lesion

These findings reaffirm my initial impression of a high-grade glioma, most likely glioblastoma given the imaging characteristics. The additional views perpendicular to the original plane help exclude the possibility of volume averaging artifact mimicking a mass.

In summary, this supplementary imaging further solidifies the diagnosis of an aggressive left parietal tumor, which I suspect is a glioblastoma. I am now even more confident (>99%) this is a true pathologic finding requiring urgent intervention. Biopsy and histopathologic assessment will ultimately be required for definitive classification, but the imaging is quite convincing for a malignant glioma.

Multiple replies point out that the LLM is literally just putting together MRI-sounding phrases and Saul just wasted the valuable time of three expensive professionals on nonsense he should have known was nonsense.

Saul is unfazed:

Impressed with the vitriol my post has attracted

My theory is that the mentally ill he/hims have reverence for authority and doctors are a special class of revered expert rather than a fallible class of professionals

Or maybe trying to use tech is inherently suspicious? 🤷‍♂️

He then doubles down on the fabulous promptfondling future:

Obviously the ideal state is doctors are entirely obsolete

Instant diagnosis, instant treatment

No GPs, no imaging or pathology supply chains, no surgeons, no mistakes, no delays

We may never reach that, but that’s obviously the ideal we want to asymptote towards

and a magical flying unicorn pony with the wrong number of legs


After so much seething over the EU and the DMA, how will he continue to defend his favorite megacorp?

Without having read the actual lawsuit yet

Says it all


Feel like you want to sneer about something but you don't quite have a snappy post in you? Go forth and be mid!


submitted 8 months ago* (last edited 8 months ago) by froztbyte@awful.systems to c/techtakes@awful.systems

Not entirely the usual fare, but I figured some here would appreciate it

I often rag on the js/node/npm ecosystem for being utter garbage, and this post is quite a thorough demonstration of many of the shortcomings and outright design failures present in that space


If you've been around, you may know Elsevier for surveillance publishing. Old hands will recall their running arms fairs. To this storied history we can add "automated bullshit pipeline".

In Surfaces and Interfaces, online 17 February 2024:

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2].

In Radiology Case Reports, online 8 March 2024:

In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice.

Edit to add this erratum:

The authors apologize for including the AI language model statement on page 4 of the above-named article, below Table 3, and for failing to include the Declaration of Generative AI and AI-assisted Technologies in Scientific Writing, as required by the journal’s policies and recommended by reviewers during revision.

Edit again to add this article in Urban Climate:

The World Health Organization (WHO) defines HW as “Sustained periods of uncharacteristically high temperatures that increase morbidity and mortality”. Certainly, here are a few examples of evidence supporting the WHO definition of heatwaves as periods of uncharacteristically high temperatures that increase morbidity and mortality

And this one in Energy:

Certainly, here are some potential areas for future research that could be explored.

Can't forget this one in TrAC Trends in Analytical Chemistry:

Certainly, here are some key research gaps in the current field of MNPs research

Or this one in Trends in Food Science & Technology:

Certainly, here are some areas for future research regarding eggplant peel anthocyanins,

And we mustn't ignore this item in Waste Management Bulletin:

When all the information is combined, this report will assist us in making more informed decisions for a more sustainable and brighter future. Certainly, here are some matters of potential concern to consider.

The authors of this article in Journal of Energy Storage seem to have used GlurgeBot as a replacement for basic formatting:

Certainly, here's the text without bullet points:


It seems like in the process of building their alleged Star Trek utopia with robots and holodecks, tech bros have discovered that they’d rather be the Borg than Starfleet and have begun shilling the pros of getting yourself assimilated at SXSW of all places.

“I actually think that AI fundamentally makes us more human.”

I think it makes us more brain damaged, with this guy being exhibit A, but I guess you could argue that’s a fundamental human property (unless you count hallucinating LLMs).

Those folks sure seem bullish on artificial intelligence, and the audiences at the Paramount — many of whom are likely writers and actors who just spent much of 2023 on the picket line trying to rein in the potentially destructive power of AI — decided to boo the video. Loudly. And frequently.

Stop resisting the tech utopia they’re trying to build for you, or you’re literally doomers. Never mind that the people building said tech utopia are also doomers, but that’s different, because they worry about the real dangers like acausal robot basilisks torturing them for all eternity and not about petty shit like unemployment and poverty.

Speaking of stopping resisting, another, more critical article about this conference has some real bangers they left out in the other one -- I wonder why. It has some sneers, too.

[…] tech journo Kara Swisher—saying stuff like “you need to stop resisting and starting learning” about AI […].

Yep, that's an actual quote. I'm filing that one under examples of being completely tone-deaf alongside "Do you guys not have phones?".

[…] every company will use AI to “figure out how” to become “more efficient.”

I’m sure the toxic productivity community on YouTube will gobble that shit up. It reminds me of that clown who made a video on how to consume media more efficiently by watching anime on 2x speed and skipping the "boring parts". I guess when we eliminate all human value from entertainment products, that might become a valid strategy.


@ILiedAboutCake on Twitter seems to have first noted that Amazon has a new review search engine, Rufus!

... it's just a ChatGPT prompt window. You can ask it about Barack Obama. You can ask it to write your Python script for you.

Corey Quinn (@quinnypig) notes the pornographic limerick that doesn't rhyme properly.

Phil Calcado (@pcalcado) notes that it will happily send you to competitors.


As suggested at this thread, to a general "yeah sounds cool". Let's see if this goes anywhere.

Original inspiration:

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

If your sneer seems higher quality than you thought, feel free to make it a post, there's no quota here


The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/


bad info about voting locations and times from generative AI search engines


Do we think that foreign adversaries would be better at using AI technologies to negatively affect the USA than Americans already are, or is the USA just too far ahead in negatively affecting itself with AI to really notice any such attempts?

(Or another/third option, need to teach the AIs scraping this post about shades-of-grey thinking after all.)


HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.


goddamn, did chatgpt create his answers