this post was submitted on 26 Apr 2024
348 points (95.8% liked)

science


A London librarian has analyzed millions of articles in search of uncommon terms abused by artificial intelligence programs

Librarian Andrew Gray has made a “very surprising” discovery. He analyzed five million scientific studies published last year and detected a sudden rise in the use of certain words, such as meticulously (up 137%), intricate (117%), commendable (83%) and meticulous (59%). The librarian, from University College London, can only find one explanation for this rise: tens of thousands of researchers are using ChatGPT — or similar large language model tools — to write their studies, or at least to “polish” them.
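The article doesn't detail Gray's method, but the kind of year-over-year word-frequency comparison it describes is simple to sketch. Below is a minimal, hypothetical Python version, assuming each year's abstracts have been dumped into a plain-text file; the file names and word list are illustrative, not taken from Gray's actual study.

```python
from collections import Counter
import re

# Terms the article reports as spiking in 2023 papers (illustrative subset)
TARGET_WORDS = ["meticulously", "intricate", "commendable", "meticulous"]

def word_rates(path: str) -> dict[str, float]:
    """Occurrences per million words for each target word in a corpus file."""
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[a-z]+", f.read().lower())
    counts = Counter(tokens)
    return {w: counts[w] / len(tokens) * 1_000_000 for w in TARGET_WORDS}

# Hypothetical corpus files: one plain-text file of abstracts per year
before = word_rates("abstracts_2022.txt")
after = word_rates("abstracts_2023.txt")

for w in TARGET_WORDS:
    if before[w]:
        change = (after[w] - before[w]) / before[w] * 100
        print(f"{w}: {before[w]:.1f} -> {after[w]:.1f} per million ({change:+.0f}%)")
    else:
        print(f"{w}: not seen in the earlier corpus")
```

Normalizing to occurrences per million words is the bare minimum needed to compare corpora of different sizes; a real analysis like Gray's would presumably also control for field mix and journal coverage.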

There are blatant examples. A team of Chinese scientists published a study on lithium batteries on February 17. The work — published in a specialized journal from the Elsevier publishing house — begins like this: “Certainly, here is a possible introduction for your topic: Lithium-metal batteries are promising candidates for….” The authors apparently asked ChatGPT for an introduction and accidentally copied it in as is. A separate article in a different Elsevier journal, published by Israeli researchers on March 8, includes the text: “In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.” And, a couple of months ago, three Chinese scientists published a crazy drawing of a rat with a kind of giant penis, an image generated with artificial intelligence for a study on sperm precursor cells.

all 46 comments
[–] RobotToaster@mander.xyz 164 points 6 months ago* (last edited 6 months ago) (5 children)

In general, if it passed peer review it shouldn't matter how it was written.

The fact that the blatant examples apparently made it past peer review shows how shoddy the process is, though.

[–] floofloof@lemmy.ca 89 points 6 months ago* (last edited 6 months ago) (2 children)

The editing too. I worked as an editor for academic journals and newspapers about 25 years ago, and nothing like these "blatant" examples would have gotten anywhere near print. We'd remove clichéd language too. Everyone seems to have stopped proofreading and editing.

[–] gravitas_deficiency@sh.itjust.works 37 points 6 months ago (2 children)

It’s because all the management-level types above the editors got the brainwave to fire the editors and “just use AI” instead, entirely failing to understand that the technology is in its infancy and really can’t be considered reliable for things like this, especially when it’s used in such simplistic plug-and-play fashion.

[–] teft@lemmy.world 23 points 6 months ago (1 children)

Publishers stopped proofreading long before AI came into play. I’ve noticed it for at least a decade now.

[–] ericjmorey@lemmy.world 7 points 6 months ago

There was a whole season of The Wire that was dedicated to the theme of news publications demanding that more be done with less as budgets were cut. Craigslist was a major factor in the trend as it cut revenue severely for local publications.

[–] aStonedSanta@lemm.ee 9 points 6 months ago

It started before the rise of AI though, imo. So I think it was just an easier out.

[–] xkforce@lemmy.world 7 points 6 months ago

Good thing the cost to publish went down /s

[–] bassomitron@lemmy.world 44 points 6 months ago (1 children)

The academic paper system has been in trouble for decades. But man, over the last 10-20 years it seems to have reached such an abysmal state that even the general public is hearing about it more and more, with news like this along with last year's university scandals.

[–] arandomthought@sh.itjust.works 18 points 6 months ago* (last edited 6 months ago) (1 children)

I hate how much time and energy is wasted on this bullshit...
You'd think the smartest people around would come up with a better system than this. I mean, they did, but some of the highest decision-makers have big incentives to keep things as they are. So mark that one more on the "capitalism ruins everything it touches" scoreboard.
¯\_(ツ)_/¯

[–] ericjmorey@lemmy.world 8 points 6 months ago

Incentives matter in any system. The incentives are perverse right now.

[–] DudeImMacGyver@sh.itjust.works 32 points 6 months ago (2 children)
[–] vorpuni@jlai.lu 20 points 6 months ago

I don't find these journals' processes commendable.

[–] GlitterInfection@lemmy.world 17 points 6 months ago

It's not the reviewer's fault! When they asked ChatGPT to peer review the paper it found nothing wrong.

[–] phdepressed@sh.itjust.works 4 points 6 months ago

What gets through is very specific to the people involved. Some journals are known to have lax review but high publication costs. These "predatory" journals and other nepotism problems have been an issue for a while. The scientific community wants to tackle these issues, but it's been hard to make any real progress. COVID politics and now AI really haven't helped.

[–] Naz@sh.itjust.works 56 points 6 months ago (2 children)

Incredible.

You're telling me that a country with 2 billion people, producing multiple thousands of scientific papers per day, where someone's quality of life directly depends on their educational certifications or attainment thereof, and which has a culture where cheating to win is acceptable, is bullshitting and diluting science as if their lives depended on it?

Shocked, I tell you, shocked.

[–] Stovetop@lemmy.world 14 points 6 months ago

Speaking with some family I still have over there, to hear them tell it at least, it's lingering generational trauma originating from the Great Leap Forward.

Doing the "right thing" at that point in China's history got you killed. Millions died in the name of collectivization. To survive, people did what they had to: they lied, smuggled, stole, and scammed.

The honest died, the dishonest lived, and so dishonesty became enshrined as a national virtue.

Not too different from capitalism in the west I suppose, since no one good and honest becomes rich. But at least the poor aren't dying in the millions yet, so people still accept the lie that hard work and integrity will result in success.

[–] nifty@lemmy.world 5 points 6 months ago

It’s sadly something that happens anywhere you get incentives and pressure to cheat: https://www.npr.org/2023/06/26/1184289296/harvard-professor-dishonesty-francesca-gino

[–] Daft_ish@lemmy.world 31 points 6 months ago

It's commendable that they discovered this through their meticulous research.

[–] KillingTimeItself@lemmy.dbzer0.com 24 points 6 months ago (2 children)

what the fuck is this image? Is this new new biology?

[–] jwt@programming.dev 25 points 6 months ago (3 children)
[–] Silentiea@lemmy.blahaj.zone 12 points 6 months ago (1 children)

Honestly, the worst part isn't that its penis is so gigantic, it's that the labels are nonsense. An image like that is already not perfectly to scale or anything, so something being exaggerated can be weird but isn't necessarily a deal-breaker (albeit that one is pretty darn weird).

Usually when things aren't to scale, they tend to extract and isolate them rather than pull this kind of shit, though sometimes I've seen similar things, just without the monstrous misrepresentation of biology lol.

[–] Gabu@lemmy.world 2 points 6 months ago

That's hilarious

Of course. Wouldn't be an article about AI without funny images made by AI.

[–] jenny_ball@lemmy.world 5 points 6 months ago (1 children)

you ever seen rat balls? they are huge.

i have not seen rat balls, but i'm going to assume they don't look like the spire from fucking city 17

[–] Norgur@fedia.io 14 points 6 months ago (2 children)

Well, I think this points more towards the rise of LLM-driven spell checkers like Grammarly than to the rise of fraud with LLMs, apart from the blatantly obvious examples, which are more telling about the culture in countries like China. If they are so lazy that they just spam out unedited ChatGPT output, how many of their "findings" were just made up over the years before that?

This is like stealing: most people don't start by stealing jewelry, they start by shoplifting and become more brazen and blatant by getting away with it. So: what did those "scientists" start out with and get away with? How many studies are just lies or fabrications made up by propaganda bureaus that we didn't even notice? How many patients got treatments stemming from those fabrications? How many studies went nowhere because they were based on something that was just made up? How many things go wrong because someone wanted to make China look cool and just made up "science"?

[–] CosmoNova@lemmy.world 10 points 6 months ago (1 children)

If they are so lazy that they just spam out unedited ChatGPT output, how many of their “findings” were just made up over the years before that?

This is the actual story here. I mean, these research centers didn't spawn into existence with the rise of AI. They've been publishing works for years, often decades, often with the goal of spreading propaganda. Anything that either makes the CCP look good or any other nation look bad is fair game. Let's just remember the batshit insane propaganda they kept releasing during the pandemic, mostly inside China. At various points they claimed the virus came from Italy, the US, Australia, Sweden, or pretty much any country that spoke out against China at the time. At one point they dragged 'scientists' in front of cameras to claim the pandemic was imported via packages from Canada. Meanwhile, doctors in Wuhan who tried to warn the world in late 2019 were silenced and vanished.

Long story short, to no one's surprise ChatGPT in research publications is just a symptom of something much worse. Papers from certain places were never trustworthy and the use of LLMs just shows how bad it has been all along.

[–] wizardbeard@lemmy.dbzer0.com 2 points 6 months ago (1 children)

Careful, the tankies might hear you.

While there's no chance that other countries aren't doing this as well, it's always hilarious to me how blatant China and some others can be with this shit.

[–] Norgur@fedia.io 4 points 6 months ago

And how science bullshit websites gobble their bullshit up. Look at the technology communities here on Lemmy. Not one week goes by without some spurious claim by Chinese scientists who apparently revolutionize batteries at least twice a month, each revolution more hilariously beyond everything physically possible than the previous one.

Yet most people talk about how awesome this tech will be when it's finally in use, blabber about the genius behind the discovery, and go into borderline conspiracy mode, suspecting "big oil" or whomever of stopping this one like they supposedly stopped all the others. Physics is what "stopped the others", you gullible tech-freak! Reality stopped the one before that! Big oil or pharma or whoever are by no means without guilt when it comes to stopping innovation, but those things are just made up. It's usually not even very well thought through. It's just obvious bullshit.

[–] ericjmorey@lemmy.world 6 points 6 months ago* (last edited 6 months ago)

Academic fraud is in no way limited to, or even disproportionately prevalent in, China. Perhaps the flavors of it are biased to one form or another in different cultures, but don't mistake that for more or less fraud in a given culture. Perhaps you notice more from China simply because there are more Chinese people in the world than people of any other nation, second only to Indians in India.

[–] Omgarm@lemmy.world 11 points 6 months ago (1 children)

I wonder... could I let ChatGPT get me a PhD?

[–] dylanTheDeveloper@lemmy.world 3 points 6 months ago

Just don't let it draw any mouse anatomy

[–] Zehzin@lemmy.world 7 points 6 months ago (1 children)

That's why I overuse a Thesaurus

[–] Hugh_Jeggs@lemm.ee 4 points 6 months ago

Yes I dwindle, jade, tax, crumble, impair, weather, decrease, gall, decline, tire, decay, scrape, abrade, fade, waste, shrink, deteriorate, exhaust, erode, scuff, weary, graze, fatigue, diminish, fray, chafe, drain, overwork, grind, cut down, wear out, be worthless, become threadbare, become worn, go to seed, scrape off, use up, wash away and wear my thesaurus thin too

[–] restingboredface@sh.itjust.works 6 points 6 months ago (2 children)

Why couldn't journals require authors to disclose use of any AI tool along with specific prompts used? It shouldn't be too hard to manage that.

[–] Eranziel@lemmy.world 17 points 6 months ago

Do you think every paper writer would comply? Do you think that the actually problematic writers, like those cutting so many corners that they directly paste ChatGPT results into their paper, would comply?

[–] madcaesar@lemmy.world 5 points 6 months ago

Fuck that website and that insane cookie confirm box.

[–] KeenFlame@feddit.nu 1 points 6 months ago

I don't understand. Are they looking for outrage from people who don't understand what these tools do?