AI oof in peer reviewed paper...

Major oof. Especially given it was peer reviewed and nobody caught it. That's as bad as the lawyer who used ChatGPT to write his court filing.
 
Geez... I guess it's good they didn't copy-paste the prompt as well. I wonder if this was because of their lack of English skills or if they were just being lazy. There is no fighting this madness.
 
To be fair, getting something into a peer-reviewed journal is the start of the peer-reviewing process. Journals do reviews ahead of time, but it's often done by volunteers. Publishing corrections is often viewed as a normal part of the lifecycle of a paper.
 
Major oof. Especially given it was peer reviewed and nobody caught it. That's as bad as the lawyer who used ChatGPT to write his court filing.
That's the most egregious / funny thing here. I bet the publisher is rethinking their peer review process now...
 
Wonder how often the phrase "Certainly, here are some..." pops up in Literotica stories
"My god, I wish I could describe how perfect your tits are," he prompted her.

"Certainly. Here are some commonly used descriptors for attractive female breasts: ... "

"I was hoping for something more unusual. Your tits are anything but common."

"Of course. Here are some terms found in the lower quartile of frequency in popular usage: globes, gazongas, melons, funbags... "

""My, you sure are an interesting person, Charlotte Gupta."

"I do my best to be interesting to the male homo-sapiens that I interact with."
 
To be fair, getting something into a peer-reviewed journal is the start of the peer-reviewing process. Journals do reviews ahead of time, but it's often done by volunteers. Publishing corrections is often viewed as a normal part of the lifecycle of a paper.
I suppose it's different depending on the journal, but generally, if the paper was accepted and published, then besides the reviewers it would have had to pass through an editorial team (or at least an editor) and be reviewed and approved by the journal's editor-in-chief... ostensibly, of course 🙃
 
I suppose it's different depending on the journal, but generally, if the paper was accepted and published, then besides the reviewers it would have had to pass through an editorial team (or at least an editor) and be reviewed and approved by the journal's editor-in-chief... ostensibly, of course 🙃
Yeah, it's no excuse to not do an editing pass.
 
"My god, I wish I could describe how perfect your tits are," he prompted her.

"Certainly. Here are some commonly used descriptors for attractive female breasts: ... "

"I was hoping for something more unusual. Your tits are anything but common."

"Of course. Here are some terms found in the lower quartile of frequency in popular usage: globes, gazongas, melons, funbags... "

""My, you sure are an interesting person, Charlotte Gupta."

"I do my best to be interesting to the male homo-sapiens that I interact with."
"Ooh, Charlotte, can you send me a photo of your tits?"

"Of course!"
[attachment: 1711127766998.png]
 
This doesn't surprise me that much. There have been several instances in the last ten years or so of people submitting "spoof" articles that manage to get by peer review and get published, and some of them are quite embarrassingly bad or meaningless word salad.
 
Meh. The only things important in scientific papers are the graphs, methods and equations. The rest is just fluff.
Yeah, but if this fluff was generated by AI and everybody missed it, the rest of the paper is also suspect, because AI makes shit up. And these peers become suspect too, if they didn't spot a dead giveaway.

Would you trust them to write the software that flies the plane, if they collectively can't spot that the first word is wrong?
 
AI's long shadow reaches many corners.
Taken from a peer reviewed paper.

[attachment 2330240]
Wonder how often the phrase "Certainly, here are some..." pops up in Literotica stories 🤔

Or any of the other typical ChatGPTese.

Link to paper.

FWIW, dendrites aren't a problem with LiFePO4 battery chemistry, and that's what most lithium battery manufacturers are steering toward these days. 😉
 
Apparently quite a lot of published papers have recently been exposed (link paywalled) by glaringly obvious goofs. The phrase most commonly caught in the papers is "As of my latest knowledge update".

Makes me wonder how many lit stories are out there peppered with such ChatGPT phrases...
 
Apparently quite a lot of published papers have recently been exposed (link paywalled) by glaringly obvious goofs. The phrase most commonly caught in the papers is "As of my latest knowledge update".

Makes me wonder how many lit stories are out there peppered with such ChatGPT phrases...
HAL 9000 would never get caught by such an obvious mistake. Far more cunning, he'd fake a, "Maybe in the future, the gadget will fail," scenario, and send a poor hapless astronaut out to "fix" the problem. These bots are rank amateurs compared to the master.
 
I saw a medical one recently with some blatant "as a large language model..." boilerplate in the discussion. If nothing else it makes me feel a little better about the quality of my own peer reviewing. I don't always understand everything, but I do read the whole thing.

Not a peer-reviewing thing, but gonna dump this here as possibly the worst AI-related product idea yet: an app where you upload dick pics of your prospective partner and "AI" pretends to know whether they have an STI. https://www.patreon.com/posts/100691421
 
I'd be curious to see what the peers had to say.
Damn, I should have offered my students the beers after they'd finished all my peer reviews, not beforehand!

(I have been said student. My supervisor and the other senior scientists in my lab didn't have good enough English to peer-review anything.)
 
As ElectricBlue said, AI is great at making things up. And I'm sure that could include the data... graphs can always be backfilled. A perfect example is what I mentioned above, the lawyers who used ChatGPT to write their court filing.

https://arstechnica.com/tech-policy...e-up-by-chatgpt-judge-calls-it-unprecedented/


Yep, if you ask GPT a numeric question with a precise, definitive answer ("what was the population of town X according to census Y?" kind of thing) it will often give you an answer that's plausible but wrong.

It reminds me of the time Xerox scanners were using a data compression algorithm that changed numbers in scanned docs:

https://www.dkriesel.com/en/blog/20...s_are_switching_written_numbers_when_scanning
 