AI Allegations Thread

I tested the same text from one of my stories against nine different AI detectors. Five said it was human, three said there was roughly a 34% to 66.67% chance it was AI, and one rated it a medium risk of being AI.
I was shocked to find that the two sites that said there was some chance it was written by AI also offered to fix the text so it would pass an AI detector. I suspect they flagged my text as possible AI in order to sell a service for making it pass AI detection (which it already did with five other detectors).

https://copyleaks.com/ai-content-detector - human

https://www.scribbr.com/ai-detector/ - 0% AI

https://gptzero.me/ - 56% chance AI

https://writer.com/ai-content-detector/ - 99% human

https://contentdetector.ai/ - 66.67% chance AI

https://ai-detector.net/ - medium risk that your text was written by AI.

https://www.zerogpt.com/ - 34.56% chance AI

https://contentatscale.ai/ai-content-detector/ - passes as human

https://undetectable.ai/ - appears human
 
Quite. If AI could rewrite AI to be indistinguishable from HI, that feature would have been/will be incorporated in the usual suspects. What we're seeing is a common manifestation of HI - many HIs are less intelligent than AI and can be parted from their money by a scam which is manifestly absurd in its own terms.
 
My work commissioned somebody to do a rap song about our Mission and Vision (TM) for use as the corporate hold music. Vogon poetry has nothing on it.

Oh. Oh no. And I thought my company was out of touch.

Are... are you okay? Do you need a hug? I'll hug you if you like.
 
Hugs are good! I haven't had to listen to it in quite a while, thankfully, but just thinking about it makes me cringe.
Just hearing about it gave me second order cringe. Is your company American? It's the kind of thing American companies love to do; as a Commonwealthian I look at it and have to fight down the urge to dig for the vodka.
 
My first real job was at Coopers & Lybrand. I suggested that we should kick off every week by singing "Super Cooper", a modified version of ABBA's "Super Trooper". Sadly no-one else thought it was a good idea.
 
You worked with Philistines. That is top-tier trolling, good job!
 
Is your company American? It's the kind of thing American companies love to do ...
Strictly Commonwealth nonsense, I'm afraid :-(
 
I tested the same text from one of my stories against nine different AI detectors. Five said it was human and three said there was about 34% to 66.67% chance it was AI. One rated it medium risk of being AI. ...

I tried these out on a couple of my GPT-written examples from this post.

zerogpt wasn't loading for me. Of the rest, when tested on the "car wash" example:
  • copyleaks flagged as "AI detected" - success
  • scribbr gave 42% chance of AI - I'm going to be generous and interpret that as a "don't know".
  • gptzero gave 95% AI
  • writer.com gave 97% human (i.e. 3% AI). Fail.
  • everything else gave "human"/"0% AI" results. Fail.
So out of the eight sites that loaded, only copyleaks got it clearly right for both your text and mine, with scribbr and gptzero marginal. Next I tried those three on the first AI-written block from my post; copyleaks once again flagged as "AI detected", gptzero gave 87% AI, scribbr gave 0% AI (fail).

I hit my limit of free tries with a few other examples. copyleaks was still doing okay on the examples I tried, but given the limited number of tries, I couldn't confidently say just how reliable it is.

Even if an AI detector was 95% reliable at flagging human vs. AI texts, that'd still mean dozens of false positives every week for Literotica.
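To put rough numbers on that false-positive point, the arithmetic is just volume times error rate. Both inputs below are purely illustrative assumptions, not Literotica's real figures:

```python
# Back-of-envelope false-positive estimate for a "95% reliable" detector.
# Both inputs are illustrative assumptions, not real Literotica numbers.
weekly_human_submissions = 2000  # hypothetical count of genuinely human stories
false_positive_rate = 0.05       # a 95%-reliable detector wrongly flags 5% of them

false_positives_per_week = weekly_human_submissions * false_positive_rate
print(int(false_positives_per_week))  # 100 human authors wrongly flagged per week
```

Scale the volume up or down and the conclusion holds: even a small error rate on a large site means a steady stream of wrongly rejected human authors.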
 
It starts with "We're a family here."
When you hear that - run.

There are two kinds of people in this world.

There's the kind of person who works at a company and reads Jim Collins' Good To Great and thinks, I can't wait to go on the company retreat and hear all the plans about how great we're going to become and engage in "team building" exercises. And then we'll have breakout sessions where we brainstorm about how each one of us can make our company be better than all the others, and somebody takes notes.

And then there's the kind of person who says, "You've got to be fucking kidding me."

I'm the latter sort of person.

Don't get me wrong, many companies do great work and it can be deeply satisfying to do your work well and to know that you have done a great job. But there's so much phoniness and fluff. So much fake, "We're all in this together, aren't we?" from the boss who's had half a mind to fire you for the last year.
 
I tried these out on a couple of my GPT-written examples from this post. ... Even if an AI detector was 95% reliable at flagging human vs. AI texts, that'd still mean dozens of false positives every week for Literotica.
Another problem is that a lot of these companies don't talk about their process for testing, for the same reasons they don't expose their algorithms. The "right" way to do it would be to commission human writers, a lot of them and of varying skill levels, to make new works and then run those through the algorithm to see how it does. Most of them probably don't, instead turning the algorithm loose on texts that were used to train it.
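A minimal version of that "right" evaluation is just a confusion-matrix tally over a labelled set of freshly written texts the detector has never seen. Everything here is a stand-in: `toy_detector` is a deliberately silly placeholder for a real detector API, and `samples` stands in for a commissioned corpus:

```python
# Sketch of evaluating a detector on held-out, never-before-seen texts.
# `toy_detector` and `samples` are stand-ins for a real detector API
# and a real labelled corpus of newly commissioned writing.
def toy_detector(text):
    """Placeholder detector: flags any text mentioning 'delve' as AI."""
    return "delve" in text.lower()

samples = [  # (text, was_it_actually_AI_written)
    ("We delve into the multifaceted tapestry of modern life.", True),
    ("The car wash was closed on Sundays.", False),
    ("Let us delve deeper into this topic.", True),
    ("She slammed the door and didn't look back.", False),
]

tp = sum(1 for text, is_ai in samples if is_ai and toy_detector(text))
fp = sum(1 for text, is_ai in samples if not is_ai and toy_detector(text))
fn = sum(1 for text, is_ai in samples if is_ai and not toy_detector(text))
tn = sum(1 for text, is_ai in samples if not is_ai and not toy_detector(text))
print(tp, fp, fn, tn)  # 2 0 0 2 on this toy data
```

The catch is exactly the one above: the tally is only meaningful if the test texts were never part of the detector's training or tuning data, which is why commissioning new writing matters.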
 
Yep.

I'd also add - any system that gives you "X% probability this was written by AI" type outputs is fibbing to you, because outside of cases that are clearly one or the other, that's not actually something that can be determined from the text alone. To estimate that probability, you'd need to know not only what kinds of stories AIs and humans write, but also how many stories are currently being written by humans vs. by AIs - something that is changing rapidly and, I expect, hard to measure.
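That dependence on the base rate can be made concrete with Bayes' theorem. All three numbers below are illustrative assumptions; the prior in particular is unknown and changing, which is the point:

```python
# Posterior probability that a flagged text is AI, via Bayes' theorem.
# All three inputs are illustrative assumptions; the prior (base rate
# of AI submissions) is precisely the number nobody actually knows.
p_ai = 0.10                # assumed prior: 10% of submissions are AI-written
p_flag_given_ai = 0.95     # detector sensitivity (true positive rate)
p_flag_given_human = 0.05  # detector false-positive rate

p_flag = p_flag_given_ai * p_ai + p_flag_given_human * (1 - p_ai)
p_ai_given_flag = p_flag_given_ai * p_ai / p_flag
print(round(p_ai_given_flag, 3))  # 0.679: far from the certainty a flag implies
```

Change the assumed prior and the "probability" changes with it, even though the text and the detector stayed the same. That's why a percentage computed from the text alone can't mean what these sites imply it means.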
 
Honestly, I don't know why Lit is having this problem to begin with.

This is how it should be (imo).

Author: "Here's my story."

Lit: *inspects the story* "Hmmm... This seems like it could be AI generated."

Author: "It isn't."

Lit: "Please sign this digital waiver attesting that the work is indeed your own. If it is later discovered that your work is AI generated, it will be removed, and you will be subject to an account ban."


The truth is, AI generated stories (at the moment) suck. If people post them, they won't be popular, so why does Lit care, as long as it can claim plausible deniability?
 
There's the kind of person who works at a company and reads Jim Collins' Good To Great and thinks, I can't wait to go on the company retreat and hear all the plans about how great we're going to become and engage in "team building" exercises. ...
But team building can be fun.
 
Definitely understand that now. If I ever run into it in the future, I will heed this advice. Thanks.
Even the text boxes that can save, like this one in the forums, can be unreliable. Write in a word processor, where you can save and edit at your leisure; it's what they exist for. That way you're least likely to lose your work to a page refresh, site maintenance, a browser crash, a phone or laptop dying, etc. The bonus is that you have a master copy. Always keep a master copy, several to be on the safe side. I use MS Word mobile and don't outright trust it, or OneDrive, so I save copies to my SD card, Google Drive, and Dropbox. For stuff I've written on a computer, a thumb drive and a portable hard drive as well. The only real exception is FanFictionNet, which lets you upload into a personal file space to edit and publish from, with uploads expiring a year from the date they were added. I still do all that other stuff, though.
 
This happened to me twice in the last couple of months. I understand the need to make sure people aren't using AIs to write their stories (that's just plain cheating and you should be ashamed of yourselves), but they need to refine their algorithm for deciding which stories could have been written by an AI. I just had to resubmit the same stories with a note to the admins explaining why they were originally rejected and assuring them the stories weren't written by an AI, and then the stories were accepted. Hopefully they can fix this situation, because avoiding these false AI flags would be less problematic for both sides.


...
 
I may have a theory for some of these false AI flags. It may have already been discussed here, but there are too many posts in this thread to check. For the two stories I submitted that were sent back over this, I used a different grammar/spell-check program that also suggested better sentence structure. It's possible the program made the stories look TOO well written to have been written by a human. Let's face it, even with the best writers here on Lit, you can find some grammar issues. But if a story looks too well written for any regular person to produce, even with a grammar program, it could make the AI detectors think it wasn't written by a human.

Has anyone else seen this as a possibility?


...
 