Story incorrectly rejected due to AI

Joined: Dec 21, 2022 · Posts: 3
The auto generated message is mind-blowingly frustrating.

The story I wrote is all my own work, save for some spell checking and grammar checking.

So I resubmitted it with a note saying it's all my own work, and that the use of Australian vernacular is reasonable proof it's not generated by AI, because AI just doesn't speak like that.

Rejected again. The auto-generated response tells me to reword it, or to approach an editor. What use would that be when there's no way for me to know which parts of the text are tripping the AI sensor?

My other big problem with the auto-generated response is that there's nothing in it explaining what to do next. Do I contact someone and say I think there's been a mistake? If so, who? And why isn't that listed in any of the rejection emails?

I know the surge of AI-generated content is a problem, but there's got to be a better way to deal with it than making authors feel like they're in an impossible battle against a faulty AI detection bot.
 
It must be pretty bad. I'd hate to think how many stories are being rejected, if you assume only a minority of writers will drop past here to complain.

Vince, same offer as Joy. Flick me a PM (envelope, top right) and I'll send you my email.
 
You're not alone. It is an ongoing issue on here.

This is what I got today with my third rejection.

  • This has been checked multiple times with multiple systems and is coming up as a high percentage AI-generated. If you are using Grammarly, ProWritingAid or a similar software, you should know that many modern writing packages incorporate AI. Using a grammar check program sparingly (as a spellcheck, to fix punctuation, review grammar, and/or occasionally as a thesaurus) should be fine. But if you are allowing a grammar check program to “rewrite” your words, that may cross the line into AI generated text/stories. Please see this FAQ for more information: https://literotica.com/faq/publishing/publishing-ai

If they're using AI detection tools, those are junk and can label anything as AI. AI only reacts based on whatever information it is fed, and that information comes from humans. There is a real possibility that an AI out there has been fed dozens of stories by established authors, and now we are the ones being accused of using AI, when the programs were meant to mimic us in the first place.

Another thing that bothers me in this statement is that whoever wrote it can't decide what AI is. First it claims that Grammarly and similar software incorporate AI, but in the next sentence it says using them as a spellcheck and an occasional thesaurus is fine. Google Chrome has a spellcheck that suggests words when you misspell something. I guess that's AI too?

This is completely ridiculous and an overreaction to a moral panic. My last story was rejected three times, and after I got the above message, I'm not resubmitting. I am not going back and re-editing my story again because some AI detection tools flagged me. I wonder, if you ran older stories through those same tools, what would the percentage come back as? In another thread on here, someone ran a historical US government document through one of these tools, and it reported the writing of James Madison as 87% AI-generated. That just goes to show these tools are junk, and that the AI relies on the human writing it was fed in the first place.

I hope this is resolved in the future, because it's just going to make authors leave this site in frustration. Many of us have been here for several years, and we love Literotica as a platform to post stories and express ourselves in the hobby of erotica writing.
 
This does seem to be a serious problem. I think the site should drop back and rethink its rejection of stories on this seemingly very leaky judgment. I've suggested just refiling with a statement that AI wasn't used, but the objections posted here indicate that's not working.
 
Another thing that bothers me in this statement is that whoever wrote it can't decide what AI is. First it claims that Grammarly and similar software incorporate AI, but in the next sentence it says using them as a spellcheck and an occasional thesaurus is fine. Google Chrome has a spellcheck that suggests words when you misspell something. I guess that's AI too?

Grammarly does multiple things. It can identify spelling and grammar errors, but it can also suggest rephrases with the intention of improving style. Using it for the former is within Literotica's rules, using it for the latter is not.

How to enforce those rules without collateral damage, that's another question.

This is completely ridiculous and an overreaction to a moral panic. My last story was rejected three times, and after I got the above message, I'm not resubmitting. I am not going back and re-editing my story again because some AI detection tools flagged me. I wonder, if you ran older stories through those same tools, what would the percentage come back as?

This is something I'd like to know too.

I'm not certain, but my impression is that AI rejections are a hybrid process - some use of automated tools to flag possible AI-written stories, with those then being reviewed by Laurel, the story moderator. In that case, the accuracy of the tool alone wouldn't tell the whole story. But it'd still be good to have some more info about it - what % of stories get flagged by the automated process, what % of those flags are upheld after manual review, and what % of old stories get flagged as AI.

I understand not wanting to get too specific about exactly which tools are used, but I can't see why those kinds of numbers should be a problem.
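To show why those percentages matter, here's a toy base-rate calculation in Python (all numbers invented for illustration; these are not Literotica's actual figures): even a detector that catches most AI stories can still produce mostly false positives when genuinely AI-written submissions are a small minority.

```python
# Toy base-rate illustration with made-up numbers (not real site statistics).
def flag_breakdown(total, ai_fraction, sensitivity, false_positive_rate):
    """Split a detector's flags into correct and incorrect ones."""
    ai_stories = round(total * ai_fraction)
    human_stories = total - ai_stories
    true_flags = round(ai_stories * sensitivity)              # AI stories caught
    false_flags = round(human_stories * false_positive_rate)  # humans wrongly flagged
    return true_flags, false_flags

# Suppose 5% of 10,000 submissions are AI-written, the detector catches 90%
# of those, and it wrongly flags 10% of human-written stories.
true_flags, false_flags = flag_breakdown(10_000, 0.05, 0.90, 0.10)
print(true_flags, false_flags)                   # 450 950
print(false_flags / (true_flags + false_flags))  # ~0.68
```

Under these assumed numbers, roughly two out of three flagged stories would be human-written, which is exactly why the uphold rate after manual review is the interesting statistic.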
 
Grammarly does multiple things. It can identify spelling and grammar errors, but it can also suggest rephrases with the intention of improving style. Using it for the former is within Literotica's rules, using it for the latter is not.

How to enforce those rules without collateral damage, that's another question.

That is not the same thing as generating text from a prompt, which is what AI does. Grammarly was around long before the AI boom, and sentence suggestions are not the same thing as generated text. Many of us authors were using Grammarly for years before this, and it was never a problem until now.

Google Chrome and Gmail can suggest rephrases with their built-in grammar and spell check tools, which existed long before AI came around.


I'm not certain, but my impression is that AI rejections are a hybrid process - some use of automated tools to flag possible AI-written stories, with those then being reviewed by Laurel, the story moderator. In that case, the accuracy of the tool alone wouldn't tell the whole story. But it'd still be good to have some more info about it - what % of stories get flagged by the automated process, what % of those flags are upheld after manual review, and what % of old stories get flagged as AI.

I posted my rejection statement above, which said my story ran through "multiple systems". AI detection software is not reliable at all, since AI has to rely on information fed to it by humans. There is another thread on here where someone ran US historical writing through an AI detection system, and it reported writing by James Madison as 87% AI-generated. That's a clear sign the tools don't work, and that the AI is relying on established human writing.

I have published over 200 stories here. I spoke with another author on here days ago, someone who published a big catalog over a decade ago, and we agreed it's possible that AI is being fed the older works of established authors to build its generated writing styles. After all, the AI has to rely on the human work it is mimicking. Sadly, we're caught in the crossfire, blamed for AI when it was probably trained on our own writing to begin with.
 
That is not the same thing as generating text from a prompt, which is what AI does. Grammarly was around long before the AI boom, and sentence suggestions are not the same thing as generated text. Many of us authors were using Grammarly for years before this, and it was never a problem until now.

Google Chrome and Gmail can suggest rephrases with their built-in grammar and spell check tools, which existed long before AI came around.




I posted my rejection statement above, which said my story ran through "multiple systems". AI detection software is not reliable at all, since AI has to rely on information fed to it by humans. There is another thread on here where someone ran US historical writing through an AI detection system, and it reported writing by James Madison as 87% AI-generated. That's a clear sign the tools don't work, and that the AI is relying on established human writing.

I have published over 200 stories here. I spoke with another author on here days ago, someone who published a big catalog over a decade ago, and we agreed it's possible that AI is being fed the older works of established authors to build its generated writing styles. After all, the AI has to rely on the human work it is mimicking. Sadly, we're caught in the crossfire, blamed for AI when it was probably trained on our own writing to begin with.
God help the AI that has my portfolio in its training set…

[image: Buffy the Vampire Slayer, "Normal Again"]


I must write a Buffy fanfic called Abnormal Again.

Em
 
I'm astonished by the recent spate of AI rejections considering the abysmal quality of every AI story I've ever read. Most of them don't have a single line of dialogue. How bad are their AI detection tools?

Horrendous would be too kind a word to describe current AI detection tools. My job is in instructional design/ed tech, and part of what I've been doing for nearly a year now is helping test AI detection tools while simultaneously developing various approaches to proctored writing sample sessions for students applying to our grad school. There is not a single AI detection tool currently on the market that has been able to hit 25% accuracy with moderate to large submission totals. They're simply unreliable on any meaningful level and the fact that they're being used here is shameful.

I'm sitting here waiting for a response to a resubmission that I made on 12/22 for a story I initially submitted on 12/8. It's a story that was written 7 years ago with minor edits made 5 years ago. It was posted here previously under a different username that I had. There's no situation where it should take anywhere close to a month for a story to be rejected, resubmitted, and given another review. The submission system here is currently broken, and if it's Laurel and Manu that implemented using AI detection tools then it's their responsibility to both communicate honestly about what's happening and find a workable solution.
 
Horrendous would be too kind a word to describe current AI detection tools. My job is in instructional design/ed tech, and part of what I've been doing for nearly a year now is helping test AI detection tools while simultaneously developing various approaches to proctored writing sample sessions for students applying to our grad school. There is not a single AI detection tool currently on the market that has been able to hit 25% accuracy with moderate to large submission totals. They're simply unreliable on any meaningful level and the fact that they're being used here is shameful.

To quote another author I have been chatting with regarding this issue - What good does it do to check for AI generated content when the software is using the same grammar rules and sentence structure as people??? It's like accusing a mathematician of using a calculator even though they did it all by long form. The math looks the same because it has the same rules!

My story was rejected three times, and the last statement I got said it went through multiple systems and came back with a high percentage of AI. The only thing I can think of is that there is some kind of AI out there that has been fed hundreds of stories by prolific writers. Since I have over 200 stories, I could be one of them, so who knows.

I really hope this can be sorted out. I don't mean to be heavily critical of this site. I love Literotica and have been reading on here for so many years. It has been so much fun to post stories and work to build a giant catalog and connect with people. This site has been very good to me. When I had issues with a plagiarist stealing my work and publishing it on Amazon, it was nice people here who helped me get that sorted out, and the site owner has been very supportive of authors. I really hope this gets sorted out for all of us.
 
@Laurel & @Manu we need an AI Rejection forum.

Em
I saw an AI false-positive complaint in another forum area and offered a hand to read the rejected story. I offered some advice about everyday things, e.g., run-on sentences and simplistic subject-verb sentence structures needing some change. [Would an AI even generate run-on sentences?] I even copied some of the story and ran it through an AI detector. It said the portion I used had a 0% possibility of being AI-generated [limited 300-word free use on Scribrr? sp]. Then, below that, it noted a couple of sentences that might be triggering detection. They were plain old everyday sentences - simple - nothing that tickled my 'this sounds like AI' trigger. The story had some flowery terms - it was a romance theme. I perused another story she published that had a similar structure and context, and compared to the current story it seems to be her writing style. Nothing in the recent story smacked of non-human writing. The author said it was rejected twice.

I hope it gets accepted the third time. [I offered some suggestions - yeah, I did that. BTW, I am human, not an AI.] It seems sad that a writer stating it is her own work on the resubmission gets the shaft because a Lit AI detector flags it.
 
That is not the same thing as generating text from a prompt, which is what AI does. Grammarly was around long before the AI boom, and sentence suggestions are not the same thing as generated text. Many of us authors were using Grammarly for years before this, and it was never a problem until now.

Yes, because the older versions of Grammarly didn't use generative AI. But if you look at www.grammarly.com today, you'll see them advertising "AI"-driven features.

(Scare quotes here because "AI" in this context is just a marketing buzzword for "machine learning" to make it sound newer and sexier, and in some cases to mislead people about the potential capabilities of these products.)

For instance, you were talking about generating text from a prompt? Grammarly is advertising its ability to do exactly that:

[screenshot: Grammarly advertising generating text from a prompt]

And here, using what's presumably generative AI to completely rephrase a sentence and change the style:
[screenshots: Grammarly rephrasing a sentence and changing its style]

It's that kind of use of Grammarly (or other products) that the site rules don't allow.

Google Chrome and Gmail can suggest rephrases with their built-in grammar and spell check tools, which existed long before AI came around.

They may not have been using "AI" when those features were first introduced, but Google seems to think they're using it now:

https://blog.google/products/gmail/gmail-ai-features/

Note the title: "6 Gmail AI features to help save you time".

The one you're probably seeing in Gmail at the moment (unless you've opted in to Help Me Write) is Smart Compose, #2 on that list of Gmail AI features. Here's a writeup of the technical details. It's not an identical approach to the generative methods that GPT uses, but it's the same issue: using "AI" technology to write words for you.

I posted my rejection statement above, which said my story ran through "multiple systems". AI detection software is not reliable at all, since AI has to rely on information fed to it by humans. There is another thread on here where someone ran US historical writing through an AI detection system, and it reported writing by James Madison as 87% AI-generated. That's a clear sign the tools don't work, and that the AI is relying on established human writing.

I share your skepticism about the accuracy of detection software. I'm not going to hunt down the post now, but I trialled several "AI detection" websites on a few samples of human- and GPT-written text a while back. All but one did very poorly; the exception got it right three times out of three, but then I got rate-locked, and three trials isn't enough for me to have confidence. (If I'd just picked eight different coins as my "AI detectors", I'd expect one of them to get 3/3 just out of luck.)
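The coin-flip comparison checks out arithmetically; here's the quick calculation (just probability arithmetic, no claims about any specific detector):

```python
# A fair coin "detector" guessing human/AI at random on three trials.
p_single = 0.5 ** 3                       # chance one coin goes 3/3
p_at_least_one = 1 - (1 - p_single) ** 8  # chance some coin out of eight does
print(p_single)                  # 0.125
print(round(p_at_least_one, 3))  # 0.656
print(8 * p_single)              # expected number of 3/3 coins: 1.0
```

So among eight random guessers, one perfect 3/3 score is exactly what you'd expect, and there's about a two-in-three chance of seeing at least one.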

The James Madison example may be a special case. Large language models trained on something like Common Crawl are likely to end up perfectly or near-perfectly memorising things like the Bible and the US Constitution that appear many times in that corpus. They're capable of regurgitating long passages from those texts verbatim, and any detection system that's based on the same LLMs will probably recognise it as something that an AI is likely to write (which it is) without adjusting for how many non-AI-written copies are in existence.

Not to say that this is the only reason AI detectors can produce false positives, just that for that specific example there might be reasons that don't apply to other, less famous works.
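The memorisation effect described above can be sketched with a toy model (a deliberately crude bigram scorer, not how any real detector works): when a passage is duplicated many times in the training corpus, a likelihood-based scorer rates it as far more "machine-predictable" than ordinary human prose.

```python
import math
from collections import Counter, defaultdict

def train_bigrams(words):
    """Count word-to-next-word transitions in a corpus."""
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def avg_surprisal(counts, words):
    """Average bits of surprise per bigram; lower = more predictable to the model."""
    total = 0.0
    pairs = list(zip(words, words[1:]))
    for a, b in pairs:
        seen = sum(counts[a].values())
        p = (counts[a][b] + 1) / (seen + 1000)  # crude add-one-style smoothing
        total += -math.log2(p)
    return total / len(pairs)

famous = "we the people of the united states in order to form a more perfect union".split()
novel = "she walked along the beach at dusk thinking of home".split()
# A corpus where the famous passage appears 200 times, like the US Constitution
# duplicated across a web crawl, plus one ordinary sentence.
model = train_bigrams(famous * 200 + novel)

# The memorised passage scores as far more predictable than the fresh sentence,
# so a naive likelihood-based detector would call the human-written classic "AI".
print(avg_surprisal(model, famous) < avg_surprisal(model, novel))  # True
```

Real detectors are far more sophisticated, but the underlying bias is the same: heavily duplicated human text looks exactly like something a model would produce.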
 
It's that kind of use of Grammarly (or other products) that the site rules don't allow.

OK then, but how do you prove someone used these specific features? That's the hard thing here. People have been using Grammarly for years for a basic spellcheck and to fix things.

I use Quillbot for proofreading. It catches all my grammar errors, punctuation mistakes and typos. My biggest flaw in writing, which I like to make fun of myself over, is a habit of mixing up 'was' and 'were', so I use something to help fix that in proofreading. Quillbot is SFW and tries to censor every bad word when you submit erotic writing to it. This SFW setup is part of why I prefer it over any other spellcheck: it makes me focus on every little error it highlights, since I know I can easily screw up the proofreading. You have to go through it and pick what you want to change. It's not an AI, and it can't write anything from a prompt.

AI generates text from a prompt. I never tell Quillbot to write me anything, and it never generates text on its own.


They may not have been using "AI" when those features were first introduced, but Google seems to think they're using it now:

https://blog.google/products/gmail/gmail-ai-features/

Note the title: "6 Gmail AI features to help save you time".

The one you're probably seeing in Gmail at the moment (unless you've opted in to Help Me Write) is Smart Compose, #2 on that list of Gmail AI features. Here's a writeup of the technical details. It's not an identical approach to the generative methods that GPT uses, but it's the same issue: using "AI" technology to write words for you.

The one I'm referring to is when it puts little red lines under misspelled words, or blue lines when certain wording is used improperly or a word is uncapitalized (such as 'I'), and you can right-click to fix it. This is the grammar check that has existed in other programs and on other browsers; I only used Chrome as an example. It's similar to the Microsoft Office Assistant, the cute little paperclip that offered writing suggestions.

I share your skepticism about the accuracy of detection software. I'm not going to hunt down the post now, but I trialled several "AI detection" websites on a few samples of human- and GPT-written text a while back. All but one did very poorly; the exception got it right three times out of three, but then I got rate-locked, and three trials isn't enough for me to have confidence. (If I'd just picked eight different coins as my "AI detectors", I'd expect one of them to get 3/3 just out of luck.)

The James Madison example may be a special case. Large language models trained on something like Common Crawl are likely to end up perfectly or near-perfectly memorising things like the Bible and the US Constitution that appear many times in that corpus. They're capable of regurgitating long passages from those texts verbatim, and any detection system that's based on the same LLMs will probably recognise it as something that an AI is likely to write (which it is) without adjusting for how many non-AI-written copies are in existence.

Not to say that this is the only reason AI detectors can produce false positives, just that for that specific example there might be reasons that don't apply to other, less famous works.

ChatGPT cannot write full-length, proper stories, and anyone who thinks it can has clearly never sat down and messed with it. I've been reading this forum for two months, since my first story rejection and accusation of using AI. Since then, I have played with ChatGPT to see what I could get it to do. It's a language model that predicts text based on the information you as a user feed to it. There is no way someone could write a full story with it that would be coherent and have the touch of small details that humans write. I suggest messing with it, because the stuff it spits out as "stories" is hilariously bad. I had it write me a silly story in the voice of Duke Nukem where every other line used his stock one-liner catchphrases, and the final paragraph always had some happy ending.

I 100% believe that these AI detection systems are junk, and that if you ran older stories through them, they would come back with a high percentage rating of being AI.

My reason for believing this is that AI reacts based on the information humans feed it. It's fed human writing, which it then uses to generate sentences and phrases. This is the big thing people don't understand about AI: the detection tools are looking at the same rules of grammar and sentence construction, which is why they'll give already-published work a high percentage score of being AI.

A site like this runs the risk of losing several authors to these false positives. We don't deserve to be held to some standard based on faulty detection tools, and it's why there have been so many posts on this forum from upset authors over the past few months.
 
There is a possibility that there is an AI out there that has been fed dozens of stories by established authors, and now we are the ones suffering from being accused of using AI, when the programs are meant to mimic us in the first place.
It is not a possibility, it is a fact. AIs are trained on whatever large datasets their creators can get their hands on, and preferably the more varied the better. Then they are further trained on more specific niches, in some cases including adult literature.
Let's face it: sex sells. When there is money in something, people will do it. The same is true of the adult applications of AI.

There are AI chat bots out there specifically trained for erotic role play. There are also large language models out there specifically trained to write smut or at least trained with that in mind as well.

I have tried both, and while their output quality is nowhere near as good as what a human would produce, it is more than enough for someone to fap to. The main weakness of LLMs is the limited amount of context they can remember, so the longer the story or chat gets, the more likely it is to introduce inconsistencies, because the model simply does not remember what happened earlier in the text.
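That fixed-context behaviour can be illustrated with a tiny sketch (word counts invented for illustration; real models measure context in tokens, and window sizes vary by model):

```python
def visible_context(story_words, window=50):
    """A model with a fixed context window only 'sees' the most recent words."""
    return story_words[-window:]

# A detail established early in a long story...
story = ["Anna", "has", "green", "eyes"] + ["filler"] * 100 + ["her", "eyes", "were"]
ctx = visible_context(story, window=50)
print("green" in ctx)  # False: the early detail has fallen out of the window,
                       # so the model may now contradict it mid-story.
```

The longer the story grows past the window, the more established details silently drop out of what the model can condition on.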

For short stories under 2,000 words, however, they are remarkably consistent, and with the right prompting, quite well written. It's much like AI-generated images: you can see good ones out there, but if you tried to make one yourself, you would quickly realize how difficult it is to get the AI to actually produce a good one. There is a reason the concept of AI prompt engineering is rising in prominence, maybe even turning into an outright profession.

Anyhow, what we see today is just the very tip of the AI iceberg. Within a decade, you will have AI-generated sex videos and probably even AI-driven sex dolls or virtual companions. Why? Because they can be made and there is demand, so they'll sell. Just think of the loneliness epidemic wracking much of Western society. It would be crazy not to try to make AI companions to fill the growing need for emotional companionship, and despite what you might think, the AI will be there in 10 years, maybe even sooner. A lot sooner.

What dark and dystopian future this projects for us, I don't know. I am both excited and also horribly terrified to find out.

I have published over 200 stories here. I spoke with another author on here days ago, someone who published a big catalog over a decade ago, and we agreed it's possible that AI is being fed the older works of established authors to build its generated writing styles. After all, the AI has to rely on the human work it is mimicking. Sadly, we're caught in the crossfire, blamed for AI when it was probably trained on our own writing to begin with.
Not just possible, but highly probable. I know of at least two separate scrapes of Literotica floating around on the internet, one containing a snapshot from 2017, the other, more recent, from early 2022. That's around 300k stories in text form, which is an LLM trainer's wet dream. A similar scrape of ASSTR is also floating around, dated around 2017, and that's another half a million stories.

It's quite ironic that Literotica's success as a huge repository of high-quality erotic stories, and its freely available nature, might have made it one of the (probably main) sources for training AI models. Its more or less uniform site layout makes stories easy to scrape in a way that doesn't require much post-processing afterwards. Fabulous, when you look at it from a technical standpoint. Horrific, when you consider the moral implications of how authors might feel, having their works used as training material without their consent.

Unfortunately, such is the reality of the internet and the world we live in. Copyright (like any other right or law) is only as effective as your ability to make people adhere to it. Definitely more than 50% of people would steal if the opportunity presented itself and they could be sure of getting away with it without any consequences, then or later in their lives.

God help the AI that has my portfolio in its training set…
I am fairly certain that ship has sailed. :)

Think about it: you could be the final straw that breaks the camel's back, turning an AI into a twisted, sex-crazed, all-domineering force that makes all of us its sex slaves. Heh... why does that sound like a weirdly interesting story idea?

Then again, it could also be your positivity that saves us all from ending up with a Skynet on our hands. Assuming, of course, you pour a little bit of your soul into all your stories.

ChatGPT cannot write full-length, proper stories...
ChatGPT is just the big, fat, neutered eunuch with the loud publicist, sitting on top of the big ivory tower. The AI scene is much more than ChatGPT. The other AIs out there might be much less advanced, but they are much more specialized and not nearly as limited in what they allow.

Also, what you get depends heavily on your prompting. I am not going to delve into a lecture on using AI, but suffice it to say, you can get the AI to give you even very fine details if you want. You just have to ask for them. That's the thing often misunderstood: creating something with the help of an AI is not a single press of a button. It is a long and involved process that requires you to constantly refine your queries and add new ones to get the details YOU want to see.

At what point does it become your work and not the AI's? I don't have the authority to say. Heck, I probably couldn't tell even if I wanted to. It's a murky area that boils down to an individual, highly subjective interpretation of each case, which is part of the reason we have this conundrum in the first place.

A site like this runs the risk of losing several authors to these false positives. We don't deserve to be held to some standard based on faulty detection tools, and it's why there have been so many posts on this forum from upset authors over the past few months.
This is a question of how you want to approach AI. Is it like COVID, where you want to get rid of even just the thought of it? Then false positives might be an undesired, but acceptable side effect.

I don't think having the occasional shit AI-written story up on the site is that big of an issue; people will simply vote it down and nobody will read it. Lit appears to have hundreds of thousands of stories. I've read hundreds of them over the years, probably skimmed through 1,000-1,500 in total, but even that would be a negligible fraction of its entire repository.

To think that people posting dubious quality AI work could significantly impact that body of text, well...

Then again, none of this depends on my opinion or decisions and the owners of the site made their stance quite clear.
 
Hate to be the voice of pessimism, but I don't think this practice will stop anytime soon. There are multiple aspects of Literotica that aren't working properly, falsely flagged stories being just one of many. It will only get worse, in my opinion. If we take into account the common assumption that Laurel handles everything alone, there is an obvious heavy overload on many fronts. Everything we do here suffers from heavy queues, from story submissions to reporting problems and technical issues.
The way I see it, the true problem with the website is its stubborn resistance to change, its unwillingness to adapt to the demands of the times. Also its unwillingness to communicate with its users, but that is a whole different side of the problem. What is needed is competition, plain and simple. Experience proves that healthy competition brings quality and adaptability to any business. Sadly, there isn't any in this case.
If you've had enough of Lit, where would you go with your stories? You could publish them on Amazon or Smashwords for money, but that is nowhere near the same. Switching to the only other big story site, AO3, works only if you write fandom-related stuff; for non-fandom stories, that website is the epitome of chaos. Other story websites that resemble Literotica's profile are far, far less visited and popular. That is why Literotica persists, with all its flaws and its obvious inefficiency. It is what it is, I am afraid. 🫤
 
save for some spell checking and grammar checking
I'd suggest keeping the former, and losing the latter, which is more likely the culprit.

This all reminds me of the frustration people used to have with their emails wrongly going into spam folders. Now, we all just make sure not to use all caps and six exclamation points in the subject.

I think everybody's going to need to learn to change their writing style sooner or later to avoid this kind of accusation, as the detection tools improve. Surely that's not so hard to do?
 
God help the AI that has my portfolio in its training set…

Why don't you take a wild guess as to what this guy used to train his NSFW story generator?

Hey guys,

This winter, I created a tool that can generate short AI NSFW stories: *******. Would love to hear your guy's thoughts and feedback on the site!

Here's a fun little story I cooked up as an example: https://www.*******.com/history?historyID=b1089628-f2d5-4298-bbe6-8228c89b34f2

What's to stop *******, or anyone else, from using the stories here to teach their models? I'd be interested to hear from @*******. Perhaps they could explain what material they actually used to train their model? If it was Literotica, or any other prominent website, did they obtain consent to do so?

Grammarly does multiple things. It can identify spelling and grammar errors, but it can also suggest rephrases with the intention of improving style. Using it for the former is within Literotica's rules; using it for the latter is not.

Huge respect to Bramble, but this may not be accurate. NoTalentHack freely admitted that they use ProWritingAid and Grammarly, amongst other tools, to change the content of their drafts, in order to improve their "style scores" from below 50% to above 75%. NTH is on record as saying that they don't believe there's anything wrong with that, and Literotica found no reason to take action against them, which I accept and which closes the matter for all concerned.

However, if what Bramble's saying is a cast-iron fact, then NTH would, clearly, be in breach of the new rules. That is undeniable.

I agree that Literotica has spectacularly botched its handling of this issue. On top of that, the website has a problem where different standards are being applied to different authors - a position that some people here are actually proud to adopt, like the person below who wants the "beloved and successful" to be treated differently from everyone else.

If Laurel actually goes ahead and treats a beloved and successful writer like NTH the same as is implied in this thread, and starts pulling all of his stories because some stupid AI checker tells her to... I think the outrage would be so big that we'd no longer have to worry about her ever doing it again.

This is also true in other ways, where authors receive preferential treatment in skipping the publishing queues and receiving same-day sweeps, when others who have been targeted by trolls have to wait over a month for the same.

Why should author A fall foul of the new rules, but those very same rules don't apply to author N(th)? All that creates is a two-tier system where fairness is at a premium.

There are multiple aspects of Literotica that aren't working properly, falsely flagged stories being just one of many. It will only get worse, in my opinion. If we take into account the common assumption that Laurel handles everything alone, there is an obvious heavy overload on many fronts. Everything we do here suffers from heavy queues, from story submissions to reporting problems and technical issues.

This is a brilliant summary of where we currently stand and it's only going to get worse.
 
OK then, but how do you prove someone used these specific features? That's the hard thing here. People have been using Grammarly for years for a basic spellcheck and to fix things.

That is indeed the million-dollar question, and I don't pretend to have any kind of answer to it.

The one I'm referring to is when it puts little red lines under your words as misspellings, or blue lines when certain wording is used improperly or a word is uncapitalized (such as "I"), and you can right-click to fix it. This kind of grammar check has existed in other programs, and is on other browsers. I only used Chrome as an example. It's similar to the Microsoft Office Assistant, with the cute little paperclip that offered writing suggestions.

I don't think that kind of use is an issue here, no. It's things like rephrasing, where it's generating words you didn't type (beyond a very simple "this word looks wrong, did you mean ...?" spell/grammar check) that the site is trying to discourage.

ChatGPT cannot write full-length, proper stories, and anyone who thinks it can has clearly never sat down and messed with it. I've been reading on this forum for 2 months since my first story rejection and accusation of using AI. Since then, I have played with ChatGPT to see what I could get it to do. It's a language-based model that predicts text based on the information that you as a user feed to it. There is no way someone could write a full story with it that would be coherent and have the touch of small details that humans write. I suggest messing with it, because the stuff it spits out as "stories" is hilariously bad. I had it write me a silly story in the voice of Duke Nukem where every other line used his stock one-liner catchphrases, and the final paragraph always had some happy ending.

Oh, I've been messing around with it for ages, and I agree that it doesn't have the capability to write full stories unassisted. There are later and somewhat better models than the one powering ChatGPT, but I doubt we're going to see GPT-like tools write a coherent 10k word story any time soon.

The issue at present is more hybrid authorship, where there's a human author doing some parts of the writing but using GPT to fill in a lot of small gaps, and where necessary hitting "regenerate response" a few times until they get something sensible.

I 100% believe that these AI detection systems are junk, and that if you ran older stories through them, they'd come back with a high percentage rating of being AI-generated.

My reason for believing this is that AI reacts based on what information humans feed it. It's human writing fed to these models, which they then use to generate constructed sentences and phrases. This is the big thing that people don't understand at all about AI. The detection tools are using the same rules to detect grammar and sentence construction, which is why they'll give a high percentage AI score on something that's already been published.

Yes and no.

Tools like GPT are not intended to "memorise" the specific texts they train on; what they're meant to do is learn patterns that recur in the texts they read. This graphic is from a different area of machine learning, but the basic principle that it's showing is relevant here:

[Attached graphic: scatter of noisy data points illustrating underfitting, a good fit, and overfitting]
Here, we've got a bunch of data points (the dots) showing some past observations of data values at given times. Eyeballing it, we can see that there's a pattern to the data - a sort of U-shaped relationship - but also that it's got a bit of noise: individual data values might lie above or below the "U".

In machine learning, the object is to try to separate out the pattern from the noise. We want to extract that "U" shape, because that might be useful in predicting future observations - we can get a rough answer to "what would the value be at time 20?" and so on. But we don't want to memorise the noise, because that's not useful for prediction.

Overfitting happens when the model essentially memorises all the data points it observes. It's great for telling you what you've already seen, but not for predicting some future observation you haven't already seen. Underfitting happens when it's not even learning the pattern. ML modelling aims to hit that "good fit" spot in between the extremes, where it's learning the general patterns, but not specific data points. (In the case of GPT, a "data point" is a single text.)
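To make the underfit/good-fit/overfit picture concrete, here's a small illustration (my own sketch, not from the original post) using NumPy: polynomials of three different degrees are fitted to noisy U-shaped data, and the error on the training points is compared with the error on held-out points the model never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy U-shaped data: a quadratic pattern plus random noise.
t = np.linspace(-1, 1, 30)
y = 25 * t**2 + rng.normal(scale=2.0, size=t.size)

# Interleave the points into a training set and a holdout set, so we
# can ask how well each model predicts observations it hasn't seen.
train, test = slice(0, None, 2), slice(1, None, 2)

def fit_errors(degree):
    """Fit a polynomial of `degree` to the training points and return
    (training MSE, holdout MSE)."""
    coeffs = np.polyfit(t[train], y[train], degree)
    pred = np.polyval(coeffs, t)
    return (float(np.mean((pred[train] - y[train]) ** 2)),
            float(np.mean((pred[test] - y[test]) ** 2)))

under_tr, under_te = fit_errors(1)   # straight line: misses the U entirely
good_tr, good_te = fit_errors(2)     # quadratic: learns the general pattern
over_tr, over_te = fit_errors(12)    # high degree: memorises the noise too
```

The pattern that comes out is the one described above: the straight line does badly on both sets (underfitting), the quadratic does reasonably on both (good fit), and the degree-12 polynomial fits the training points almost perfectly yet predicts the held-out points worse than the quadratic does (overfitting).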

If GPT focussed on memorising everything in its training data, it wouldn't be very useful for anything other than regurgitating the texts it'd already read. Where overfitting becomes more likely is when it sees the same text over and over; GPT has seen "in the beginning was the Word" so many times that it can recite the whole opening of John's Gospel correctly.

This isn't likely to be an issue for most stories on Literotica. I wouldn't expect a GPT-based "AI detector" to see much difference between an old Literotica story that had been used to train GPT, and a new story that hadn't, because it's not supposed to be memorising the stuff that's unique to that one story, only the things that show up across a bunch of stories or other texts - in which case, a new story could well have those things too.

But it sure would be good to have actual data on this.
 
I think everybody's going to need to learn to change their writing style sooner or later to avoid this kind of accusation, as the detection tools improve. Surely that's not so hard to do?
We change our writing, the AI gets trained on the new writing, the detector is changed to flag the new writing as AI, and we get blocked, so we change it again. Rinse and repeat.
 
Why don't you take a wild guess as to what this guy used to train his NSFW story generator?
I have a dim idea how LLMs work, but thank you for putting it in simple words that I can understand better. Very helpful.

Em
 
Huge respect to Bramble, but this may not be accurate. NoTalentHack freely admitted that they use ProWritingAid and Grammarly, amongst other tools, to change the content of their drafts, in order to improve their "style scores" from below 50% to above 75%. NTH is on record as saying that they don't believe there's anything wrong with that, and Literotica found no reason to take action against them, which I accept and which closes the matter for all concerned.

However, if what Bramble's saying is a cast-iron fact, then NTH would, clearly, be in breach of the new rules. That is undeniable.

I think you might have misread what NTH said about that. Unless there's something else I've missed, this is the relevant post:

I use ProWritingAid, tuned to the "Romance" setting with some custom choices in the advanced settings. It's similar to Grammarly, except that it only defaults to a sort of general business flow, but you have plenty of easy ways to customize it. I leave it on pretty much all the time, and it uses different colored underlines to complain as I do so: yellow for overly complex sentences or "sticky" words (lots of yellow in my stories), red for misspellings, blue for punctuation and grammar errors, purple for passive voice, etc. I think a lot of this is standardized; Google Docs uses some of the same scheme, and I think Word does, too.

It's a great tool, but it's like any other tool: I decide how to use it. The way that it typically rates my work, I rarely get out of the 70-80% zone for what it considers improvement. I like my sticky words, especially in dialogue. Passive voice IS the better option a lot of the time, especially in dialogue. And I usually write a lot of dialogue. So I leave it on all the time; it's good for making me aware of things as I go, so that I have to do less editing later.

When it's time to edit, I'll save a spare copy of the document, then fire up the "big" PWA tool, the one that evaluates the whole document at once. Their Google plugin (I write mostly in Google Docs) has a flaw where it doesn't handle page breaks well due to memory limitations. They have their own website, though, and for the "real" edit, I can easily transfer the doc there for a whole-body evaluation and see if I've done anything I didn't mean to: reuse of words, awkward phrasing that only becomes clear as I re-read it in full, etc.

So the short answer, I suppose, is that I use it in a limited form when I'm writing, then let it fully loose when I'm editing. But even then, I rarely don't make it "happy," just content.

AFAIK, what they're talking about there is using these tools to flag "bad" writing (as they see it), reviewing those flags and thinking about whether the criticism is valid, and if so, addressing that.

What I don't see there is any mention of using autogenerated text to address those flags. If all NTH is using them for is to identify passages that might want changing, but then using their own meatbrain to decide on what changes to make, then I don't see where that would be in violation of the rules.
 
The problem with flagging anything is this: it will only get worse.

Someone wrote that we should just learn to write in a way that does not look like AI. Well, it makes my heart cry out for you, who change yourselves on account of a technology not working as intended, but there is an even worse part to the idea: it will not work. It is futile. You are corrupting yourselves for nothing.

AI is still in its infancy and is growing at a pace that I have not seen for several decades in computing. What you see in capability today will be archaic, old news this time next year - both in terms of the techniques and technologies used, and in terms of the computing capacity put behind the whole thing.

AI will catch up with us in terms of style faster than most (even inside the IT field) could ever imagine. Now, will AI ever be able to mimic a human both in style and emotion? The rational, pragmatic thinker in me says yes, of course. From a purely functional perspective, artists creating unique art will become obsolete, as AI will be able to take whatever you imagine and put it into the art form you choose. Painting? Photo? 3D-printed wall decoration? Erotica? All of it. All you will have to do is give it a verbal description of what you want, answer its follow-up questions, then pick the preview you like the most. Answer a few more questions and pick from the revised set again. Do this until you've got exactly what you wanted.

This will inadvertently spark a movement of purism, where people will refuse the use of AI or the consumption of AI-related products, and it will create a niche of human authors who still stick to the old ways, doing everything by hand. A noble exercise, but let's make no mistake: at the end of the day, it will be something purely based on feeling and belief, with no rational thought behind it, and the only reason those works will feel novel is that they are not exactly what YOU want. But then, by that time, you will be able to ask the AI to come up with its own silly ideas as well. We as humans will become obsolete if AI goes anywhere, and from the looks of it, there is no stopping that train at the moment.

Kinda like the question of what a soul is, if we even have one.

The technology enthusiast in me is jumping excitedly at the prospect of what's to come. The armchair philosopher and thinker, on the other hand, is dreading what this will turn our already horrible species into.

One thing I know for sure. We are the rocks and this is the river. Alone, we will be washed away and eventually rounded down to conform to the flow. Together we might stem the flow for a few minutes, days or even weeks. However, no matter what we do, the river will eventually win. It will find its way around us and flow freely, eventually filling the entire basin we live in.
 
I think you might have misread what NTH said about that. Unless there's something else I've missed, this is the relevant post:



AFAIK, what they're talking about there is using these tools to flag "bad" writing (as they see it), reviewing those flags and thinking about whether the criticism is valid, and if so, addressing that.

What I don't see there is any mention of using autogenerated text to address those flags. If all NTH is using them for is to identify passages that might want changing, but then using their own meatbrain to decide on what changes to make, then I don't see where that would be in violation of the rules.

I'm satisfied that, since Literotica took no action against NTH, the matter can be considered closed as far as any breach of the website's rules is concerned. Literotica cleared NTH of any wrongdoing and I accept that, regardless of whether I agree with the decision.

To be fair to NTH, they were transparent enough to provide a series of lengthy posts describing their workflow, with screenshots of different software tools, prose and data visualisations, which they didn't have to do. While they later clarified that they don't accept every single recommendation the AI makes to alter the text, they accept as many as they see fit. In short, the software was used to make changes to the text that were not conceived by the author.

To meet the standard of content written for humans, by humans, it's reasonable to expect the author to have written 100% of the published manuscript. Not 80%. Not 90%. 100%.

All authors should be subject to the same rules. It's irrefutable that there are authors here who have had work rejected from the publishing queue, or had published work sent back, who don't rely on as much software as NTH does. If I were in that cohort, I'd be furious, because it's a clear double standard.

Part of the skillset of being a writer is to read an early draft and detect the parts which need to be polished, tweaked or entirely rewritten. By subscribing to a raft of software solutions, you're asking the tools to identify those sections instead of doing it yourself. Each time you let the software change your work, whether it's a single word, sentence or paragraph, you're learning nothing. Your skills as a writer aren't improving. It's lazy, and it goes against the very spirit of what it means to sit in front of a simple word processor and hone your craft, which takes time.

The idea of spending hours upon hours drafting a story, or a novel, then letting a software tool make a variety of changes to it is embarrassing to think about. Letting the software tool seduce you into making those changes because your "style score" will jump from 55% to 75% is too embarrassing to even consider.

"Embarrassing" is how I look at it, too. I'd feel a real sense of shame if I tried to pass off AI as my own work. I suppose I can appreciate that not everyone feels that way, but I think it's a firm line, and you either "get it" or you don't.

Those who need to run their draft through multiple software tools, to even identify their mistakes in the first place, have an incredibly low level of skill. I repeat, they don't have the ability to even review their draft and identify what changes need to be made. If you can't do that, you're not a writer.

Having to generate a data visualisation of words you've overused, due to a low vocabulary and laziness, is embarrassing. Relying on the software to inform you of your "style score", then making wholesale changes, prompted exclusively by the AI, to increase that style score, is embarrassing. Those people can't write. They don't know what the fuck they're doing.

I'd be gutted if I discovered my favourite Literotica authors had ProWritingAid installed on their machine, and that they couldn't write a message on a birthday card without it.

This will inadvertently spark a movement of purism, where people will refuse the use of AI or the consumption of AI related products and will create a niche of human authors who still stick to the old ways, doing everything by hand. A noble exercise, but let's make no mistake. At the end of the day, it will be something purely based on feeling and belief, with no rational thought behind it.

This is an excellent point. Purism has a place in this discussion, but it's more relevant to discussions of a writer's talent. While I'm a huge advocate of the practical applications of AI in the workplace, healthcare and much else, it's ludicrous to use that as justification for its inclusion in the creative arts.

When I come here to read stories, I don't want to have to sift through 200 frauds just to find someone with genuine talent. That's where we are right now.

In the meantime, I'm going to spend my afternoon playing some games on Chess.com. What I won't be doing is having Stockfish open on another tab so I can dumpster everyone with maximum efficiency.
 