Are Stories Being Pulled for Using AI?

jaF0 · Watcher · Joined: Dec 31, 2009 · Posts: 36,091
Seems it may be so. A couple of threads have popped up where authors say the rejection notices quote a new policy:

https://literotica.com/faq/publishing/publishing-ai
Literotica does not currently have an official comprehensive policy on Artificial Intelligence (AI). We are waiting to see how this technology develops, and getting feedback from our community, before creating any comprehensive AI policy. However, we have put together the following points on where AI fits into Literotica at this time.
  1. Literotica is a storytelling community based around the publishing of human adult fantasies. While AI tools - spellcheck, grammar tools, autocomplete, etc. - have long been used to help Literotica authors write their stories, the fantasies themselves come from the creative efforts, experiences, and fantasies of the real people who make up the Literotica community. As writing tools continue to evolve, we do not foresee a future where machine fantasies will replace the real experiences and creative efforts of Literotica’s community of human authors.
  2. Literotica explicitly does NOT grant any person or entity (commercial, non-profit, or other) the legal right to train AI on any works published on Literotica. Each work published on Literotica is copyrighted by the author. Before using any work on Literotica for any purpose (including training AI or any other AI-related use) you are required by law to contact the author to request permission to use that work. Using works on Literotica for training AI without legal authorization may subject you and your AI (and any work generated by your AI) to future lawsuits from the original author(s), Literotica, or both.
  3. We are monitoring the various ethical concerns around AI tools (some of which we have been contacted about directly from members of the Literotica Community). We plan to continue closely watching the development of AI, along with the development of public policies around AI, before creating our own official policies.
  4. Literotica’s Publishing Guidelines are clear - you must certify that you are the author of AND you own the copyright to any work published on Literotica. While simple AI tools (spelling and grammar tools, for example) do not usually interfere with an author’s copyright, there are unanswered questions around copyright when using some of the latest AI technologies that generate large blocks of text. If there are any questions about copyright related to any work you’ve used AI tools to help you create, we ask that you research and be 100% sure you own the full rights to the work before attempting to publish the work on Literotica. If you publish a work on Literotica to which you do not fully own the copyright, it may open you up to future legal repercussions.
  5. Literotica’s own use of AI is currently limited to improving the way we recommend related works to readers.
If you have questions, suggestions, or comments about Artificial Intelligence and how it might impact the future of writing, we recommend that you visit the Literotica Author Support Forum to discuss the issue with other published Literotica Authors.


The threads:

https://forum.literotica.com/thread...ere-moved-from-published-to-rejected.1593333/

https://forum.literotica.com/thread...-with-an-accusation-of-using-chatgpt.1593293/
 
What I wonder is, how can we tell if a story has used AI or not?
Depends how heavily the author has used it. If they've just used it to get a few story ideas, or to fill in a sentence here and there, nobody's going to be able to detect that. But if they're writing whole stories where AI is doing most of the work, there are tell-tales that you learn to recognise.

Some of the ones I look for:
- Generic, padded wording (see the toy sketch after this list). If I ask GPT a specific question, it will often tell me a bunch of stuff that's vaguely related to the topic but which I didn't actually ask for.
- Clichés. If I feed in prompts like "tell me a story about a person who ..." it loves to tell me how, through this experience, they learned XX skills and grew personally (but in a very superficial sort of way).
- Internal inconsistency or subtle nonsense. A while back we had an example of an exhibitionist story where people had to answer the door naked, and it reached a point where the pizza guy was arriving over and over and somebody had to answer the door naked every time. It read like a mash-up of "people keep arriving at the party, and somebody greets them naked every time" and "greeting the pizza guy while naked", without realising that it made no sense for the pizza guy to keep arriving.
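If you wanted to mechanise the first of those checks, it wouldn't take much. Here's a toy sketch in Python; the phrase list is my own guess at stock padding, not anything a real detector actually uses:

```python
# Toy heuristic: stock "AI-ish" phrases per 1,000 words.
# The phrase list is a personal guess, not any real detector's vocabulary.
STOCK_PHRASES = [
    "it is important to note",
    "in conclusion",
    "a testament to",
    "a whirlwind of emotions",
    "learned valuable lessons",
    "grew as a person",
]

def padding_score(text: str) -> float:
    """Stock-phrase hits per 1,000 words; higher suggests generic padding."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in STOCK_PHRASES)
    words = len(text.split())
    return 1000.0 * hits / max(words, 1)

print(padding_score("In conclusion, it is important to note that she grew as a person."))
```

It would flag plenty of innocent human writing too, of course, which is rather the point of this whole thread.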
 

Hmm, that would require someone to actually read the stories and not use the bot that everyone seems to assume Laurel and Manu employ. Unless it's designed to look for that too?
 
My understanding was that Laurel skim-reads all submissions (very quickly), but I'm not sure that would be enough. I agree with @RejectReality that automated detection methods are unlikely to be fit for purpose; almost any metric a detector could use to identify AI-written content can also be used to train the AI to avoid that detector.
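To make that concrete, here's a toy sketch (Python; the "detector" is an invented one-liner, not any real product's method) of why a published score is just an optimisation target:

```python
# Toy illustration of the point above: once a detector exposes a score,
# a generator can simply search against it. The "detector" here is an
# invented one-liner (share of very common words), not any real method.
COMMON = {"the", "a", "and", "of", "to", "was", "her", "his", "in", "it"}

def detector_score(text: str) -> float:
    """Pretend 'AI-likeness' score. Pure stand-in for any real metric."""
    words = text.lower().split()
    return sum(w in COMMON for w in words) / max(len(words), 1)

def evade(candidate_rewrites: list[str]) -> str:
    """Pick whichever rewrite the detector dislikes least.
    Any computable score can be gamed this way."""
    return min(candidate_rewrites, key=detector_score)

drafts = [
    "It was the best of times, it was the worst of times.",
    "Best of times; worst of times.",
]
print(evade(drafts))  # the generator keeps the lowest-scoring draft
```

Swap in any smarter metric and the same search still works; the generator only needs the score, not the detector's internals.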
 
I'm probably going to sound like a conspiracy nut to say this without providing evidence, but there is a mathematical precision that underpins AI writing that should be readily detectable to other AIs, in kind of the same way dogs can easily recognize each other by sniffing their piss. The first AI can be asked to change the shape or volume of the pee, but the way it does that is algorithmic or procedural, and that leaves its mark. Some of the 'architectural' features mentioned above are easily recognizable by humans as well, like how the sentences will be fine but add up to a kind of fractal nonsense if you step back.
Yes, any automated detection system will return false positives. I believe the highest rates occur with writers whose first language is not English, who may be more likely to follow a language template of sorts, similar to the way an AI attempts its mimicry. Or else they've flat out resorted to translation software, which I'm not suggesting is a bad thing, just that it may also betray digital fingerprints, in a sense.
But there's a reasonably good chance that it will only take a few years before AIs have picked up enough of our bad habits and laziness to invisibly mimic native speech and thought.
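For what it's worth, the "mathematical precision" idea roughly matches what some public checkers describe as low burstiness: machine text, and very formulaic human text, tends toward unusually uniform sentence shapes, which is exactly where the false positives come from. A crude, runnable illustration; the metric and examples are invented purely to show the idea:

```python
# Crude "burstiness" proxy: variance of sentence lengths. Uniform
# sentences score low; the metric and examples are invented here
# purely to illustrate the idea, not lifted from any actual checker.
import re
from statistics import pvariance

def burstiness(text: str) -> float:
    """Population variance of sentence lengths (in words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pvariance(lengths) if len(lengths) > 1 else 0.0

template_like = "She went home. She made tea. She read a book. She slept well."
varied = "She went home. After a long, pointless argument with the cat, tea. Sleep came late."
print(burstiness(template_like))  # low: every sentence the same shape
print(burstiness(varied))         # higher: more human-ish rhythm
```

A variance number like that obviously can't tell a careful second-language writer following a template from a machine, hence the false positives.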
 
But there's a reasonably good chance that it will only take a few years before AIs have picked up enough of our bad habits and laziness to invisibly mimic native speech and thought.
Probably true. And if you write cookie-cutter smut or fuck-by-numbers, AI might become interchangeable with your content. But good luck keeping up with some of the perverts around here, AI ;).
 
Internal inconsistency or subtle nonsense. A while back we had an example of an exhibitionist story where people had to answer the door naked, and it reached a point where the pizza guy was arriving over and over and somebody had to answer the door naked every time. It read like a mash-up of "people keep arriving at the party, and somebody greets them naked every time" and "greeting the pizza guy while naked", without realising that it made no sense for the pizza guy to keep arriving.
Sounds like a dream. Could be done well, actually, by a human...
 
As I said in one of the threads linked, I really hope they aren't using the tools coming out right now that claim to be able to detect AI, because they stink on ice. The false positive rates are far higher than they claim.
This!
I'm in a group where, a few months back, a woman posted that her university assignment had been returned with a 0 score because the uni used a plagiarism detector which had indicated the text may have been generated by AI. She was adamant she'd never used AI and didn't have an account for ChatGPT or anything.

She disputed it and had a meeting with the Dean of the department, where she showed him all her drafts. Fortunately she's one of those writers who makes notes, outlines, and multiple drafts, and saves them under new names, e.g. Draft 1, Draft 2, Draft 12, Final, Final.2, Actual Final, etc. She also had early printouts which she'd printed to do hand edits because "you always miss stuff on screen." Suffice it to say, she had real evidence she'd written it.

The outcome was that she got the 0 score overturned and received a proper mark for it, but the attitude of the dean was still a bit "you got out of it this time, don't let it happen again." :confused: She said she was going to look into using a screen recorder or something going forward when writing assignments.
 
I told Laurel that my story had AI content when I submitted it and it went through.

I used ChatGPT to come up with the chant in Tribal Wedding Ceremony.

Although not perfectly in line with the LitE guidance at the top of this thread, your case does (and to remind, IANAL) seem to fit with what the US Copyright Office has posted about works that contain some amount of AI-generated material when it comes to copyright registration. You identified it, and it was a limited amount of the total creative work, so although the LitE guidance says the 'total copyright,' there is some flexibility so long as the majority (or nearly all) of the work is "human-authored."

In the case of copyright, the registration of your copyright wouldn't protect the AI-generated portion(s).
https://www.federalregister.gov/doc...material-generated-by-artificial-intelligence

For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.[30] When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship.[31] As a result, that material is not protected by copyright and must be disclaimed in a registration application.
and
For example, an applicant who incorporates AI-generated text into a larger textual work should claim the portions of the textual work that is human-authored
 
Just ran excerpts of around 8-9k characters of my stuff through some of these. All were pronounced "human written," but all my work that isn't in a fantasy setting (including unpublished WIPs) produced a 5-15% "likely AI generated" rating. Third-person narratives generated higher percentages than first-person. Oddly enough, every fantasy-setting story I ran through produced 0%. A find/replace of fantasy names, like changing Darkniciad to David, didn't change the 0% rating.

Dropped a couple more. First person, 5%. Third person, 13%. Fantasy, 0%.

The fact that they were all declared human written doesn't give me any comfort. Obvious, repeatable patterns emerging across 15 blocks of text indicate bias in the algorithm.

These things are cash grabs and unreliable.
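For anyone who wants to repeat the experiment, the shape of it is just: score labelled excerpts, average per group, and look for systematic gaps. A Python skeleton; score_with_detector is a stand-in for pasting text into one of these web checkers, since I don't know of a public API for any of them:

```python
# Skeleton of the bias check described above: score labelled excerpts,
# then average per label. score_with_detector is a placeholder for the
# manual step of pasting text into a web checker; no public API assumed.
from collections import defaultdict
from statistics import mean

def score_with_detector(text: str) -> float:
    raise NotImplementedError("stand-in for a manual copy/paste step")

def bias_report(samples: list[tuple[str, str]]) -> dict[str, float]:
    """samples is a list of (label, excerpt); returns mean score per label."""
    by_label: dict[str, list[float]] = defaultdict(list)
    for label, text in samples:
        by_label[label].append(score_with_detector(text))
    return {label: mean(scores) for label, scores in by_label.items()}

# e.g. bias_report([("first-person", txt1), ("third-person", txt2),
#                   ("fantasy", txt3), ...]) and then compare the averages
```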
 
I'm probably going to sound like a conspiracy nut to say this without providing evidence, but there is a mathematical precision that underpins AI writing that should be readily detectable to other AIs, in kind of the same way dogs can easily recognize each other by sniffing their piss. The first AI can be asked to change the shape or volume of the pee, but the way it does that is algorithmic or procedural, and that leaves its mark. Some of the 'architectural' features mentioned above are easily recognizable by humans as well, like how the sentences will be fine but add up to a kind of fractal nonsense if you step back.
There's a great description of this very thing in Neal Stephenson's Anathem, in which the protagonist refers to these systems as "Artificial Inanity" - which is such a great term that I've been using it wherever possible.
 
Doesn't running anything through an AI automatically add that material to its database, letting it use that material to improve anything it's asked to generate thereafter?

We've been issued clear guidance at work by our Rabid Attack Lawyers that anyone who submits proprietary code or documents to OpenAI (or related systems) will be cavity searched by the barbed cock of the Legal Department for this very reason.
 
Ironic, given that one of the first guys busted for bullshit AI use was a lawyer, who used it to prepare case citations, several of which turned out to be AI fabrications.
 
My AI expert team consists of Skynet, Nemesis, GLaDOS, and Ultron. Or it used to, before I realized they were all the worst examples of AI and not worthy references. :(
 
Doesn't running anything through an AI automatically add that material to its database, letting it use that material to improve anything it's asked to generate thereafter?
That's true of some checkers, especially those that are looking for actual strings, like the plagiarism tests. Others are looking for structural details that suggest mechanical construction. The latter definitely fall more toward plain old bots than true AI, and they often flag people who write in a second language or who are very formulaic as non-humans.
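The string-matching side is easy to picture: classic plagiarism checkers compare shared word n-grams against everything they've stored, which is exactly why they need to keep what you submit. A minimal sketch, with the 5-word window chosen arbitrarily:

```python
# Minimal sketch of the string-matching side: shared word n-grams
# between a submission and a stored document. The 5-word window is
# an arbitrary choice for illustration.
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission: str, stored_doc: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that appear in the stored doc."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(stored_doc, n)) / max(len(sub), 1)
```

A structural detector, by contrast, only needs summary statistics (sentence rhythm, word-frequency profiles), so it has no technical need to retain your text afterwards. Whether a given service keeps it anyway is another question.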
 