AI Allegations Thread

hey if my story gets posted and people like it maybe I will write a sequel for the challenge lol
The bad news is that apparently it's taken some writers months to get their stories posted. The good news: you might get lucky and have your story entered in the contest anyway!
 
wait what is this about months? Is that for stories rejected for ai? I know about that issue but if mine gets rejected for that I ain't trying for months to get it accepted! I'm actually hoping my 900 word gem sails on through with no problem. That would be like the second best gift I could get from Santa lol. The best gift would be if my favorite author here who has gone quiet starts posting again so I can read a new chapter of their story. I read over all the threads and don't see any comments from them here so hopefully they are not having this problem and are just busy
Don't worry, I was exaggerating. Mostly. There have been a couple of writers who really struggled to get their stories through, but mostly it's a matter of days.
 
thanks for this. I'm trying to ignore this dude since his buddy called me abusive lol. Also I have never used the chat bots and don't want to but the examples nth shared were pretty interesting. Some people here say the ai writes crappy but they are obviously advancing and there's no reliable detector to weed out what is ai and what is human. This guy and ddx or whoever are obsessed with anyone using ai so I just want to say again I DON'T. So don't report my story like a fucking hall monitor dude lolol

I'm amused by how far off you are. No need to duck me. I've got a point of view, but I'm also curious about how people are rationalizing this to themselves. I hear similar justifications at work, and they intrigue me there too.

My question was genuine, but it was designed to make a point: you can (as you know, and I'm not saying you don't) do a better job than AI, and yet people are trying to probe its capabilities to make it "work better." I was pointing out that that's exactly what Laurel is worried about, and it's likely to be the reason she's taken such a hard line.

One of the things that surprises me more than a little bit is that the pro-AI side don't seem to have any qualms about discussing all this on a public thread. I would think that there'd have been a "fight club" mentality developing by now, lol.
 
There's no point in a fight club mentality. I haven't done anything wrong, in my opinion. If something I write gets bounced by the filter, I'll resubmit it; if it gets bounced again, I'll go write elsewhere. I have no idea what will trigger the detector, and I doubt, honestly, anyone does, all the way up to the folks selling it to the site. ML is notoriously hard to understand, because of the way it works; it gets trained on stuff, then it kind of does whatever it thinks it's supposed to do, and it becomes... not entirely a black box, but call it a grey box? It's not as bad as it used to be, but it's still relatively impenetrable. Just look at the examples I gave before about some of the other places it's failed as a detection tool.

On the generation side, it acts wonky, too; that's part of what's going on with the weird eyes and fingers in Stable Diffusion. But sometimes it's more subtle. I was using img2img, an SD tool that takes an image and lets you tweak it (change its art style, turn a brunette character blonde, etc.), and trying to make a female character wearing a uniform more tan. In a big chunk of the results, she ended up with a bow on her chest. It took me a minute, but then I realized that most of the training images tagged with "tan" or "tanned" AND "woman" were probably women in bikinis, so the model kept nudging her toward a bikini top: as far as it was concerned, it was doing what I had asked. It doesn't actually understand what that word means, just the images it's associated with.
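(If you've never touched it, here's roughly what that kind of img2img call looks like in code. This is a minimal sketch using the Hugging Face diffusers library; the model name, file names, and settings are illustrative placeholders rather than my actual setup, and most people drive this through a web UI rather than a script.)

```python
# Minimal img2img sketch with Hugging Face diffusers (illustrative only).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Hypothetical model choice; any SD 1.x checkpoint works similarly here.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("uniform_character.png").convert("RGB")  # hypothetical input

result = pipe(
    prompt="tanned woman in a uniform",  # the wording that triggered the bikini association
    image=init_image,
    strength=0.4,        # how far the model may drift from the original image
    guidance_scale=7.5,  # how strongly it follows the prompt
).images[0]

result.save("uniform_character_tan.png")
```

The relevant point is that "tanned" isn't a concept to the model; it's just a statistical pull toward whatever images carried that tag in the training data.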

Another, more subtle one, showed up on social media for a while. Pictures of people eating mostly work fine, or at least only a little weird, but pictures of people eating spaghetti showed them shoving it in their mouths, food all over their faces, and even plates or bowls on their heads. A bunch of folks were like "WTAF," but I recognized the cause almost immediately: they'd included a bunch of pictures of kids (along with pictures of everything else) in the training data. When do you take pictures of kids eating spaghetti? When they're making a mess of themselves, because it's cute.

I talk about it because I've talked about it for years. I'm not afraid of it, but I also acknowledge it's going to change so, so much. I'm quite open about where and how I do and don't use it. I don't see any reason not to; certainly not the fear that I'm going to be banned from a site where I'm not paid to produce content. That's not a threat or anything; I like writing here, and I like the community. I'm immensely grateful for almost everything related to Lit. But I'm also not going to hide from the future.
 
So why the hell do you care so much if nth uses ai tools? If it doesn't make his writing better than yours why do you give a shit? Again I will repeat for the couple of witch hunters that I DO NOT USE AI.

You need to scroll up and re-read my posts.

I don't care if anyone uses AI, in general. Laurel does. She matters more than either you or I do. And I DO care if I enter a contest with something I wrote, and lose money to a person who did not. That would irk me, though not all that severely.

My job is HEAVILY affected by AI, and in a way I spend my working life fighting its entire thesis. I think almost everyone's job is threatened by AI; my real problem with it is that it's likely to erode critical thinking, and that's a shame. We're already pretty stupid, as a group, and I have little need for us to get even stupider. It amazes me that so many folks are so willing to plunge so cheerfully into a technology that's so likely to destroy their world.

Although, that's well beyond the scope of this thread, or of Lit in general. My problems with AI go substantially farther than simply worrying about its effect on free smut.
 
None of us will report you. But if your comments here seem to suggest that you're using AI, the moderators or even Laurel might look at your stories.
I WROTE IT! I wrote my story and it even has a moral to it which I was definitely not expecting lol. I even got it over 900 words whew! I will read it over and then send it in. I'm gonna turn the comments back on too because I am a little bit of a masochist lolol. Don't anybody report my story for ai either!
 
I figure it is still a good idea to declare my innocence lolol. Just about everybody trying to post a story is probably saying the same thing but to be fair my story sounds like my comments here so it should be detected as shitty human writing - though not too shitty to be posted because I mean you can understand me lol - and not ai but we will have to see.

Chin up. It'll be aiiight, fam.
 
So why the hell do you care so much if nth uses ai tools? If it doesn't make his writing better than yours why do you give a shit?

Because quality isn't the only possible benefit of using AI and software tools. Those who do are also able to produce work on an industrial scale, as both NoTalentHack and MourningWarbler did, saving huge amounts of time and energy in the process.

NoTalentHack explained that his own recent contest entry was only possible because he relied upon his raft of software to help create and edit his completed draft. Then, he approached the website's administrators to have his entry added at the last minute.

That's an example of AI being used to tip the scales in an author's favour. Without his software, NoTalentHack wouldn't have completed his editing process in time to enter the contest. If he had to rely on his own abilities and take the time to copy-edit his own work, to consider possible adjustments and how they'd impact the flow and style of his document, he'd have been late.

On the question you asked about "giving a shit", that's the exact same phrasing that NoTalentHack used themselves. You both think the same way, and that's fine. But if you don't understand why some writers are so appalled by the very notion of AI-generated content being passed off as an author's own work, being used to cut so many corners, being used to drag an author's "style score" from the trenches to the stars at the touch of a button, being used to compete against their peers who are left at a disadvantage...

Even if there are those who would stop short of describing NoTalentHack's use of software as cheating, there's no doubt that anyone relying on those tools is embarrassing themselves within a community of creative writers and artists. How could you look your peers in the eye?

On the balance of probabilities, Literotica called MourningWarbler a cheat, and I agree with that.

However, on the balance of probabilities, I believe that NoTalentHack is also a cheat.
 
And, people at work tell me it ain't right, because they can't write a prompt, therefore they oppose me, instead of learning.

I respect your point of view, but you're not learning anything in that example.

Your argument boils down to, using your example, jumping onto Stack Overflow and taking someone else's code to fix an immediate problem.

In that scenario, which happens every day, the programmer doesn't fully understand why the code works. All they care about is that it satisfies their immediate need.

Nonetheless, creative writing isn't programming.

Part of the skillset of being a writer is to read an early draft and detect the parts which need to be polished, tweaked or entirely rewritten. By subscribing to your raft of software solutions, you're asking the tools to identify those sections instead of doing it yourself. Each time you agree to let the software tools change your work, whether it's a single word, sentence or paragraph, you're learning nothing. Your skills as a writer aren't improving. It's lazy and goes against the very spirit of what it means to sit in front of a simple word processor and hone your craft, which takes time.

Learning takes time, whether you have access to digital resources or paper-based ones. Humans have to take the time to learn about different parts of the overall creative writing skillset. That's how you improve, grow and get better.

Learning is not sitting back in your gaming chair and hitting 'accept' while a piece of software makes a raft of wholesale changes to word choice, grammar, punctuation, style and everything else. Changes that the author has nothing to do with and doesn't understand, which is why they employ the tools in the first place.

Your argument is the opposite of learning. It separates you from reviewing your own work, from understanding why it needs to be tweaked, polished or entirely redrafted.

Later I would ask the internet to just give me the info and months became days.

Information isn't learning, just as information isn't knowledge.
 
My question was genuine, but it was designed to make a point: you can (as you know, and I'm not saying you don't) do a better job than AI, and yet people are trying to probe its capabilities to make it "work better." I was pointing out that that's exactly what Laurel is worried about, and it's likely to be the reason she's taken such a hard line.

If you've been following the discussion, you should know that the question came up in the context of refuting claims that people can reliably tell apart AI-generated from human-written text. To paraphrase, a commenter was describing "telltale signs" of AI, like even sentence length, no obscure words, and no misspellings, and claiming that they followed naturally from how these systems work ("predicting the most likely string of words"—a description so oversimplified as to be dangerously misleading), and the counter was that no, that's just the default setting, and you can change the writing style with a suitable prompt.
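(To make "default setting" a bit more concrete: with an open model driven through the Hugging Face transformers library, the knobs that shape how "even" the output reads look roughly like the sketch below. The model, prompt, and numbers are purely illustrative assumptions, not anybody's actual workflow.)

```python
# Illustrative sketch: the same model, two different sampling setups.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in for any text generator
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Write a short scene in choppy, uneven sentences, with slang and asides:"
inputs = tok(prompt, return_tensors="pt")

# Conservative sampling: the model leans on its most probable words,
# which is part of what produces the even, bland "default" register.
bland = model.generate(**inputs, max_new_tokens=120, do_sample=True,
                       temperature=0.7, top_p=0.9, pad_token_id=tok.eos_token_id)

# Looser sampling plus a style instruction in the prompt: more varied
# word choice and sentence rhythm, i.e. fewer of the supposed "telltale signs."
varied = model.generate(**inputs, max_new_tokens=120, do_sample=True,
                        temperature=1.2, top_p=0.98, pad_token_id=tok.eos_token_id)

print(tok.decode(varied[0], skip_special_tokens=True))
```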

BTW, nobody has yet responded to the implicit claim that just because these are the characteristics some people associate with AI-generated text, they are necessarily the characteristics that trigger AI detectors. I don't think anyone has done any kind of reliable or systematic test to verify that, and we know from other contexts that the way computers "think" about a problem is often very different from how humans think, even if both reach similar results. And in any case, tests have shown those detectors to be bunk.
 
I tried for 3 decades to learn how to write. Something was missing and I couldn't work it out. With AI, I made it write thousands of pages, readable versions of my ideas, till I understood what it would look like. Then I asked the AI for its opinion on the text I wrote. "Use more show, less tell." How to do that? "Write 'He trembled and backed away' instead of telling the reader he was afraid." Use body language, with examples; use dialogue tags for more variation; vary your sentence lengths. "In this paragraph, too many sentences start the same; try to rewrite them." And of course, instead of a paper dictionary I use dict.leo.org or an online thesaurus to find things in seconds. So, I write all my texts myself, in Word, and I get instant feedback about spelling and grammar. I use a tool to tell me about shortcomings of my story. And now people using paper and human editors tell me I am a cheat.

I would be careful with asking AI tools for a critique of your work. They are designed to give an answer that sounds like the kind of answer a human might give to a critique request; they may be influenced by some characteristics of your work, but they're not really designed to analyse what you're trying to achieve with it and think about how your writing choices might support or undermine that.

There are times when "show, don't tell" is the wrong advice. There are times when repetition of sentence structure is the right choice. AI is unlikely to be good at recognising when those times are.

Many years back, the US Air Force was trying to figure out the average measurements of a pilot. There are about ten measurements that are important to designing an aircraft cockpit - arm length, leg length, yada yada - and their grand plan was they'd figure out what the average was for each of those measurements, design their aircraft for the average pilot, then recruit pilots who matched those averages.

What they found was that nobody was average. Every pilot had some quirks that didn't fit with the Average Human setup. So they abandoned the idea of designing and recruiting to the average, and instead worked on making things adjustable so they could fit the unique quirks of whoever was flying the plane at the time.

(Or at least, that's how I heard it, and I'm not fact-checking this story because it's a convenient metaphor ;-)

AI critique is generally going to give you a kind of "average human" guidance. That doesn't mean you can't learn from it - evidently you already have learned things that work for you! - but it's important to challenge the advice it gives: is it right for my story? Some folk are just a bit too willing to take the computer's advice as gospel.
 
I respect your point of view, but you're not learning anything in that example.

Your argument boils down to, using your example, jumping onto Stack Overflow and taking someone else's code to fix an immediate problem.

In that scenario, which happens every day, the programmer doesn't fully understand why the code works. All they care about is that it satisfies their immediate need.

This is perhaps too much of a generalisation. We don't all learn in the same way.

I've copied a fair bit of code from SO in my time, and for me it's been an effective way to learn stuff. For whatever reason, I work better with taking a concrete example where somebody has Done The Thing, looking through it step by step, and thinking about how that works, than with reading the docs and trying to turn that more abstract information into practical understanding. Once I've adapted somebody else's example to my own purposes, then I'll go read in the docs and fill in whatever knowledge I didn't get from that SO snippet. But it doesn't work if I try to do that first.

By my reading, Ben has a similar learning style for writing technique. That may or may not be compatible with Literotica, but it doesn't mean it's worthless for him, even if there are limitations to how much one should trust an AI's writing advice.
 
However, on the balance of probabilities, I believe that NoTalentHack is also a cheat.
You know, I don’t think you ever did answer the question of which tool you’re using for your evaluations. Any chance you’ll be a bit more up front about that? Give the “condemned” a chance to confront their accuser? Because I have a feeling it’s about as accurate as this:

[attached image, taken from the Ars Technica article linked below]

That image is part of an article at https://arstechnica.com/information...-think-the-us-constitution-was-written-by-ai/ as is this:

A 2023 study from researchers at the University of Maryland demonstrated empirically that detectors for AI-generated text are not reliable in practical scenarios and that they perform only marginally better than a random classifier. Not only do they return false positives, but detectors and watermarking schemes (that seek to alter word choice in a telltale way) can easily be defeated by "paraphrasing attacks" that modify language model output while retaining its meaning.

"I think they're mostly snake oil," said AI researcher Simon Willison of AI detector products. "Everyone desperately wants them to work—people in education especially—and it's easy to sell a product that everyone wants, especially when it's really hard to prove if it's effective or not."

Additionally, a recent study from Stanford University researchers showed that AI writing detection is biased against non-native English speakers, throwing out high false-positive rates for their human-written work and potentially penalizing them in the global discourse if AI detectors become widely used.

So. Wanna give us a crack at it? I would guess that if you put the vast majority of the site’s content through your tool of choice, it would find fault with it.

Also, now that I understand the way these detectors work, I feel kind of great about having been flagged.

Going back to your original attack on me, your tool cited a section of After the Future is Gone as having been 81.1% likely to be AI generated. This one:

[attached image: the excerpt from After the Future is Gone that was flagged]
Which is mostly dialog. Given that I try to write people as they talk, instead of as speechifying machines—simple words, unaffected phrasing, and so on—I would expect it to have very low perplexity, as it’s described in that article.

And, again, its burstiness is probably going to be low, too; people have a number of short exchanges, very occasionally interspersed with longer ones that ARE longer because they’re discussing more complex topics.
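(For anyone who wants to see what those two measures actually are, here's a rough sketch of how they're commonly computed, using GPT-2 via the Hugging Face transformers library as the scoring model. This illustrates the general idea described in the Ars Technica article, not the inner workings of whatever tool was used on my story; the sample dialogue lines are made up.)

```python
# Rough sketch: per-sentence perplexity, and its variance as a "burstiness" proxy.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Low perplexity = the scoring model finds the wording very predictable."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

# Made-up dialogue-style lines: short, plain words, unaffected phrasing.
sentences = [
    '"You okay?" he asked.',
    '"Yeah. Just tired."',
    '"Long day?"',
    '"The longest."',
]

ppls = [sentence_perplexity(s) for s in sentences]
mean_ppl = sum(ppls) / len(ppls)
burstiness = sum((p - mean_ppl) ** 2 for p in ppls) / len(ppls)  # low variance = flat rhythm

print(f"mean perplexity: {mean_ppl:.1f}, burstiness (variance): {burstiness:.1f}")
```

Plain, short, natural-sounding dialogue tends to score low on both measures, which is exactly the point above.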

So. Thanks for verifying that I make people sound like people. Like I said, can I get the name of that tool? I can add it to my “raft of software.” Make sure my characters aren’t talking too high-falutin’.

It’s funny, though; until June, the only software I used was Google Docs, and I’ve actually slowed down since then. Also, like the student mentioned in the article that had to defend his dissertation, I can provide the history log of every one of my stories, because I wrote them in Google Docs.

Now, part of why I slowed down is RL stuff, and part of it is that my ADHD, which compelled me to write A LOT at the beginning (along with convincing my idiot self that I could keep up that pace and setting some goals that I'm amazed I just barely squeaked out), made it harder to focus as I got over the New Hobby Energy. But part of it is that I'm editing more, thinking more about word choice, etc., even when I'm writing on my phone and don't have access to said "raft."

If I were using AI to generate my stuff, especially, as you’ve suggested, from the beginning, wouldn’t my output increase instead of decrease? Strange. Could it be that you’re… wrong? That you’re talking out of your ass, in fact?

One last note: I may be a bad writer. I may even be a terrible one. I’ll take criticism from peers on that, although, not to toot my own horn, I’ve mostly received praise. I’ve discussed the craft of writing both in the general and the specific with almost everyone I’ve highlighted in my afterwords, plus others on the forums, Discord, and in emails. I’ve received awards on this and other sites. And I’m quite clear about my process, too. I have no problem looking any of my peers in the eye.

But you aren’t my peer. They’ve got skin in the game. You don’t. Even Lovecraft and Tilan, who I have gotten into it with a lot, get my respect for putting themselves out there. So, while I’ll entertain their criticism, yours holds no water.

Get to publishing, Deputy Fife, and give us the name of your detector so I can help you understand how ridiculous your tool and accusations actually are. Maybe then I can look you in the eye, too, instead of down on you.
 
I WROTE IT! I wrote my story and it even has a moral to it which I was definitely not expecting lol. I even got it over 900 words whew! I will read it over and then send it in. I'm gonna turn the comments back on too because I am a little bit of a masochist lolol. Don't anybody report my story for ai either!
Huzzah!!!
 
I pulled all my stories and wrote the reason in my comment. There is no message stronger than that.
It's a bummer that you felt you'd had enough, but you're the one who's lost out. Seems to me you've just shot yourself in the foot, rather than sent a message.
 
what on earth lol his story got rejected FOUR times after he sent a message. As W. C. Fields said "If at first you don't succeed, try, try again. Then quit. No use being a damn fool about it."

oh wait my bad I am getting him confused with the person who was rejected FOUR times after sending a message. I still think the Fields quote applies since from what I can tell this is his second story in a row to get rejected after he spent a shitload of time fixing something that ain't broken
I guess it boils down to how much he wants to get his writing out there. Not as much as those who persevered, clearly. @Portly_Penguin hung in there, and now he's got stuff published, so that's where my sympathy lies.
 
My writing had a couple of purposes. One was to prove to myself that people would read my stories, that some would like them, and that the technique is valid. I had close to 200k readers. All but one story scored well over 4*, several had a red H, and they racked up 190 favorite marks. Only one complaint, about some tense mistakes. I gathered 78 followers in 4 months or so. And I have found my style and am content with my technique now.

But besides those points, one thing was most important to me: to balance my day job, to relieve stress. That was no longer the case. And, reading how others fared, I would just have been waiting for already-published stories to get pulled too, creating even more stress. Now they are gone and I don't have to think about that anymore. Now I wait. Once the AI witch hunt is over, there is a good chance I start publishing again.
Fair enough, that's a valid reason.

I still think you should have left your existing stories up, though. Now you've got to start over. But I get it, degrees of importance, huh? :)
 
I did the first time. And I was angry and I used time to fix a thing that was not broken. This time I used a lot of time before submitting. I won't waste any more time. Especially since I use writing as a valve to cope with my stressful 9-5 job. I pulled all my stories and wrote the reason in my comment. There is no message stronger than that.

Your position is completely at odds with that of Literotica.

The website wants content written by humans, for humans. They've been clear on the issue and they're entitled to refuse to publish your work.

Isn't it interesting, however, that all these people on the forum who are quitting the website, after being flagged for suspicious content, have all tried to downplay the offence of using AI and software tools to contribute to their final drafts?

I've spent a considerable part of my life writing for a variety of purposes. If my work, which took an incredibly long time to complete, was challenged for any reason as being unfit, I'd defend it. Full-throatedly. The last thing I'd do is pull all my contributions and run away to hide, without offering a substantial defence or clarification, like BenETrate and MourningWarbler did, because that creates the impression that I'd be attempting to conceal the truth.

Once the AI witch hunt is over, there is a good chance I start publishing again.

What you really mean by that is you'll only return once the website stops testing for what you've been accused by Literotica of using. Like the professional athlete who'll only return to competition once they stop testing for performance-enhancing drugs. What a scandalous approach to take.

For those who still don't understand this issue, this is what it boils down to. Every publisher will use software to detect content that was produced by AI or software tools. Either in full or in part. No tool can give an accurate score of 100%, so the balance of probabilities is the only way a positive result can be interpreted.

Literotica's red line may be 80%. Amazon's may be 75%. Others may be 70%. All that matters is those publishers can, and will, refuse to publish whatever work they believe to be in breach of their own rules.

This thread has been taken over by those who believe that using AI and software to contribute towards their own stories is no big deal. That it should be allowed to run rampant. However, it's cheating, and it's plain to see that not one of the elite writers here would stoop so low as to rely upon software to create their amazing work.

I look forward to Literotica trimming the fat around here, where terrible writers, relying upon their raft of software, get shown the door in disgrace. The only thing more hilarious is that there are writers here who think that installing the likes of ProWritingAid somehow transforms them into a talented author.

I'd actually pay to sit and watch those frauds try and write a story on a blank Word document. To see the beads of sweat form on their brow as they realise, quickly, that they don't have the first clue what they're doing.

NoTalentHacks, indeed.
 