AI Allegations Thread

My point is that few students will put time and effort into working through a problem if the solution and all the work are already available online or from a quick AI query. That's simply not how they function. So if everything is so easily available, how exactly can we teach them to think? How can we force them to apply their wits when all the answers lie one click away?
I know we can't really fight the progress of AI but we should start working on guidelines on how to prevent students from becoming lazy copy-paste drones. Right now, I believe no one has any clue how to do that.

It's not entirely a new problem.

A few decades back, schoolkids learned how to calculate square roots by hand. Then pocket calculators that could do it faster and more reliably than a human came along, and now the only people who learn how to do it by hand are the ones who design calculators/etc. or nerds who learn for fun. It's no longer part of the school curriculum.

Doesn't mean the students got lazier or stupider once we let them use calculators; it just means we updated the curriculum to shift focus to learning the things that calculators couldn't do, and to taking advantage of what they could.

I'd be going too far if I said "AI" was just a repeat of that. The new products are faster than humans, but unlike calculators they have serious reliability issues. Maybe some of the emphasis shifts to teaching students how to evaluate AI outputs (even with calculators, a good student is asking "does that answer look about right or did I press the wrong button?") and to the problems that AIs can't handle.

I saw a piece a while back about a professor who gave students a GPT-generated essay on a historical question and asked them to identify the flaws in its "reasoning". That seems like a good approach.

I am also very skeptical about people or tools being able to spot AI-generated content in the future. AI can be taught to use slang, to insert a grammatical error or two, and to apply other methods to fake a human author. This problem needs a proper approach and a prompt response, yet we are already behind in developing strategies and guidelines.

It can do those things. What I haven't yet seen it fake effectively is logic. AI-written stories start to show major inconsistencies even within a few paragraphs, and what I've seen on other applications suffers from similar flaws.
 
AI allegations are complete nonsense. The AIs I'm currently working with (mainly Bard) strictly avoid slurs, violence, and anything remotely explicit, making the notion of using them for writing porn absurd.
Others in this thread have proved that there are workarounds for explicit content. Perhaps the "eggplant" example upthread isn't the best, but that would just mean doing a Find & Replace.
In terms of storytelling, these machines currently produce narratives at a middle-school level at best (still surpassing much of the sewage circulating the top lists). So, why would anyone contemplate using them for story generation?

Their true usage lies not in crafting original text but in refining it. With the judicious assistance of AI, grammar, syntax, wording, and pace can be significantly enhanced while preserving one's unique tone and style. Though it entails some trial and error with prompts, mastering the process is achievable. It doesn't necessarily save time, but the end result is undeniably worthwhile.
I'm confused. You say AI can't produce anything sophisticated, but also that it can make human writing more sophisticated. Genuinely interested, because I've only ever been on the receiving end of AI-supported translation, so I don't really know what the capabilities are beyond, as you say, very unsophisticated text generation.
 
Others in this thread have proved that there are workarounds for explicit content. Perhaps the "eggplant" example upthread isn't the best, but that would just mean doing a Find & Replace.

Exactly so. I don't want to get too much into the specifics of how I prompted GPT to get those examples, but the "eggplant"/"flower"/"melons" aren't there by accident. They're a predictable side-effect of one of the things I did to get past its content restrictions. The find/replace would've been trivial to do, I just didn't want to mislead people about what GPT had and hadn't written.
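Just to show how trivial the mechanical step is, here's a rough Python sketch of that kind of find/replace. The word map is hypothetical - the actual substitutions aren't spelled out upthread - so treat it purely as an illustration:

```python
import re

# Hypothetical placeholder-to-term map; the real substitutions aren't given
# upthread, so the right-hand sides are just stand-ins.
REPLACEMENTS = {
    "eggplant": "<explicit term A>",
    "flower": "<explicit term B>",
    "melons": "<explicit term C>",
}

def swap_placeholders(text: str) -> str:
    """Swap each euphemism back, matching whole words only, case-insensitively."""
    for placeholder, term in REPLACEMENTS.items():
        text = re.sub(rf"\b{re.escape(placeholder)}\b", term, text, flags=re.IGNORECASE)
    return text

print(swap_placeholders("She tossed him the eggplant and laughed at the melons joke."))
```

Whole-word matching is the only real wrinkle; a naive replace would also rewrite words like "sunflower".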
 
This isn't new. Books on writing were deprecated by blog posts on writing; the information is the same within one or two degrees of freedom for modernization - it's simply the mechanism of delivery that has changed. You still need to be able to read it (or listen to it, if it's narrated) in order to apply it.

A car is a mechanical contraption that (for most people) is purchased as a (one could say mandatory) method of saving time. Whether the transmission is manual or automatic makes zero difference to the primary function of a car - that is, getting you from A to B in less than a day. (I grew up driving manual cars, I've owned an Audi S3, and I currently drive a computerised starship because it's convenient.) Manual cars are becoming an anachronism - like piston-engined aircraft, there will always be some around, but the dominant paradigm has moved on.

School's ultimate function is to teach you to think. Leaving aside the argument as to whether that's what it actually accomplishes, when you come out of school you are supposed to possess a baseline ability in mathematics and language and some exposure to art, science, geography and history - enough to be aware that the subjects exist, which for a subset of people is often enough to nudge them on the correct path. Some people never achieve that, and for some people that's simply not something they see as important. The presence or absence of online schoolwork makes no difference to these people bar permitting them a slightly easier path through a mandated set of hoops.

At a fundamental level, I believe, AI systems don't replace people. They change the power structures and accessibility structures for some forms of creative or deductive work, and the jury is out on whether that's a beneficial thing or not. There may be some roles where AI can replace significant parts of a person's job, but IMO a smart person will leverage that in new and creative ways - say what you want, but we're a very creative species.

A big one in my industry is AI-generated code for algorithms or system integrations. It's very cool to be able to ask an AI assistant "hey, write me some code that integrates with this system" and get a wodge of three hundred lines that I didn't have to write. Except that, now, as an experienced engineer, the very first thing I do is vet it. An AI can use formal proofs to prove that a set of code is safe, but it has NO FUCKING IDEA about the rest of the system that the code will be used in - for that you need a human mind... for now ;)
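To make the "vet it" step concrete, here's a hypothetical sketch (none of these endpoints, names, or headers come from a real system) of the gap I mean: the generated call is fine in isolation, but only a human who knows the wider system adds the timeout and idempotency key the rest of the stack depends on.

```python
import requests

# Roughly what an assistant might generate: correct in isolation.
def push_order(order: dict, base_url: str) -> dict:
    resp = requests.post(f"{base_url}/orders", json=order)
    resp.raise_for_status()
    return resp.json()

# What the human reviewer adds after vetting, because only they know the
# wider system: the downstream service hangs under load (hence the explicit
# timeout) and replays duplicate requests (hence the idempotency key).
def push_order_vetted(order: dict, base_url: str, idempotency_key: str) -> dict:
    resp = requests.post(
        f"{base_url}/orders",
        json=order,
        headers={"Idempotency-Key": idempotency_key},
        timeout=5,  # seconds; the rest of our stack assumes calls can't hang
    )
    resp.raise_for_status()
    return resp.json()
```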


My personal opinion is that there will be an entirely new industry spawned - people who are very good at spotting AI-generated content and flagging it.

At this point the genie is out of the bottle, and the best we can do is adapt.

oh. Shit. I was ranting again.
I still think some of this "progress" is turning the world to shit. To a gearhead there is a real difference between a manual trans and a slushbox; a car isn't just an appliance to me.
 
Don't buy into any BS people try to sell you. Stories written years ago are being removed without any valid explanation. It seems like a form of torture, and anyone who supports it is likely a sociopath. Unfortunately, there's no shortage of those among porn writers and consumers.
Well, so much for a reasonable discussion then...
 
It doesn't necessarily save time, but the end result is undeniably worthwhile.

If it doesn't save time, what's worthwhile about it?

I'm trying to understand your position once the rhetoric is stripped away, so be specific. What can a typical Lit writer, who enjoys writing and has found reasonable success here, find "worthwhile" in using AI? Because I admit, I don't get it.
 
Their true usage lies not in crafting original text but in refining it. With the judicious assistance of AI, grammar, syntax, wording, and pace can be significantly enhanced while preserving one's unique tone and style. Though it entails some trial and error with prompts, mastering the process is achievable. It doesn't necessarily save time, but the end result is undeniably worthwhile.
But that usage is exactly the kind of thing Laurel has described as not permitted. From what she's said she doesn't want to allow any AI assistance beyond spell checking and simple grammar compliance.
 
If it doesn't save time, what's worthwhile about it?

I'm trying to understand your position once the rhetoric is stripped away, so be specific. What can a typical Lit writer, who enjoys writing and has found reasonable success here, find "worthwhile" in using AI? Because I admit, I don't get it.
I’ll say this: I use ProWritingAid, and while I actually use very few of its suggestions (outside of obvious spelling or punctuation ones), it does force me to think harder about how I put my sentences and therefore paragraphs together. It makes me slow down when I’m revising and think “do I really want to use passive voice here?” or “I’ve used that same word a lot” or “maybe that is too long a sentence, and I should break it up.”

In a 10K-word story, I’ll take its “advice” maybe 5-15 times, depending on how much revising I’ve done during the writing process itself. But when I’m done, regardless of whether I did or not, I feel like I’ve made the best version I can at my current skill level, and for my particular style. Well, at least without a dedicated editor.

In terms of process, I tend to think of it as about at the level of a critical but not pushy alpha reader who likes my stuff in general (and therefore won’t warn me if the story itself is dumb), but who also reads it with a style guide in hand and a red pen at the ready. If I took every suggestion (or even most), my writing would be worse. But it would also be worse if I didn’t include it in the process.
 
SMDLAU doesn't seem like someone who cares about guidelines, especially with those initials.

They're using unreliable AI to detect AI; this is ridiculous, a witch hunt.

Following their logic, should I stop producing music with Ableton just because some people still prefer to jam? Should I throw away my old synthesizers? Should we discard our computers and smartphones and go back to using quills?

There's only good art or bad art. In the AI era, either adapt or step aside.


The arguments I've heard against AI on the graphic side are really just rehashes of the arguments painters used against photographers a century or so ago.
They weren't "real artists" because the machine was doing all the work.
 
This is the revised version with artificial assistance:


And this is the original paragraph:


The changes are mild but undoubtedly worthwhile.
But this is exactly the kind of thing Laurel is trying to prohibit. Grammar and spelling are one thing; this is rewriting. There's no way to reliably identify someone doing what you show here, but this extent of rewriting seems like breaking the rule against AI writing to me.
 
I like the second example better. It would have perhaps been more useful if you hadn't told us which one involved AI.

The description of the raindrop, in particular, is FAR better on the second example. Show me a weary pilgrim who actually wants a longer journey and I'll show you a robot who's never had sore feet.
 
Your approach is overly strict. This isn't a complete rewrite but an enhancement of existing text - a process that involves multiple attempts, with the final adjustments being my own. The guidelines explicitly prohibit the generation of content by AI and emphasize the importance of originality. However, I believe that refining an existing text is legitimate.
My approach is irrelevant; I don't particularly care. But my understanding of Laurel's approach, from the messages from her that people facing rejections have shared here on the forum, is that this is exactly the kind of thing she is trying to filter out and reject.
 
The original, to my eye, is better. The AI version is over-written - as the expression goes, "purple prose." I'd ignore the AI input, in this small example, and go with the original.
I'd change 'tired worm' to 'weary pilgrim' and leave it at that.
 
The intention is clear, but the implementation has failed.

As I've posted already, Laurel is in excellent company here. NO institution that cares about writing integrity has figured out a fair way to handle AI.

She's trying, and that's all she can do with the tools currently available. Nobody's pretending it's a perfect system.
 
IS THIS FOR REAL? This is some crazy SHIT man! Have not seen this. Although I realize ChatGPT or whatev is pretty new.

I haven't submitted anything major in the last few months, just some poems (I publish everything under other names, not this one), but I haven't seen this phenomenon nor heard about it from others.

Based on the sampling of notes I read in this chain, I think I'm hearing that if your writing style is quirky and idiosyncratic, you're less likely to run into the problem.

I doubt if any respectable AI would try to write like me so I'm probably safe. 🤣🤣🤣
 
The intention is clear, but the implementation has failed. Many writers, including some well-established ones, claim they were wronged and insist they didn't use AI. Are they all lying?
I wouldn't assume any of them are lying. But you have said that you used AI assistance for the passage you shared, and that was the only passage I was talking about. It seems like you aren't understanding what I'm saying, so I'll back up a little. I have been saying in several threads that AI detectors don't work reliably, and that it's fucked up that so many people are having their work pulled for AI. Some of them have shared the communication they got from Laurel as we have all been trying to figure out what's causing all the rejections and how to solve it. One of those messages asked the author whether they used Grammarly, and said spell check and grammar fixes are okay, but not the AI rewriting it can do. Now you are here talking about using AI rewriting, and I am here warning you that that's exactly what Laurel is looking to reject.
 
I wouldn't assume any of them are lying. But you have said that you used AI assistance for the passage you shared, and that was the only passage I was talking about. It seems like you aren't understanding what I'm saying, so I'll back up a little. I have been saying in several threads that AI detectors don't work reliably, and that it's fucked up that so many people are having their work pulled for AI. Some of them have shared the communication they got from Laurel as we have all been trying to figure out what's causing all the rejections and how to solve it. One of those messages asked the author whether they used Grammarly, and said spell check and grammar fixes are okay, but not the AI rewriting it can do. Now you are here talking about using AI rewriting, and I am here warning you that that's exactly what Laurel is looking to reject.
https://literotica.com/faq/publishing/publishing-ai has recently been updated and confirms the "spelling/grammar OK, rewriting bad" position:

"With the proliferation of “smart” writing software and spelling/grammar apps in the last few years, questions arise about whether a story using these tools should be considered AI generated or human written. If blocks of text are being generated by a machine, that’s clearly AI writing. If software helps correct spelling and grammar in text that a human author has written, that’s probably not an AI generated story.

"One area of particular concern is software or apps that “rewrite” your paragraphs or stories for you. The text that results from rewriting features is definitely AI generated. These type of apps are replacing original human written text with generic AI generated text, and should be avoided when submitting work to Literotica. Readers prefer to experience your worlds and your fantasies in your own unique voice, rather than having them smoothed into a generic artificial voice available to everyone else using the same software."
 
You say rewriting, and I say refining. In any case, if I ever choose to post here, I highly doubt it will be flagged.

I tested both your examples from this post at https://copyleaks.com/ai-content-detector. On your original paragraph:

[Attachment 2294056: Copyleaks detection result for the original paragraph]

And on the AI-assisted version:

[Attachment 2294055: Copyleaks detection result for the AI-assisted version]


I don't know how accurate that tool is overall, nor whether it's the tool Literotica is using. But in this case at least, it looks like those small changes would've been enough to get your text flagged.
 
Good advice. How about this:

And yet, why is human input considered legitimate while AI is not? These are my original, vivid images, and I'll employ any means to enhance them.
I think your second "I rewrote it" go pushes it towards the AI example - see also Bramble's post below.

I think the prose in the third sample tries too hard, and you've lost the spontaneity of your plainer English first go.

I reckon one of the major issues with all of these "checker programs" is that writers lose confidence in their own natural style, and are given too many options. These programs might be great for bland, standardized business writing, but I reckon they're crap for creative writing. Whenever I see examples of "suggested improvements," I cringe, because they're nearly always terrible.
 