More rejects or more AI?

iwatchus

I have been thinking about this for the last few days. No filter for keeping out AI will be perfect.

Should the site err on the side of keeping AI out, at the cost of some false accusations?
Or should the site let more AI slip through so that real writers whose work looks dubious can still publish?

It is going to be one or the other. (To some extent it will be both.)
 
I prefer the site keeping all the bot-glop out, and if there are some legit submissions that she's suspicious of and declines, so be it.
Although I do kind of wish they'd set up a dedicated forum for the rejections instead of sending them to the AH...
 
Although I do kind of wish they'd set up a dedicated forum for the rejections instead of sending them to the AH...
Essentially, Laurel has delegated the job of answering all these questions - AI, underage content, pending times and so on - to the AH. I consider it the price of having this forum. Laurel doesn't have to take time out of reviewing stories, and there's generally someone here who has time and energy to answer.
 
No "AI" works. Period, full stop. I support whatever policy or initiative keeps out computer generated slop. There's plenty of other places on the internet you can go for that crap.
 
The site is a Common Carrier and has no liability for what is posted. If it errs, it does so for other reasons.

Exactly. I agree with keeping AI out, but doing so is a choice. If someone wanted to open a site and allow AI, there would be no liability in a legal sense.
It's whatever the market demands, and I don't think the general public cares much who writes the stories. The AH cares a great deal, but we aren't representative of the general population.

But, to the core of the original question, is it worse to imprison an innocent man, or allow a guilty man to go free?
The reality is there are AI generated stories up on the site right now. We've discussed the flaws in AI detection and it would be foolish to think it only errs in one direction.
So, how many legit stories get rejected to keep an AI story out?
It's easy to say, "as many as it takes" when it isn't your story getting rejected.
But what's the cost to the site? How many potential authors take their ball and go home because of the frustration of dealing with that?
There's certainly a cost.
 
I fully support the strict no-AI policy. But I think Laurel needs to offer more guidance to all those whose work gets rejected.
I'd say that the number of false positives is far from zero, and that belief is supported by our collective conclusion here that free online AI-detection tools suck. So it makes sense to doubt that Laurel's method of detection, whatever that is, is highly effective and precise.

I support keeping it strict, but at least offer something, some way for those affected to resolve their potential false positives rather than putting the burden on our shoulders here.
 
But, to the core of the original question, is it worse to imprison an innocent man, or allow a guilty man to go free?
Nobody's being imprisoned. Nobody's having anything taken from them. All that's happening is that they're denied the opportunity to publish their story. They're no worse off than they were before. They can find somewhere else to publish their story.
But what's the cost to the site? How many potential authors take their ball and go home because of the frustration of dealing with that?
There's certainly a cost.
The number of stories published every day seems to be going up. I don't think the site is terribly worried about losing authors, particularly not authors whose writing could be confused with AI-generated work.
 
Nobody's being imprisoned. Nobody's having anything taken from them. All that's happening is that they're denied the opportunity to publish their story. They're no worse off than they were before. They can find somewhere else to publish their story.

The number of stories published every day seems to be going up. I don't think the site is terribly worried about losing authors, particularly not authors whose writing could be confused with AI-generated work.

It's a philosophical question. Not everything is literal.
It's easy to be dismissive when it isn't your work being rejected.

Since we don't know what is getting rejected, why would we assume that "mistaken for AI" is somehow synonymous with "not good"?
 
I know what it's looking for, and I know what it means when it finds what it finds.

If maintaining the walls meant I was the one out in the cold, I'd be pretty broken up about it. That's why I try to help petitioners, but I'm just one person and I don’t have any real power.

I don’t believe most of them (because I know what I know), but a few are genuine. Currently, given the volume I perceive on either side, I can't say I'd do any different.

All I can say is that I support what the site is doing.
 
I think the site will err on the side of protecting its own liability by doing everything to keep out AI. This also aligns with the site's philosophy of being a place for erotica created by humans.

As I've perhaps mentioned once or twice, I support this stance entirely.
It's not just the site's liability and reputation, it's ours as individual contributors. If Laurel slacks off to err on the side of letting AI through, it diminishes all of our work.

Getting falsely accused is a pain, there's no arguing that. There are, however, remediations available. You can appeal or rewrite (heaven forbid you change your style, though).

If the site starts accepting AI work, that damage is irreversible. As @StillStunned points out, this is a safe haven for human-created works. Once we openly accept AI, it stops being that, and we might all just as well post everything on Reddit.
 
I fully support the strict no-AI policy. But I think Laurel needs to offer more guidance to all those whose work gets rejected.
[...]
I support keeping it strict, but at least offer something, some way for those affected to resolve their potential false positives rather than putting the burden on our shoulders here.
I don't believe this is possible beyond the steps the site has already endorsed.
 
It's a philosophical question. Not everything is literal.
It's easy to be dismissive when it isn't your work being rejected.

Since we don't know what is getting rejected why would we assume that "mistaken for AI" is somehow synonymous with "not good"?
If you're good enough to write like AI, I wager you're good enough to not write as AI. Either that, or you're truly unfortunate to have learned the exact rhythm of writing AI uses and consistently use it. I don't know what they use to detect it, but I assume that you need a certain amount of AI rhythm before you're outed. As StillStunned says, it might be better to remove too much with the scalpel than too little.

I'm on the fence, though. I think there are legitimate cases where AI can be used. However, current AI is almost certainly all sourced unethically and unlawfully, and the creative process is likely to be driven more by the AI than directed by a human. This is why I think it shouldn't be allowed. A clear statement, among hopefully many, that the current form of AI is not okay.
 
I don't believe this is possible beyond the steps the site has already endorsed.
I understand you're basing this on your presumed knowledge of how their detection system works, but I remain unconvinced.
I mean, maybe you did acquire some insight, and maybe your assumptions are spot on, but the website's long-standing laziness and serious lack of communication with its users make me doubt that they did all they could to cover all the bases.
 
If you're good enough to write like AI, I wager you're good enough to not write as AI. Either that, or you're truly unfortunate to have learned the exact rhythm of writing AI uses and consistently use it. I don't know what they use to detect it, but I assume that you need a certain amount of AI rhythm before you're outed. As StillStunned says, it might be better to remove too much with the scalpel than too little.

I'm on the fence, though. I think there are legitimate cases where AI can be used. However, current AI is almost certainly all sourced unethically and unlawfully, and the creative process is likely to be driven more by the AI than directed by a human. This is why I think it shouldn't be allowed. A clear statement, among hopefully many, that the current form of AI is not okay.


I just think there are too many unknowns to judge the quality of work as being "AI like" because we don't really know what the tool is looking for.
Your scalpel analogy is apt, I just think we shouldn't lose sight of the fact that someone is getting cut. There seems to be a strong sense of, "I've got mine, and screw everyone on the outside looking in."
I want to keep AI out as well, but we need to minimize the harm to everyone.
Obviously none of us have any actual say in how this site works...
 
[...] but the website's long-standing laziness and serious lack of communication with its users make me doubt that they did all they could to cover all the bases.
I am currently attempting to cover the gap as best I can.
 
I hope they can continue to keep AI written stories away from Lit, but I fear the battle will be lost sooner rather than later.

AI has taken huge leaps in a short time, and detectors won't be able to keep up. I'm sure the day when AI can create human-like fictional content, indistinguishable from a human's, is not far off. I hope I'm wrong, and I'm afraid I'm not.
 
I am currently attempting to cover the gap as best I can.
You are, to your credit. I'm sure you've helped some authors resolve their issues. I'm just saying it should have been Lit doing something about it, not you. Or they could at least have provided more help and guidance to reduce the load on your back, and on the backs of all the others here who have tried to help. It feels like our goodwill in this case is being somewhat exploited by Lit.

Don't get me wrong, I understand that a good deal of this mess comes down to our individual perceptions. I first came to the forum full of admiration for Lit, and I remember searching for ways to donate to the site. But over time, my impression of the site changed a lot. I still see it in a positive light, but I'm not blind to their shenanigans, either.
 
I say this all the time, but I'm not sure how understood it is. My writing is therapy. I am better because of this platform. I believe in it, and I have dedicated many hundreds of hours to helping others make use of it (though your mileage may vary on how effective I've been). I have no problem volunteering time and effort to bear some of the burden of that particular conversation.

I won't always be the only person who knows. I'm just glad it was me first, and that I can contribute something because of it for some fellow authors.

EDIT: (not a plea for adulation. Don't need propping up)
 
Part of what got me thinking about this was the How To for new authors I am working on. What do people think about this excerpt for new authors? This is in the section on having your story sent back.
You are accused of using AI to help write your story. The site requires 100% human-written stories. It might be a true accusation. Did you use a tool like Grammarly to rewrite the text of your story anywhere “to fix the grammar and punctuation”? That is using AI. Did you write the story in your native language and have an AI translate it into English? That is using AI.

Sometimes the AI rejections are false. No one but Laurel and Manu knows exactly how the AI check works. It is clearly not one of the free online AI checkers, so don’t use one of those as proof that you didn’t use AI. Proving you wrote the story is hard. Sometimes someone more experienced can look at the story and suggest changes that might reduce whatever the check is seeing. But don’t ask someone to help you if you really did use AI. There are sites that allow AI stories. Better yet, struggle with the writing until you can do it on your own. But don’t whine about it. That’s the way this site is, and nothing you do is going to change that.

Thoughts on this?
 
Part of what got me thinking about this was the How To for new authors I am working on. What do people think about this excerpt for new authors? This is in the section on having your story sent back.


Thoughts on this?
The less said, the better.

Write your own story. Edit your own story, or enlist the services of a volunteer editor.
 
The less said, the better.

Write your own story. Edit your own story, or enlist the services of a volunteer editor.
I do say all that elsewhere. Some of the goals of the How To are to divert repeat questions away from the AH and to set new authors' expectations appropriately. At best, this will impact things around the fringes; I have no delusions of this being a panacea.

Now that it's you commenting, I have to include the other mention of AI in the How To (in a section discussing common questions asked in the AH):

“Can I use AI?” No. Not in any way. Pushing back on this one is likely to make you enemies and leave some of the regulars intoxicated. (There is a running joke about having to drink whenever the same question gets argued over and over again.)
 