onehitwanda
Venatrix Lacrimosal
- Joined
- May 20, 2013
- Posts
- 4,383
Not like we have much of a choice, is it? I wonder: are we okay with Laurel (or other mods) feeding our prose into AI detectors during the vetting process?
Short of publishing elsewhere — not really.
My speculation is that they do not use an LLM AI checker. Despite the faith some people place in them, I think they are minimally more useful than throwing darts. Witness some of the disputes in one of these AI threads (maybe this one) over whether a given passage is or is not AI according to the tools.
I suspect they do use some simple heuristic checker to flag suspicious writing that Laurel gives a more careful reading to. Because she is still making the ultimate choice, it is alright to have some false positives.
She has scan-read something like a million stories by now. When she is in moderation mode, her head has to be in a place that none of us can fully understand. I was in a similar place (different universe) regularly back in my coding days. It's almost like you interact with that universe like you do with the everyday world. You process lots of things and filter them down before you are even consciously aware of them. You have an equivalent of peripheral vision, noticing things you aren't really paying attention to.
I can't imagine she can process the number of stories she does without getting into that zone. And given how many stories she has processed, it would be surprising if she did not develop that relationship with the writing.
We do not know exactly how many stories she processes a day, but the stats for how many are published are not hard to find; @NotWise seems to track them regularly. Recently, it has been reported that about 200 stories a day get published. That does not count any stories that were rejected, or any edits that are submitted. We know both of those are non-zero. Are we talking 250 reviews a day or 400? I have no idea.
I assume that she recognizes red flags (and I assume that the set of red flags continues to evolve) that raise alarm bells about the writing. Manu coded those up and tested them on subsets of stories until they thought they mostly worked. Anything that gets flagged presumably gets more careful attention from Laurel. This is basically the same process I assume for automated checks about age of characters. If something spends more than two or three days in pending without getting approved, it probably tripped some flag. It does not mean she will ultimately reject it. If it spends weeks waiting, it may mean she is torn about accepting it. Or she set it aside and forgot about it, which definitely seems to happen.
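For what it's worth, the flag-then-human-review pipeline speculated about above could be as simple as a handful of surface-level heuristics. Here is a minimal sketch of that idea; the specific tells, weights, and threshold are entirely invented for illustration, and nobody outside the site knows what (if anything) the real checks look like:

```python
# Hypothetical sketch of a heuristic "red flag" pre-filter of the kind
# described above -- NOT Literotica's actual system. Every tell and
# threshold below is invented.

# Invented examples of stock phrases a simple checker might count.
AI_TELL_PHRASES = [
    "delve into",
    "tapestry of",
    "testament to",
    "in conclusion",
]

def needs_human_review(text: str, threshold: float = 2.0) -> bool:
    """Return True if the text trips enough heuristics to warrant a
    closer manual read. False positives are acceptable here, because
    a human makes the final accept/reject call."""
    words = text.split()
    if not words:
        return False
    per_1k = 1000.0 / len(words)  # normalize counts to per-1000-words
    lowered = text.lower()
    score = 0.0
    for phrase in AI_TELL_PHRASES:
        score += lowered.count(phrase) * per_1k
    # Unusually high em-dash density is another invented example tell.
    score += text.count("\u2014") * per_1k * 0.5
    return score >= threshold
```

Anything that scores over the threshold would land in a slower, human-reviewed queue (consistent with the two-to-three-day pending delays mentioned above); everything else sails through.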
I have no inside information; unlike some here, I have never had a meaningful interaction with Laurel (or Manu), and certainly nothing about any of this. But I have built a bunch of software systems and this is how I would imagine it working. YMMV. But cut them some slack, folks. I think this is a labor of love for them. And they are certainly not out to screw any one of us individually. I know it hurts to be told no. But that's life.
Well said.
Wait. What? From you?

We should all, also, take a step back from guessing in a public forum.
Over the years I've had almost every kind of rejection Lit has. Did you know Lit used to reject stories if you had a word that was more than 42 characters long? I suspect that specific rejection type went out with the site redesign, but if you were here 5+ years ago you remember the narrow reading column. I have had maybe 15+ rejections, always for different reasons.
Have you had conversations with Laurel? I realize that’s a possibility, since Laurel chooses to speak through a chosen few rather than communicating directly with the community.
So, yes or no: are you in contact with Laurel, and do you have direct information? It’s a fair question.
AwkwardMD says she thinks it's a homegrown system, not a commercial one like Originality, Sapling, or Copyleaks, though I've heard those mentioned on sites like Reddit as being the system used here.
What a remarkable piece of fan fiction.
Judging by everything I've seen here, it's highly unlikely that Lit has its own homegrown algorithm for AI detection. An effective algorithm of that type would take an impressive amount of cutting-edge programming skill and knowledge, as well as resources and a lot of time for development and testing. I'm pretty sure that's not the case here.
The context of this seems to be directed at the post immediately before, where you’re saying “no AI checker”. Not entirely clear.

This is an understandable, but unfounded, fear
You imply others are guessing.
The word you won’t use is “yes”. Yes you did, yes you have. Just say it, for God’s sake. …
I have not communicated with her directly about the AI checker. I have communicated with her about it indirectly. I suspect that she wouldn't tell me even if I was right because what I think I know relies on a kind of black box design. It works because nobody understands it. If they did, the checker could be circumvented.
Just because you ask a question, fair or not, doesn't mean you're entitled to an answer.
Was I talking to you?
You're posting on a public forum. That means you're talking to anyone who cares to read it.
I am.

Your “this is it” was more clear. You’re implying no AI checker.
And the black box checker, your words, contradicts the implications you made earlier, steering people away from believing there is an AI checker. Both sets of things you said cannot be true at the same time.
So fair question number two: which is it? Pick only one and stick with it please.
And aren’t you stealing someone’s job?
I appreciate this reply. Nobody is asking you to reveal secrets, but just acknowledging you have some secret information is progress.

No. I was referring to the fear that Lit is feeding every submission into an LLM.
I am.
Please have patience with me. It is a difficult subject to have a conversation about. I believe I possess privileged information. Discussing it directly risks exposing things I didn’t mean to expose, and that puts the entire site at risk. I am walking an extremely fine line saying anything at all, but I feel it’s worth it to try to A) help most of us feel comfortable submitting, and B) help the fringe rejections.
This is a conflation of terms that I am guilty of. Yes, I believe Lit has a system it uses to detect AI. No, I believe it is not GPTZero, or any of the LLM-derived services commonly described as "AI Detectors". I have tried to be consistent, but I am constantly bumping up against elements of Lit's system that I know I can't explain and that makes it hard to grapple with or have a direct conversation around.
I first discovered what I discovered in June of 2024. About a month later I talked privately with several AH members (SimonDoom, EB, and AwkwardlySet) in a kind of panic, because I didn't know how to talk about it all but felt compelled to all the same, for the same reasons listed above. There's a lot of confusion around the subject, a lot of understandable fear, and I am trying to help allay that.
I am not the perfect spokeswoman, but I am doing my best.
This is the way.

I’ll reiterate my own position. I don’t like AI. It is built on theft. Don’t use it. Don’t support it. Scroll past AI summaries when you use internet search. Turn off autocomplete in your word processor, your email editor, and everywhere else.
It’s not about being backward. It’s about standing up against theft, including of your ideas.
I was involved in AI many years ago, when it basically didn’t work. In earlier times it was based on the pipe dream of AI products having their own intelligence, not on stealing everything off of the internet. It didn’t work. Now it kinda sorta works in certain ways, but only in ways that rely on training data taken without permission.
@Matt4224, which pieces of those novels did you use?

I tried to reproduce your report that 1984 was flagged as AI. I took three different segments of the text from the Gutenberg copy and put them through three different detectors. They all registered as 100% human.
I tried with "For Whom the Bell Tolls," also using the Gutenberg text. I used three different sections of text from different parts of the story and put them through ZeroGPT. The results were 100% human in every case. Ernest would probably curse me for checking.
You don't have to turn on those assistants.

I recently had a story rejected for using AI. It's a false positive: i.e., I didn't use AI.
However, it's a bit hard to suggest one isn't using an app that has AI assistance these days. Word has CoPilot, Apple Pages has Apple Intelligence, and Grammarly is not uncommon. For years, many writing apps have been offering simple grammatical assistance for better sentence structure.
I worry about assistance creep (my term for it). Now, with AI so ingrained, does that make suggestions a risk? I think if it just addresses sentence mechanics, like spelling or missed punctuation, that seems OK. But what if it suggests your sentence is too informal? Or tells you to sound more casual? Seems like a grey area to me, because the idea is still my own, but the influence may be impactful. Does that make it collaborative? This is where my discomfort lies.
BTW, when I got the rejection recently, I oscillated between "do you think it is that well written?" and "wait, is my writing only as good as some AI?" I guess I didn't know what to think, except that I didn't use one.