Bamagan
Ultima Proxima
- Joined: Jul 3, 2023
- Posts: 3,957
Two pennies to contribute.
If I were tasked with checking submissions for AI-generated text, I would use one or more check-bots to give things a first pass. What I would expect to get from that is simply a set of flags for individual sentences, because that's about all the AI can do right now. It writes sentences fairly well, and can sometimes manage a paragraph or two, and that's the extent of its value for analyzing other people's work as well. All it can do is look at a sentence, compare it to whatever its model says a sentence 'should' look like, and return some kind of probabilistic answer about whether it would write that sentence more or less the same way.
So, I'd be presented with a text where the bot has highlighted sentences that it loves (5/5 would write this again). It's not infallible, obviously, but it's a place to start a more comprehensive analysis. If there are very few flagged sentences, or they're sporadic, I'd probably wave it through with minimal further review. If they're all over the place, and whole paragraphs are highlighted, I might reject it with minimal further review. I'm expecting a fair number of false positives, of course, so perhaps I look at some dialogue or a passage of 10-15 paragraphs to see if I get any clear intuitive impressions, because at that point it's just a question of whether my instincts outweigh the other considerations. And for any works where, say, 30%-60% of the sentences are flagged, it's almost entirely instincts, with perhaps a dash of historical evidence to weigh if the author is a known quantity.
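That triage could be sketched roughly like this. The detector itself is hypothetical here (assume `check_bot` returns True for each sentence it flags as likely AI-written), and the thresholds are purely illustrative stand-ins for the "sporadic" and "all over the place" cases described above:

```python
# Sketch of a flag-density triage for a sentence-level AI check-bot.
# check_bot is assumed to be a callable: sentence -> bool (flagged or not).
# Thresholds are illustrative, not a recommendation.

def triage(sentences, check_bot, pass_below=0.10, reject_above=0.60):
    """Return 'pass', 'reject', or 'manual review' based on flag density."""
    if not sentences:
        return "pass"
    flags = [check_bot(s) for s in sentences]
    ratio = sum(flags) / len(flags)
    if ratio < pass_below:
        return "pass"            # few or sporadic flags: wave it through
    if ratio > reject_above:
        return "reject"          # whole paragraphs flagged: reject outright
    return "manual review"       # middle band: instincts plus author history
```

The middle band is the point made above: no threshold decides it, so the tool just routes the work to a human.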