ChatGPT

If you look at the site's robots.txt, they disallow crawling by everyone.

https://literotica.com/robots.txt
I may be misunderstanding something (this isn't my field), but I've looked at that file before and I didn't see anything in it that would prevent a crawler from scraping stories. There are a bunch of "Disallow" rules that apply to all crawlers, but they seem pretty specific - mostly aimed at stopping crawlers from voting on, commenting on, or reporting stories.

Curiously, there are a couple of lines that block crawling of one particular story (which has been pulled by the author anyway):
Code:
Disallow: /stories/showstory.php?id=494486
Disallow: /stories/showstory.php?id=494486&page=2

But I don't see anything that would cover stories in general.

There are a bunch of category-level disallows like this:
Code:
Disallow: /s/celebrity-stories-c

But I'm not sure those URLs with a "-c" at the end actually exist on the site. They don't match the actual URLs for the category hubs, which look more like:
Code:
/beta/c/lesbian-sex-stories
I'd guess the "-c" URLs might have been temporary development/test pages that they didn't want indexed.
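
One thing worth noting, as far as I understand the format: robots.txt rules are plain prefix matches against the URL path, so "Disallow: /s/celebrity-stories-c" would also block any story URL whose slug happens to start with that string. If anyone wants to test what the file actually permits, here's a quick sketch using Python's standard-library parser - the "MyCrawler" user agent is just a placeholder, and the URLs are the ones discussed above:
Code:
# Check literotica.com's robots.txt with Python's stdlib parser.
# "MyCrawler" is a placeholder user agent; it still matches the "User-agent: *" rules.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://literotica.com/robots.txt")
rp.read()  # fetch and parse the live file

urls = [
    "https://literotica.com/stories/showstory.php?id=494486",  # explicitly disallowed
    "https://literotica.com/s/celebrity-stories-c",            # "-c" category disallow
    "https://literotica.com/beta/c/lesbian-sex-stories",       # actual category hub URL
]
for url in urls:
    verdict = "allowed" if rp.can_fetch("MyCrawler", url) else "disallowed"
    print(f"{verdict:10} {url}")
If the rules quoted above are still in place, the first two should come back disallowed and the third allowed.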

Compare that to this other site's robots.txt, which has a whole lot more * wildcards in it.

Is there something I've missed?
 
Firstly, it's coming and no power on Earth can stop it; this much is true.

But you write from an astoundingly narrow perspective: March 2023, to be precise, and from the perspective of AI 1.0. And yes, currently the AI doesn't have "the ability to write a worthwhile story, not above the level of a nine-year-old, anyway." But AI 2.0 will write to the level of a 12-year-old, and 3.0 will get us up to a 15-year-old - you see where this is going?

Is this a statement of knowledge or a statement of belief?

So much of the prognostication I see about "AI" seems based on extrapolation: the assumption that because the technology has improved, it will continue improving at the same rate for the next few decades. Progress rarely works that way.

In particular, equating stages in AI progress to stages of human development is a trap. No "AI" technology has a skill profile remotely similar to any human's, these systems don't follow the developmental milestones we'd expect from humans, and improvements often come with trade-offs - for instance, GPT is an impressive conversationalist, but in some areas it's less capable than my 1980s-vintage pocket calculator.
 
Is this a statement of knowledge or a statement of belief?

So much of the prognostication I see about "AI" seems based on extrapolation: the assumption that because the technology has improved, it will continue improving at the same rate for the next few decades. Progress rarely works that way.

In particular, equating stages in AI progress to stages of human development is a trap. No "AI" technology has a skill profile remotely similar to any human's, these systems don't follow the developmental milestones we'd expect from humans, and improvements often come with trade-offs - for instance, GPT is an impressive conversationalist, but in some areas it's less capable than my 1980s-vintage pocket calculator.

It is a prediction, of course. If I had an infallible crystal ball I'd be a lot richer than I currently am. However, I've just done a quick Google search for an article I noticed earlier this week about a couple of researchers who used AI specifically to fool humans, to show how easy it was to do even with current capabilities. I didn't find that particular article, but I did find a wealth of other articles and links to studies that show exactly that. The AI is already here and doing it. Now, if we think about that logically, we have to question why, once we're all aware of how easy it is to get the AI to create written work, anyone would pay an actual person to do it when they can simply get the AI to do it for them. It might not be perfect right now. It might not be perfect 20 years from now. But if it does the job 95% as well as a human, for free, then that's the option that people will choose most of the time, and to hell with the other 5%.
 
But if it does the job 95% as well as a human, for free, then that's the option that people will choose most of the time, and to hell with the other 5%.
But right now, it doesn't get close. Every piece of AI fiction I've read (examples given here, as well as other sources) declares itself within half a dozen sentences: repetition, cliché, vague and imprecise sentences, dumb and nonsensical phrases. No threat for a while, I'd say.
 
But right now, it doesn't get close. Every piece of AI fiction I've read (examples given here, as well as other sources) declares itself within half a dozen sentences: repetition, cliché, vague and imprecise sentences, dumb and nonsensical phrases. No threat for a while, I'd say.

Perhaps, but it's got to the point that Clarkesworld closed to all sci-fi submissions three weeks ago because they'd been flooded with bot-written stories. That doesn't mean those stories are any good (their submissions are up something like fourfold year on year in the first quarter of this year), but it does mean that authors who aren't using a bot are just as cut off from a highly reputable outlet for their work as the chancers who are. As yet there doesn't seem to be a clear resolution, and Neil Clarke has gone on record that he wants to support actual, real authors. But will he be able to, or will those authors just have to bite the bullet if they want to keep publishing?

This from the NY Post:

Clarke refused to explain how he was picking out the new AI-made stories, saying he had “no intention of helping these people become less likely to be caught” and that the magazine’s process will “have to change” to deal with the new tools at the disposal of fraudsters.

He suggested a number of changes that might occur, such as limited submission windows or even asking authors to provide more contact information – or even outright refusing a submission that has a masked or VPN-covered location. Still, he viewed such options as either “short-lived” or said they could make it too difficult for new authors, especially those in international markets.

“If the field can’t find a way to address this situation, things will begin to break,” Clarke concluded.
 
It is a prediction, of course. If I had an infallible crystal ball I'd be a lot richer than I currently am. However, I've just done a quick Google search for an article I noticed earlier this week about a couple of researchers who used AI specifically to fool humans, to show how easy it was to do even with current capabilities. I didn't find that particular article, but I did find a wealth of other articles and links to studies that show exactly that.

All the examples I've seen have been short samples. In my experience, the longer a conversation goes on, the more the limitations of the AI become visible, and those limitations are not the kind that are going to be fixed by fine-tuning the current model. Can it write a sex scene that some readers will get off on? Undoubtedly, and I expect there'll be readers for that. But longer works where internal consistency matters, that's going to require something different before full automation is possible.

The AI is already here and doing it. Now, if we think about that logically, we have to question why, once we're all aware of how easy it is to get the AI to create written work, anyone would pay an actual person to do it when they can simply get the AI to do it for them.

By and large, we don't pay actual people for it now. Probably 99% of the fiction being written today is being written not because somebody's willing to buy it, but for the love of writing. Even professional authors are usually making vastly less money than they could in some other job (probably one with a better healthcare plan, for the US ones).

Paid outlets do exist, but nothing I've seen from GPT comes close to what would actually be required to get published somewhere like Clarkesworld. As I understand it, the issue there isn't that AI-written stories risk being accepted, just that the volume of crappy AI submissions was effectively DoSing their editorial process. And Clarkesworld's 12c/word isn't lucrative (a 5,000-word sale nets $600); you can probably make more money using the tech to run scams.

Cat Valente has a good essay on it. There are a few bits I'd nitpick, but:
Sure, a new player has entered the game. It can and will compete. Companies may prefer it to we, the yachtless who need benefits. It can and will get ugly. But oh my god, people won’t stop writing or creating or performing, and they won’t stop coding, either, not the ones who love it and are passionate about it, certainly not because AOL Instant Essayist can, too. That shit is compulsive. From hands on a cave wall to these words on this screen, we cannot stop trying to express ourselves, and if one thing about our dumbfuck monkey dance on this ball of salt will never change, it’s that. The unending plaintive scream of people trying to connect, to be heard, to be seen, to be known, to take what is inside us and make it manifest on the outside.

It’s just how our code is written.

Ain’t no program ever made that can cure human narcissism and make any one of us shut the fuck up. Thank god. This beast can only exist alongside us, it cannot replace us, because we will keep doing frivolous shit, and we will need others to see it to feel alive.
 