Interesting AI story in the Telegraph

The main takeaway for me in all this, though, is actually the self-defeating nature of the authors who've used AI to create their works. Because, whilst that might get them a level of support now, how long before the penny drops and 'every Tom, Dick, and Harry' realises that they don't need them? It just goes to show that just because you can author a work, it doesn't make you smart.
I guess that depends on why you write. If you write to show people how great an artist you are with words, then yes. Using AI defeats the purpose and can feel like cheating. If, however, you consider the true value of your writing to be the thought and ideas behind your words, then the AI can be an invaluable tool in helping you convey those thoughts and ideas to others better.

I see the artistic aspect of the act of writing. The wordsmith is a craftsman just like many others, and great skill can yield admirable results. For me, however, the words are mostly just tools to convey the thought, and it is the thought I care about more, so I don't mind those words being "artificial", as long as the thoughts underneath are original.

Coincidentally, those thoughts will for a long time be something the AIs cannot really reproduce. They can try and emulate, but not with true originality; thus the human factor will remain in writing.
 
Reading this thread is... 'interesting'...

A couple of basic things first, though. Firstly, synths and cameras didn't remove human involvement from the creative process. These tools cannot produce anything on their own. A synth doesn't have a button to press which composes a concerto; a camera doesn't set itself up, choose the lighting, background, etc., and come back with a Pulitzer Prize winner. An AI writing bot is qualitatively different in that it will come back with an entire piece of work from a single prompt - 'write me a story about pirates, with a cheeky parrot and a fight with a sea demon.' And bam! There is something, for better or worse. To use prompts well will involve a level of skill, true, but nowhere near the skill required to actually produce the body of work that the AI will now produce. The end result is that the human increasingly becomes far more removed from the creative process than they ever did with synths or cameras.

Thus, to compare synths or cameras or trains or calculators to an AI writing bot is to compare apples to oranges.

Secondly, yes, this is here now and it isn't going away. But there are massive questions. Hell, I have to deal with it on a different level, with students who now can't understand why they need to write anything when ChatGPT can do it for them, not understanding the purpose in the slightest.

I remember only a year ago discussing this on another thread and predicting the very things that are now being celebrated, by and large, in this thread. Then, I was largely pooh-poohed and told that AI won't replace writers, don't be so negative and panic-mongering, etc. Well, it gives me no comfort to have seemingly been more right than wrong. And look, if you're happy with AI taking over creative writing from humans, then great, good for you. Because that is what will happen. Maybe not this year or the next, but soon, human-authored work will become like the bespoke option in the clothing industry - an expensive little corner that supplies a vanishingly small number of customers.


You have some interesting points. It makes me think of how people and society in general feel about music - there are lots of different tastes and ways people enjoy it.

For parties and dancing, some prefer recorded music, others live, others like mix DJs. I generally prefer live music - usually folks getting together with voice and acoustic instruments - yet most of the music I listen to is in the car while driving.

I can deeply appreciate prerecorded music, but few things are more attractive than a beautiful performance by a beautiful person in the same room.

Authors attract fans; will an AI ever have adoring admirers? šŸ¤”
 
The literal, exact same thing was said about photography in the mid-1800s when it was introduced.

And yet painting is still a vibrant art form. It's not used much for portraiture anymore, because photography does it better, faster.

Painting was freed from the requirement to reproduce reality, and became a medium to explore abstract and non-representational themes.

Photography did not kill painting. Painters responded by moving to things that photography can't do. Now they live side by side; neither is dead.

And continuing the case of photography, every time an advancement was made that made it easier to practice, it was hailed as the end of photography. Those advancements just made it easier for people with less means to do it.

Comparing this to writing: before home computers and the internet, if you wanted to get published, you had to go through the industry and write what they wanted to publish. Now, with access for everyone, sites like Literotica exist where the barrier to entry is practically zero (compared to the past). Now there is a new technology that lowers that barrier even more, and people feel threatened that their work means less now.

What this really means is that more people are able to express themselves. What made your efforts special yesterday now seems worth less today, because any Tom, Dick, and Harry can do this with less effort.

The mistake is thinking that because it's easier, your effort means less. You still have to come up with the ideas and choose what matches your idea best.
Let's just agree to disagree on this "creativity" aspect of this problem. But I am curious to hear your opinion about the other aspect of using AI, the ethical one. AI doesn't truly create anything. It just gives you back the content it had been trained on. This isn't a contested point here; it is a well-known and universally accepted fact as far as I know.

I am going to draw a parallel with something we encounter from time to time in AH threads - some user wishing to continue the story of some author without having the consent of that same author. We then often advise that, if they can't obtain that consent, they write a new story "inspired" by the original one. The user has to change the plot and setting significantly in order to avoid being a plagiarist. That is not the case with using AI right now. AI is literally just spewing back the content it had been trained on. AI isn't just "inspired" by that content, and it doesn't create anything new, because it is not designed to do that. It is literally giving you back the verbatim content of what some other author or authors wrote, in a way. So the content it produces is plagiarized content by definition.

Now, how do you feel about that? Are we suddenly okay with plagiarizing too, just because it is impossible to figure out who the plagiarized author is?
 
AI is literally just spewing back the content it had been trained on.

This is not true.

While AI has been fed existing works and much of what it produces is based on that material, it is not limited to simple regurgitation.

Human artists of all stripes are influenced by what they have seen and experienced. It’s very rare that someone creates something entirely unique. Music is a perfect example with all of the theory, rhythm and meter. Art and music are taught by learning existing techniques and by studying the classics. The difference with AI is that it has an indelible memory and can mimic with perfection.

AI is often criticized for one of its greatest strengths - the ability and propensity to hallucinate. It is notorious for making mistakes. It is also programmed to ā€œself learnā€ based on feedback from these mistakes.

The fastest way AI receives feedback may be through Internet ā€œclickbaitā€. It is constantly learning what people are attracted to, and what keeps their attention.

What is Art? If you define it simply as human expression, then sure, AI will never be able to produce real art. But if you consider Art to be something that can inspire thought and emotion in humans, then AI is already producing novel art, and commerce is probably the single biggest driving force.
 
This is not true.

While AI has been fed existing works and much of what it produces is based on that material, it is not limited to simple regurgitation.
Actually, that is exactly what AI does. It works by "predicting the next word" in a sentence. ChatGPT writes a story by using math. It calculates, with blazing speed of course, the probability of a certain word being used next to another word in a specific sentence, all of it based on the AI parameter set, obtained from training the AI on terabytes of material written by humans. Maybe you feel there is some actual creation involved in the process. I don't. I see it as a statistical approach of a sort, but not a creative one.
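
For anyone curious what "predicting the next word" actually looks like, here is a toy sketch of the idea - purely illustrative, with a made-up training sentence, and nothing like the neural network a real LLM uses - that turns a bit of text into next-word statistics and then samples from them:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of "predicting the next word" -- not how ChatGPT is actually built
# (a real LLM uses a neural network with billions of learned parameters), but the same
# general idea: turn training text into next-word statistics, then sample word by word.

training_text = "the parrot laughed and the parrot flew and the sea demon roared"

# Count which word tends to follow which (a simple bigram table).
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word` in training."""
    followers = counts.get(word)
    if not followers:          # dead end: this word was never followed by anything
        return None
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
story = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    story.append(word)
print(" ".join(story))
```

The toy version can only parrot its one training sentence; the argument here is essentially about whether scaling this idea up to terabytes of text and billions of parameters produces anything you could call creative.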
 
Actually, that is exactly what AI does. It works by "predicting the next word" in a sentence. ChatGPT writes a story by using math. It calculates, with blazing speed of course, the probability of a certain word being used next to another word in a specific sentence, all of it based on the AI parameter set, obtained from training the AI on terabytes of material written by humans. Maybe you feel there is some actual creation involved in the process. I don't. I see it as a statistical approach of a sort, but not a creative one.

So what role does feedback play in its development?

What effect do mistakes have on the feedback it receives?
 
Let's just agree to disagree on this "creativity" aspect of this problem. But I am curious to hear your opinion about the other aspect of using AI, the ethical one. AI doesn't truly create anything.
It's a tool that outputs based on how you prompt it. It's not inherently creative. The creativity is from the person operating it.
Now, how do you feel about that? Are we suddenly okay with plagiarizing too, just because it is impossible to figure out who the plagiarized author is?
It's not plagiarizing anything any more than you are when you learn and write in the style of another author.

The advice to be a better writer is to read, read, read. You learn how other writers do it, and you emulate what works. LLMs just do it much faster and with a wider library.
 
So what role does feedback play in its development?

What effect do mistakes have on the feedback it receives?
Now that is a question that would require in-depth knowledge of its algorithm. I am assuming that the algorithm backtracks and then at some point uses the second-best prediction instead of the best prediction it previously used. Also, by providing feedback you add more data to the initial prompt, thus making the algorithm branch differently. This is, of course, based only on my general knowledge of programming. What is generally known is that these algorithms themselves aren't spectacularly complicated. They don't usually have more than, say, a thousand lines of code. It is all in the parameter set, which is generated in a long and very costly training process, and that is what makes all the difference in this case.
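
To make that last point concrete, here is a rough, hypothetical sketch of the kind of loop being described. The `params` table and `next_word_probabilities` function are made-up stand-ins (in a real model the parameters are billions of learned weights and the lookup is a neural-network forward pass), but it shows how small the generation code is compared to the parameter set, and how feedback simply becomes extra context:

```python
import random

# Rough, hypothetical sketch: the sampling loop itself is only a few lines of code,
# while everything the model "knows" lives in its parameters. Here `params` is a tiny
# hand-written probability table standing in for billions of learned weights; this is
# not a real library API.

params = {
    "pirates": {"sailed": 0.6, "fought": 0.4},
    "sailed": {"away": 0.7, "home": 0.3},
    "fought": {"bravely": 1.0},
}

def next_word_probabilities(context, params):
    """Return a {word: probability} distribution for the next word, given the context."""
    return params.get(context[-1], {"<end>": 1.0})

def generate(prompt, params, max_words=10, temperature=1.0):
    context = prompt.split()
    for _ in range(max_words):
        probs = next_word_probabilities(context, params)
        # Temperature reshapes the distribution: a low value sticks to the top
        # prediction, a higher value lets "second-best" words through more often.
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        word = random.choices(list(probs), weights=weights)[0]
        if word == "<end>":
            break
        context.append(word)
    return " ".join(context)

print(generate("pirates", params))

# Feedback in a chat session is just more context: the next pass sees a longer prompt,
# so the probabilities -- and therefore the branching -- come out differently.
print(generate("pirates fought", params))
```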
 
Right.

While the owner of Literotica does not allow AI-generated stories on the site, they themselves use AI as a tool to screen submissions.

To me this is evidence that their objection to AI content is more about aesthetics and/or legality than any ethical concerns about lost employment opportunities.
Think you've missed the point. I think the site is otherwise AI agnostic.

What they care about is filtering out specific content, including anything that another entity could claim ownership over. That's it.

If submissions began to spike after AI tools became widely available, they had to figure out a process to handle the increase with the same amount of resources. My bet was that they always ran it through a plagiarism checker before they started scanning for AI.

Then every single story has to be human reviewed to ensure there isn't any bestiality, underage activity, etc. No program can scan for that because machines suck at deciphering innuendo, oblique references, etc.

So given all the above, the new norm is that if your work can't pass an AI pre-screen then no one is going to read/review it for all the other things. Just like a resume without a cover letter will get discarded because it didn't fit the submission requirements.

Until the number of new submissions drops off and there is a noticeable lack of new submissions that can clear the first hurdle of an AI check, there is zero incentive for the site to change. Instead I hear complaints about the time it takes to get through the submission queue being too long.

Hilly, the owner of CBGB, only allowed bands that played original songs because he couldn't afford to pay the ASCAP fees for bands playing covers. So a lot of bands never played CBGB, but everyone who did had a unique voice/sound and exposed people to a lot of original material.

The rule had nothing to do with hating/discriminating against cover songs. It was about finding a viable model that worked for that club and kept the doors open.

The rejection of AI flagged content at this site is likely an effort to keep the doors open to encourage people to keep writing erotic content that doesn't have a lot of other outlets and serve it up to an audience that was built out of thin air because of the way they structured submissions/approvals/publishing.

There have to be other outlets for AI assisted or generated content, just like there were plenty of venues for bands to play covers.

I can understand that people are frustrated by rejections. I can sympathize with their hurt/anger if they've never used any program to edit their content for grammar, etc. What I don't understand is the vitriol and incitement to leave this site, on an authors forum, by those who think they should be able to use AI if it's accepted in business, education or some other venue.

Does hand-carved include using chainsaws? I don't know. If I rented a studio space to hand carve, I wouldn't want others using chainsaws in the next studio if it was against the site rules. Just as I wouldn't want my own submission delayed while someone had to read 100 other submissions that were produced/edited by AI. Neither do I want to pay a submission fee to cover a more expensive process for reviewing work here.

More than anything, I'd hate for this place to get shut down because they were sued by an AI Software company for infringement. Or because bad content slipped through because they couldn't keep pace with the rate of submissions and trusted a program to scan for banned content.

If/when my own submissions start getting rejected, I will have to decide how much effort I'm willing to put into editing to pass through the submission process. Until that point I am firmly in the camp that says the site has a right to do whatever it feels it has to in order to keep the lights on and the content flowing.

I'm becoming increasingly hostile to those who want to subvert that process, or discourage participation here because they can't have their own way.
 
Me:
Write something intended to provoke inspiration and a sense of wellbeing.

ChatGPT:
In the vast canvas of life, each day is a brushstroke. Embrace the palette of possibilities, paint your journey with resilience, and let the vibrant hues of hope create a masterpiece of inspiration.

Me:
What existing works inspired your previous response?

ChatGPT:
My response is a creative synthesis inspired by various expressions of motivational literature and positive affirmations. It's a unique composition generated based on patterns and styles observed in such content.

Me:
Write something with a similar goal based only on feedback you have received for other novel content.

ChatGPT:
In weaving the tapestry of encouragement, your feedback has been the loom, threading positivity into the fabric of my responses. Your insights are the spark, igniting a constant evolution towards crafting words that resonate with inspiration and wellbeing.

šŸ¤·ā€ā™€ļø
🤣
 
Think you've missed the point. I think the site is otherwise AI agnostic.

What they care about is filtering out specific content, including anything that another entity could claim ownership over. That's it.

If submissions began to spike after AI tools became widely available, they had to figure out a process to handle the increase with the same amount of resources. My bet was that they always ran it through a plagiarism checker before they started scanning for AI.

Then every single story has to be human reviewed to ensure there isn't any bestiality, underage activity, etc. No program can scan for that because machines suck at deciphering innuendo, oblique references, etc.

So given all the above, the new norm is that if your work can't pass an AI pre-screen then no one is going to read/review it for all the other things. Just like a resume without a cover letter will get discarded because it didn't fit the submission requirements.

Until the number of new submissions drops off and there is a noticeable lack of new submissions that can clear the first hurdle of an AI check, there is zero incentive for the site to change. Instead I hear complaints about the time it takes to get through the submission queue being too long.

Hilly, the owner of CBGB, only allowed bands that played original songs because he couldn't afford to pay the ASCAP fees for bands playing covers. So a lot of bands never played CBGB, but everyone who did had a unique voice/sound and exposed people to a lot of original material.

The rule had nothing to do with hating/discriminating against cover songs. It was about finding a viable model that worked for that club and kept the doors open.

The rejection of AI flagged content at this site is likely an effort to keep the doors open to encourage people to keep writing erotic content that doesn't have a lot of other outlets and serve it up to an audience that was built out of thin air because of the way they structured submissions/approvals/publishing.

There have to be other outlets for AI assisted or generated content, just like there were plenty of venues for bands to play covers.

I can understand that people are frustrated by rejections. I can sympathize with their hurt/anger if they've never used any program to edit their content for grammar, etc. What I don't understand is the vitriol and incitement to leave this site, on an authors forum, by those who think they should be able to use AI if it's accepted in business, education or some other venue.

Does hand-carved include using chainsaws? I don't know. If I rented a studio space to hand carve, I wouldn't want others using chainsaws in the next studio if it was against the site rules. Just as I wouldn't want my own submission delayed while someone had to read 100 other submissions that were produced/edited by AI. Neither do I want to pay a submission fee to cover a more expensive process for reviewing work here.

More than anything, I'd hate for this place to get shut down because they were sued by an AI Software company for infringement. Or because bad content slipped through because they couldn't keep pace with the rate of submissions and trusted a program to scan for banned content.

If/when my own submissions start getting rejected, I will have to decide how much effort I'm willing to put into editing to pass through the submission process. Until that point I am firmly in the camp that says the site has a right to do whatever it feels it has to in order to keep the lights on and the content flowing.

I'm becoming increasingly hostile to those who want to subvert that process, or discourage participation here because they can't have their own way.

What you wrote above seems like a further extrapolation of what I said, no?

ā€œTo me this is evidence that their objection to AI content is more about aesthetics and/or legalityā€
 
There is a belief among some that the human skills required to develop the prompts which would allow a generative AI application to properly "create" a piece of literature should not be ignored. They view effective and efficient prompt writing as a creative process of its own.
I think this is kind of true right now - there are even courses teaching people how to write good prompts.
I see those a bit like the 'computer literacy' classes that the generations before mine sometimes need to keep up with developments.

Our children are the first 'digitally native' generation - they take constant wifi and online availability for granted.
Their children will be 'AI native' and never even consider it controversial at all.
 
Immediately upon this 'author's' admission of fakery, the award should have been revoked, right there on stage at that very moment, and any physical items and/or certificates taken back from her.
 
What you wrote above seems like a further extrapolation of what I said, no?

ā€œTo me this is evidence that their objection to AI content is more about aesthetics and/or legalityā€

In many ways, I agree it is. And thank you for advancing the conversation in a way that I hoped to expand and clarify. I felt like giving equal weight to those two theories of the site's objection to AI didn't fairly represent what is likely driving the current policy on the site.

I think legality definitely applies, but I really don't think it's an aesthetic judgement at all. Leaving that out there as an equally plausible explanation keeps the door open to the Luddite-themed objections being raised here.

The legality aspects are much more compelling than it feels you give them credit for in your and/or statement. I also don't believe that the use of AI tools to screen for AI is hypocritical at all. I don't know if that's the point you were trying to imply, but it read that way to me. I think they are an imperfect tool being employed to navigate an otherwise unmanageable situation.

I don't believe there's an option to discard the filtering, manage the increased level of submissions, and screen for all of the content that all of us agreed was not allowed on the site when we submitted.

I keep trying to make this case in a way that asks others to view the submission process objectively, in hopes that it helps them see beyond how they feel about technology, rejection, creativity or exclusivity.

Apologies if I misunderstood your post or wasn't clearer in my response.
 
If it means more learners of English can produce comprehensible essays so you can establish whether they've understood a subject and can continue to teach more, and they can learn English faster by seeing example sentences about their field, I'm all for it.
If only that was what's happening... my experience is that students are using ChatGPT (and others) simply as a shortcut, a way of getting something else to do the work so they don't have to. I see this when they have to complete an assignment in class and, on reviewing the results, I am presented with barely comprehensible crap, in contrast to the lovely essays I am sent when the task has been set as homework. And then, when questioned on the same subject as the homework, the majority of the students are pretty clueless.
 
In many ways, I agree it is. And thank you for advancing the conversation in a way that I hoped to expand and clarify. I felt like giving equal weight to those two theories of the site's objection to AI didn't fairly represent what is likely driving the current policy on the site.

I think legality definitely applies, but I really don't think it's an aesthetic judgement at all. Leaving that out there as an equally plausible explanation keeps the door open to the Luddite-themed objections being raised here.

The legality aspects are much more compelling than it feels you give them credit for in your and/or statement. I also don't believe that the use of AI tools to screen for AI is hypocritical at all. I don't know if that's the point you were trying to imply, but it read that way to me. I think they are an imperfect tool being employed to navigate an otherwise unmanageable situation.

I don't believe there's an option to discard the filtering, manage the increased level of submissions, and screen for all of the content that all of us agreed was not allowed on the site when we submitted.

I keep trying to make this case in a way that asks others to view the submission process objectively, in hopes that it helps them see beyond how they feel about technology, rejection, creativity or exclusivity.

Apologies if I misunderstood your post or wasn't clearer in my response.

No worries, nothing to apologize for.

My intention was to point out how the site owner is not completely opposed to the use of AI technology to reduce the amount of human interaction needed to manage the site, so they don’t seem to have an all-encompassing problem with it.

Therefore the fact that they don’t want to host AI content is probably due to legal issues or aesthetics.

Unlike Amazon, this site doesn’t directly monetize the content, so remaining dedicated to human authors could be an aesthetic choice as well as a business decision - a choice to remain unique and to continue to support writers rather than simply providing erotic content.

I think the criticism some here have, accusing the manager of being apathetic to authors who are mistakenly flagged, is shortsighted. They aren’t considering what it takes to manage the huge number of daily submissions, nor are they considering how AI tech is still evolving and this is part of the development, not the fixed end result.

The development of AI writing faces a serious dilemma. On one hand it is being asked to produce content that is indistinguishable from human writing; on the other hand it is being asked to flawlessly differentiate between AI and human content.

šŸ¤·ā€ā™€ļø
 
If only that was what's happening... my experience is that students are using ChatGPT (and others) simply as a shortcut, a way of getting something else to do the work so they don't have to. I see this when they have to complete an assignment in class and, on reviewing the results, I am presented with barely comprehensible crap, in contrast to the lovely essays I am sent when the task has been set as homework. And then, when questioned on the same subject as the homework, the majority of the students are pretty clueless.

Does this hurt you more than it hurts the students? Are you losing sleep over it?

It seems like having periodic tests without the availability of digital devices could go a long way to testing the real knowledge and skills of students. Of course, then you’d have to grade and handle hard copies instead of digital ones…. Oops! šŸ˜…
 
Does this hurt you more than it hurts the students? Are you losing sleep over it?

It seems like having periodic tests without the availability of digital devices could go a long way to testing the real knowledge and skills of students. Of course, then you’d have to grade and handle hard copies instead of digital ones…. Oops! šŸ˜…
I wonder if teaching students how to use it and what its shortcomings are would be helpful?

Kind of like when teachers taught students how to use Wikipedia and what it wasn't good for.
 
I've been a behind all my life. No sense getting head now.
 
See how the placement of one little letter can change the meaning of a whole thought?
 
There is no reasonable way to fight this.
I agree. That's why I think Lit. should create an AI category to encourage public disclosure as the norm, so readers can decide whether or not they want to read something not written by a human, and to what degree.
 
I'm sure there will be a subculture of literature longing for 100% human written work (though I doubt there will ever be a way to truly prove that, other than "pinky swear" by the author).
I agree, and so I will repeat my earlier post.
That's why I think Lit. should create an AI category to encourage public disclosure as the norm, so readers can decide whether or not they want to read something not written by a human, and to what degree.
 
I agree. That's why I think Lit. should create an AI category to encourage public disclosure as the norm, so readers can decide whether or not they want to read something not written by a human, and to what degree.
I think that would not satisfy the haters. They would demand there be automatic checks to ENSURE no AI work can get into human-written categories, and we know how reliable that would be. Even if it were implemented, we would constantly be facing complaints from self-proclaimed style cops, calling people cheats just because their style is not up to whatever standards they hold dear.

Heck, people are called cheaters in the online space all the time just because they are too good, and the average Joe cannot even imagine that to be possible. Why would it be different here?

The only way forward is to let this flame out and weather the storm until people get used to the new tech.
 