AI Allegations Thread

Every generation has said that about the generation coming after them, but now it actually seems to have some merit.
Of course every generation thinks the next one is lazier and softer, and I didn't put too much stock into it, but I think the birth of the internet itself gave it some merit, and AI makes it full-blown truth. My grandkids are not being taught how to write, because... who writes anymore? By the time they're old enough to drive, it will be self-driving cars.

But it's the ability to reason, think, and problem-solve that AI is removing, as well as any type of creativity. "Look, AI wrote a story for me, I'm a writer! Look, it made this picture, I'm an artist!"

There's an ad for Canva on YouTube where the woman pulls up some program and it runs an office event for her, and it ends with "What will you create next?" My wife, who is nowhere near as cynical as I am, said, "You didn't create anything, for fuck's sake, the program did it."
 
Also, I have to ask: who is actually using AIs to write entire stories? I know that was a concern in Hollywood, which is why there was a strike, but I still can't imagine someone would use one to write short stories on websites such as Literotica. Does anybody know of any examples, on Lit or anywhere else, where people are actually using AI to write stories?



....
The problem is pretty widespread. Out in the real world, it's a problem for many of your pay markets (you can search the internet for "submissions closed or paused due to AI"; here is just one of the various articles):

https://www.theverge.com/2023/2/25/...terary-magazines-clarkesworld-science-fiction

We'll eventually get it sorted out, but it'll be a bumpy road for a while.

My personal opinion - within a few years AI will get good enough that it will be virtually impossible to tell an AI writer from a human one, but that is still a ways off. Hopefully by then the AI providers will incorporate a watermark, similar to DMARC on the email side, to aid publishers, both real-world and online.
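For what it's worth, one family of watermark proposals works at the token level rather than as visible metadata: the generator is biased toward a pseudo-random "green list" of words, and a detector later tests for that bias. A toy Python sketch of the idea - the hashing scheme and vocabulary here are purely illustrative, not any provider's actual method:

```python
import hashlib

def greenlist(prev_token: str, vocab: list[str]) -> set[str]:
    """Toy 'green list': hash the previous token together with each
    candidate word to pseudo-randomly mark roughly half the vocabulary
    as watermark-favored."""
    marked = set()
    for word in vocab:
        h = hashlib.sha256(f"{prev_token}|{word}".encode()).digest()
        if h[0] % 2 == 0:
            marked.add(word)
    return marked

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in the green list keyed by their
    predecessor. Unwatermarked text should hover near 0.5; text from
    a watermarking generator runs much higher."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in greenlist(prev, vocab)
    )
    return hits / max(1, len(tokens) - 1)
```

A publisher-side checker would just compute `green_fraction` over a submission and flag anything far above chance. Whether real providers ever converge on a shared scheme like this is exactly the open question.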
 
Also, I have to ask: who is actually using AIs to write entire stories? I know that was a concern in Hollywood, which is why there was a strike, but I still can't imagine someone would use one to write short stories on websites such as Literotica. Does anybody know of any examples, on Lit or anywhere else, where people are actually using AI to write stories?



....

I think it's much, much more likely to happen with work for sale, rather than work offered free. An AI could, theoretically, churn out a bunch of mediocre content that could be sold cheaply, and I doubt it would take a shortcutter all that long to figure out a workflow that would maximize sales while minimizing effort.

On a free site, it makes MUCH less sense - unless a writer is so hopelessly lacking in self-confidence as to believe an AI could write better than they can, or unless it's an innocent mistake involving AI-generated sentence/clause substitution during the course of a robotic "editorial" pass.

I've never liked these kinds of programs. Even spellcheck rankles when it intrusively suggests things to me. I disable it whenever possible.
 
Every generation has said that about the generation coming after them, but now it actually seems to have some merit.
With the advancement of technology, how long can that expression be refuted? People already can't drive manuals; now they're having trouble even focusing on driving. First the cars were parallel parking for people; soon they'll be doing all the work. Who's touched an encyclopedia or visited a library for research when it's all on a phone? I don't need books on writing when I can Google it, or ask folks here. There's already pre-written schoolwork one can get online, and by now I'm sure it doesn't even need to be paid for. "Write this, so I don't need to think" isn't that far away. People are already saying "draw this, so I don't need to learn Photoshop."
 
Of course every generation thinks the next one is lazier and softer, and I didn't put too much stock into it, but I think the birth of the internet itself gave it some merit, and AI makes it full-blown truth. My grandkids are not being taught how to write, because... who writes anymore? By the time they're old enough to drive, it will be self-driving cars.

But it's the ability to reason, think, and problem-solve that AI is removing, as well as any type of creativity. "Look, AI wrote a story for me, I'm a writer! Look, it made this picture, I'm an artist!"

There's an ad for Canva on YouTube where the woman pulls up some program and it runs an office event for her, and it ends with "What will you create next?" My wife, who is nowhere near as cynical as I am, said, "You didn't create anything, for fuck's sake, the program did it."
"I'm an artist!"
No... you're just really, really good at Boolean searches. I admit, after trying AI art (and I really don't like calling it art), getting what I wanted was pretty hard.
 
With the advancement of technology, how long can that expression be refuted? People already can't drive manuals; now they're having trouble even focusing on driving. First the cars were parallel parking for people; soon they'll be doing all the work. Who's touched an encyclopedia or visited a library for research when it's all on a phone? I don't need books on writing when I can Google it, or ask folks here. There's already pre-written schoolwork one can get online, and by now I'm sure it doesn't even need to be paid for. "Write this, so I don't need to think" isn't that far away. People are already saying "draw this, so I don't need to learn Photoshop."
This isn't new. Books on writing were deprecated by blog posts on writing; the information is the same within one or two degrees of freedom for modernization; it's simply the mechanism of delivery that has changed. You still need to be able to read it (or listen to it, if it's narrated) in order to apply it.

A car is a mechanical contraption that (for most people) is purchased as a (one could say mandatory) method of saving time. The difference between a manual and an automatic transmission makes zero difference to the primary function of a car - that is, getting you from A to B in less than a day. (I grew up driving manual cars, I owned an Audi S3, and I currently drive a computerised starship because it's convenient.) Manual cars are becoming an anachronism - like piston-engined aircraft, there will always be some around, but the dominant paradigm has moved on.

School's ultimate function is to teach you to think. Leaving aside the argument as to whether that's what it actually accomplishes, when you come out of school you are supposed to possess a baseline ability in mathematics and language and some exposure to art, science, geography and history - enough to be aware that the subjects exist, which for a subset of people is often enough to nudge them on the correct path. Some people never achieve that, and for some people that's simply not something they see as important. The presence or absence of online schoolwork makes no difference to these people bar permitting them a slightly easier path through a mandated set of hoops.

At a fundamental level, I believe, AI systems don't replace people. They change the power structures and accessibility structures for some forms of creative or deductive work, and the jury is out on whether that's a beneficial thing or not. There may be some roles where AI can replace significant parts of a person's job, but IMO a smart person will leverage that in new and creative ways - say what you want, but we're a very creative species.

A big one in my industry is AI-generated code for algorithms or system integrations. It's very cool to be able to ask an AI assistant, "Hey, write me some code that integrates with this system," and get a wodge of three hundred lines that I didn't have to write. Except that now, as an experienced engineer, the very first thing I do is vet it. An AI can use formal proofs to show that a piece of code is safe, but it has NO FUCKING IDEA about the rest of the system the code will be used in - for that you need a human mind... for now ;)
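To put a concrete face on that vetting step: part of the human review can be mechanized by wrapping the generated helper in property checks before it touches anything real. A toy Python sketch - `merge_sorted` stands in for a hypothetical AI-generated snippet, and the checks are the human's contribution:

```python
import random

# Stand-in for a hypothetical AI-generated helper: merge two sorted lists.
def merge_sorted(a: list[int], b: list[int]) -> list[int]:
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def vet(fn) -> bool:
    """The reviewer's property checks: the result must be sorted,
    preserve every element, and survive empty inputs."""
    rng = random.Random(42)  # fixed seed so the check is repeatable
    for _ in range(100):
        a = sorted(rng.choices(range(10), k=rng.randrange(6)))
        b = sorted(rng.choices(range(10), k=rng.randrange(6)))
        assert fn(a, b) == sorted(a + b), (a, b)
    assert fn([], []) == []
    return True
```

Of course, this only checks the function in isolation; whether it fits the rest of the system is exactly the part that still needs the human mind.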

My personal opinion - within a few years AI will get good enough that it will be virtually impossible to tell an AI writer from a human writer,
My personal opinion is that there will be an entirely new industry spawned - people who are very good at spotting AI-generated content and flagging it.

At this point the genie is out of the bottle, and the best we can do is adapt.

oh. Shit. I was ranting again.
 
you mean witch hunters?

Only there really are witches...

That said outside academia people are going to rapidly stop caring.
A well written story is a well written story and the vast majority of consumers don't care who wrote it. They aren't going to boycott something that was AI written if it's a quality product.
The current gatekeepers are fighting a losing battle.
 
This isn't new. Books on writing were deprecated by blog posts on writing; the information is the same within one or two degrees of freedom for modernization; it's simply the mechanism of delivery that has changed. You still need to be able to read it (or listen to it, if it's narrated) in order to apply it.

A car is a mechanical contraption that (for most people) is purchased as a (one could say mandatory) method of saving time. The difference between a manual and an automatic transmission makes zero difference to the primary function of a car - that is, getting you from A to B in less than a day. (I grew up driving manual cars, I owned an Audi S3, and I currently drive a computerised starship because it's convenient.) Manual cars are becoming an anachronism - like piston-engined aircraft, there will always be some around, but the dominant paradigm has moved on.

School's ultimate function is to teach you to think. Leaving aside the argument as to whether that's what it actually accomplishes, when you come out of school you are supposed to possess a baseline ability in mathematics and language and some exposure to art, science, geography and history - enough to be aware that the subjects exist, which for a subset of people is often enough to nudge them on the correct path. Some people never achieve that, and for some people that's simply not something they see as important. The presence or absence of online schoolwork makes no difference to these people bar permitting them a slightly easier path through a mandated set of hoops.

At a fundamental level, I believe, AI systems don't replace people. They change the power structures and accessibility structures for some forms of creative or deductive work, and the jury is out on whether that's a beneficial thing or not. There may be some roles where AI can replace significant parts of a person's job, but IMO a smart person will leverage that in new and creative ways - say what you want, but we're a very creative species.

A big one in my industry is AI-generated code for algorithms or system integrations. It's very cool to be able to ask an AI assistant, "Hey, write me some code that integrates with this system," and get a wodge of three hundred lines that I didn't have to write. Except that now, as an experienced engineer, the very first thing I do is vet it. An AI can use formal proofs to show that a piece of code is safe, but it has NO FUCKING IDEA about the rest of the system the code will be used in - for that you need a human mind... for now ;)


My personal opinion is that there will be an entirely new industry spawned - people who are very good at spotting AI-generated content and flagging it.

At this point the genie is out of the bottle, and the best we can do is adapt.

oh. Shit. I was ranting again.
You make good points on many things, but I believe you are also oversimplifying the problem in certain aspects.
The availability of study material is definitely a good thing. I remember having to search for things in encyclopedias and being unsatisfied with the results. It used to be a tedious process. Now it is all a thousand times easier. But...
The concerns about studying and learning are justified IMO. You said it yourself, apart from teaching some basic and maybe some advanced knowledge too, the primary concern of school is to teach students how to think. Teach them to think critically about the world, teach them to apply the scientific method, and teach them how to learn. In my own teaching experience, students learn best when they are forced to do something by themselves, when they get to the result through trial and error. That usually teaches them how to approach the problem from different sides. It is not the only way of teaching of course, but I am just giving an example.
My point is that few students will put time and effort into doing the work and solving a problem if the solution and all the work are already available online or by making a query to an AI. That is not how they function. So if everything is available easily, how exactly can we teach them to think? How can we force them to apply their wits when all the answers lie one click away?
I know we can't really fight the progress of AI but we should start working on guidelines on how to prevent students from becoming lazy copy-paste drones. Right now, I believe no one has any clue how to do that.

I am also very skeptical about people or tools being able to spot AI-generated content in the future. AI can be taught to use slang, to insert a grammatical error or two, and to apply other methods to fake a human author. This problem needs a proper approach and a prompt reaction, yet we are already behind in developing strategies and guidelines.
 
I think the real impact of AI is years out, and it isn't going to happen at the LLM (large language model) or AGI (artificial general intelligence) level. It's going to be visible to us in a meaningful way in specialized systems: one topic or subject matter containing the sum total of all the knowledge about that topic, with safeguards against bias and hallucination, where you can ask the system a question and be assured of the right answer every time.

Attaining that is fairly simple - create the model, populate the model, train the shit out of it, then lock it down. That's not AI, though. That is just a highly specialized information-retrieval system.

I spent years working on the practical side of AI - how to actually implement it in work environments - and almost without exception it fell short for a business reason: training the AI (which requires a human or humans) is expensive. From a practical standpoint there was no significant ROI. There was a cost shift, but minimal to no cost saving.

Take something relatively simple, like writing press releases for a company's product line. Implementing AI on a practical level meant shifting cost from a content writer to IT and a content reviewer, with no cost saving and often, at the end of it, added cost.

Even now, when you look at the various groups and individuals evangelizing AI (or predicting the doom of all), 99% of them are in the business of owning or selling AI. So, in their own way, they're just the used-car salesmen of the AI world.
 
Take something relatively simple, like writing press releases for a company's product line. Implementing AI on a practical level meant shifting cost from a content writer to IT and a content reviewer, with no cost saving and often, at the end of it, added cost.

Even now, when you look at the various groups and individuals evangelizing AI (or predicting the doom of all), 99% of them are in the business of owning or selling AI. So, in their own way, they're just the used-car salesmen of the AI world.
It's the same with translation, which I encounter in my own work.
For decades, the machine translation companies have been swearing that their products create translations that are nearly as good as human translation. Yet every single time they're compared side by side, it's not even close. But the companies pour so much money into marketing that the public believes it.

The trouble is that MT is great for "gisting": getting a rough idea of what a text says. When you're on holiday and want to read a menu in a foreign language, for instance. It's for information "consumption", not information "production". If you're a lawyer and you send a foreign client a memo or opinion that you've translated using Google Translate or DeepL, your client won't be happy with what they read. And that's entirely beside the IP and confidentiality issues of uploading the information.

I've done some post-editing on DeepL translations, and I can tell you it's a nightmare. It's inconsistent (the same word might be translated in three different ways in four sentences), it tends to skip sentences or paragraphs, and it sometimes includes whole lines of text that have nothing to do with the source.
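As an aside, that inconsistency is the one failure that's easy to check mechanically. Given source/target term pairs pulled from an aligned text (the alignment and term extraction are the hard part, not shown here), a few lines of Python can flag every source term that was rendered more than one way - a toy sketch of the terminology check a post-editor otherwise does by hand:

```python
from collections import defaultdict

def term_consistency(pairs: list[tuple[str, str]]) -> dict[str, list[str]]:
    """pairs: (source_term, translated_term) tuples from an aligned text.
    Returns every source term that was rendered more than one way."""
    renderings = defaultdict(set)
    for src, tgt in pairs:
        # Case-fold so 'Agreement' and 'agreement' count as one term.
        renderings[src.lower()].add(tgt.lower())
    return {s: sorted(t) for s, t in renderings.items() if len(t) > 1}
```

It won't catch skipped sentences or hallucinated lines, of course - for those you still have to read the thing word for word.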

But you'll only see these things when you look at the text word for word. At a glance it all looks fine, and so people buy the claims that it's as good as a human translator's work.

Tip: if you ever need to do business in a foreign language, use a human translator.
 
It's the same with translation, which I encounter in my own work.
For decades, the machine translation companies have been swearing that their products create translations that are nearly as good as human translation. Yet every single time they're compared side by side, it's not even close. But the companies pour so much money into marketing that the public believes it.

The trouble is that MT is great for "gisting": getting a rough idea of what a text says. When you're on holiday and want to read a menu in a foreign language, for instance. It's for information "consumption", not information "production". If you're a lawyer and you send a foreign client a memo or opinion that you've translated using Google Translate or DeepL, your client won't be happy with what they read. And that's entirely beside the IP and confidentiality issues of uploading the information.

I've done some post-editing on DeepL translations, and I can tell you it's a nightmare. It's inconsistent (the same word might be translated in three different ways in four sentences), it tends to skip sentences or paragraphs, and it sometimes includes whole lines of text that have nothing to do with the source.

But you'll only see these things when you look at the text word for word. At a glance it all looks fine, and so people buy the claims that it's as good as a human translator's work.

Tip: if you ever need to do business in a foreign language, use a human translator.
Yeah, that is very similar to my experience. In the example I gave above, the company spent multiple years trying to implement the AI the vendor swore could write. We set up and ran a series of blind tests (written by AI vs. written by a person).

At the end of the day, after hundreds of thousands of dollars and thousands of hours working with the vendor to train the system, a person could still write better copy, hands down. Novice PR people could write better. Experienced PR people could write circles around it.

The human eye, like the human bullshit detector, could pick out the AI content easily once they'd had any experience.
 