Discussions on Generative AI and originality

Brutal_One

This will be a forever debate I imagine but my 2 cents worth.

It is argued there are only seven ‘types of stories’. They are as follows:


1. Overcoming the monster
2. Rags to riches
3. The quest
4. Voyage and return
5. Comedy
6. Tragedy
7. Rebirth


I guess from the Literotica point of view I could list the categories instead (a different list, of course, but one that categorises stories more by the genres readers may like).

But even with generative AI, where does originality start from? Surely it is always the writer, irrespective of what he is writing. Sure, you can use tools like ChatGPT, and many probably do, but doesn’t that require an input, even if it’s only an outline?

All writers can ask themselves a question: what story am I going to write?
Of course you can get writer’s block, but that’s a different problem.

I get that we probably already have, and will continue to get, generative AI stories - maybe some posted on Literotica.

But that only holds until the advent of conscious AI - and I accept that will almost certainly happen in the near future.

Until then, I am going to assert that it is you, the writers: whichever tool you currently use or plan to use in the future, it is ‘your’ creativity that is being deployed, your originality, whatever elements you use to ‘craft’ your story.

I don’t think that will ever change. Of course, good and successful authors can leverage one overall idea or genre and create a compelling series, or start a new series with a different set of ‘original ideas’.

I don’t really see that changing in the future, irrespective of what disruption generative AI may bring.

Discuss.

Brutal One
 
I don't know. I'm not an AI expert, or even a well-versed layperson, but I'm inclined to think this is one of those things we cannot predict. AI will be a "singularity" event, past which we cannot possibly see right now.

I suspect humans will find a way to continue to be meaningfully creative, because they will want to, and humans eventually find ways of doing what they want to do. But I have no idea how it will happen.
 
Question: Will we even know, if and when it happens?

We don't even have a full understanding of what consciousness is, do we?
I guess it depends on the personality of the emergent AI.

One could argue that the proliferation of bullshit (TikTok, Facebook, X, reality TV) could be Skynet creating a protective camouflage of artificial inanity.

:cool:
 

If this is so, then I would say the intelligence has been rather effectively disguised.
 
If you want an idea of what the artificial intelligence landscape may be like in the future, read Ted Chiang's "The Lifecycle of Software Objects". The anthology it's in, "Exhalation", is excellent as well; I recommend it to everyone.

They'll need training just like humans do, and, like your children, what you teach them influences who they are.
 
I do make the assumption it will happen, but everyone will form their own view, of course. I think we can state that consciousness is an entity being aware of its own existence as an independent entity: one that can think based on the inputs it receives from its sensors (humans use their five senses to understand the world), that can consider what possibilities exist given that information and its own understanding, coupled of course with knowledge learned from experience. Original thinking arises from consciousness. The AI may indeed end up with a different way of ‘thinking’, but its outcomes should be the same.

Brutal One
 

I think we will end up with a mix of specialist systems.

But I don't think they will ever attain the non-linear flashes of insight that seem to be a hallmark of our species. An AI will be brilliant at iterating variations of existing things, hunting efficiencies along desired traits. I don't think an AI will ever create something entirely new a priori; I think that will remain a human-specific trait.
 
I don't think the AI we're developing now will directly evolve or develop into conscious or sentient AI. But I do think we may eventually get there by simulating neurology or something similar.
 
A.I., at least what passes for A.I. currently, is very good at recreating patterns. So it can parse content from its wealth of data and generate something that follows the norms. I think it will get better and better at generating believable, and maybe even interesting, content. But it will be formulaic. A lot of people like formulaic content. But the best art is made by breaking those rules and coming up with new approaches that haven't been done before, or haven't been done in quite the way the artist is doing it. I can't see A.I. learning how to do that any time soon.
 
AI isn't actually intelligent. It's the sum of its programming, with built-in growth to learn from the input it receives. But it cannot alter its programming and can't grow beyond its limitations like a human can. At least not yet. It cannot create without guidance, and what it does create isn't creative. The program scours sources, pulls and plucks this and that, and rewrites them. That is not creating. It pulls characters out of thin air and inserts them into scenes (because that's what the passage it found did) without the benefit of any explanation of where they come from. I'll admit it, I did a trial of several AI writing tools, one specifically for fiction. They suck, their writing sucks, and I won't waste my time rewriting the garbage they produce.

Their storytelling is illogical, takes strange turns, and never produces something you can just plug into a story. The grammar is weak and the sentences are long and rambling (after all, they charge you for the words they create).
 
[Image attachment: "The future of the human race."]
The biggest mistake they made was changing that plot point.

Originally, humans were used as processing power for the machines. But the studio didn't think that audiences would get it, so they changed it to making humans an energy source. Which is beyond stupid, especially since the machines had nuclear plants. But anyway.
 
They could've stopped with the first film, as far as I'm concerned. Some tales don't need sequels.
 
Keep in mind how AI works, especially LLMs (Large Language Models). After they learn grammar and sentence structure, they work by putting in the most likely word to come next in a sentence, according to their instructions, until they finish each sentence and their output. They use the likeliest words to form an idea. The more capable models can analyze entire sentences with the context of everything they read or write, even understanding deeper meanings like figurative speech and metaphors. This is done by "fine-tuning": constant examples and input from humans interacting with the model during its training, feeding it examples of figurative speech, logic (math, context, logic puzzles for the AI to solve), explanations of the data, and so on, until it starts consistently giving the right answers.
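A toy sketch of that "most likely next word" loop in Python, with a hand-made bigram table standing in for the learned probabilities (a real LLM scores its entire vocabulary with a neural network at every step, but the weighted-sampling idea is the same):

```python
import random

# Hypothetical next-word probabilities, standing in for what an LLM
# actually learns: a network that scores every word in its vocabulary
# given all the text so far.
BIGRAMS = {
    "the":   {"cat": 0.5, "dog": 0.3, "story": 0.2},
    "cat":   {"sat": 0.6, "ran": 0.4},
    "dog":   {"sat": 0.3, "ran": 0.7},
    "story": {"ends": 1.0},
}

def generate(start: str, max_words: int = 6) -> str:
    words = [start]
    while len(words) < max_words and words[-1] in BIGRAMS:
        options = BIGRAMS[words[-1]]
        # Pick the next word in proportion to its probability - this
        # weighted pick is the whole "likeliest next word" mechanism.
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat"
```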

While LLM AI is capable of creativity after enough training and fine-tuning, in the end it is just a very smart sentence generator/completer. It can only really follow and utilize the logic behind the data it has already been given. All of its creativity is based on that data and how it can mix it into a cohesive answer, so how "well" it can create content is entirely down to the quality and effort of its training and fine-tuning. It can't really generate anything outside of its data and training. Also, ChatGPT with its alignment training (making sure its outputs are clean and ethical to the standards of OpenAI, or Anthropic for Claude) sterilizes its word usage and dumbs it down for story-writing purposes, and despite its ability to remember details from inputs and outputs, it does so only so well and will tend to veer off parts of the context it's supposed to remember for a story, for example.

And that's not mentioning the context token limits these AIs have (2,000-4,000 for most, though 8,000 is now starting to be commonplace). Each token is about 4 characters long, and the closer you get to hitting that token limit, the more mistakes the model will make in following or remembering the details and logic of a plot. The token count is taken up not just by the output: user input and instructions also count against the context limit.
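A back-of-envelope version of that budget, using the ~4 characters per token figure above (real tokenizers give exact counts; these numbers are only the rough heuristic described here):

```python
# Rough token budgeting with the ~4 characters/token heuristic.
# Input, instructions, AND the expected reply all count against
# the same context window.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_in_context(story: str, instructions: str,
                    reply_budget: int, context_limit: int = 4096) -> bool:
    used = estimate_tokens(story) + estimate_tokens(instructions) + reply_budget
    return used <= context_limit

story_so_far = "x" * 12_000           # ~12,000 characters of story so far
print(estimate_tokens(story_so_far))  # ~3000 tokens
print(fits_in_context(story_so_far, "Continue the story.", reply_budget=1500))
# False: ~3000 + ~4 + 1500 > 4096, so the model starts dropping details
```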

Some have 32k (GPT-4 32k) or even up to 100k (Claude), which is very impressive, but they will still make mistakes in their analysis or output of plot details, especially if the request is for a plot with a lot of details, or if you're trying to continue a story without it forgetting important context. And an AI like GPT will be affected by its alignment and ethics training (i.e. everyone lives happily ever after, everyone is nice, positivity from everyone is great, and it avoids harmful content. Conflicts are easily resolved by everyone getting along, automatically understanding each other, and being super positive; fighting is wrong, and getting along is super important, you guys! Let's be G-rated, everyone! Also, sex is naughty, naughty.)

I know AI is rapidly evolving and able to do more and make fewer mistakes than before, but when it comes to true creativity for the plot's sake, it's usually better to do it yourself. AI is better at giving you ideas to play with, or at analyzing and making suggestions about prose (telling you whether your story is fluid, your prose is good, and your characters have depth; ChatGPT and Claude are surprisingly good at that kind of analysis and constructive criticism), if you have writer's block. But if you need that, input from a friend, family, or some sort of online writers' group is probably still better.
 
This story concerns graphic generative AI rather than text, and I don't know if any text equivalent is possible, but it's an interesting development: artists who object to their work being used to train AI can "poison" images with tiny changes that are imperceptible to a human but will sabotage an AI trained on them.

https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
I'm having a really hard time understanding how this is supposed to work. If the changes are invisible, how do they affect anything? Is it just inaccurate metadata, like a description that says 'cat' on a picture of a dog? I doubt that's it but if so, just ignore metadata on scraped images. If it's changes to pixels, but you can't see them, I don't get it. It's still a picture that looks like a dog, even if the pixels are tweaked somehow. But apparently it works? So it must be doing something with the pixels that can't be seen. Okay, if the changes can't be seen, just photograph the work and then scrape that photo. Heck, can probably just run it through a filter or save it as a jpeg, either should alter the pixels.
 
I haven't read it, but it's probably steganographic: a method of embedding data in the image itself, not visible to the eye but readable by computer.
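For anyone unfamiliar with the term, here's a minimal illustration of steganographic embedding: hide message bits in the least-significant bit of each pixel value, a change of at most 1 level in 255 (purely to show the concept; the Glaze page quoted further down says their method is actually not steganographic):

```python
# Hide bits in the least-significant bit of each pixel value: the image
# changes by at most 1 level out of 255 (invisible to the eye), but the
# bits are trivially readable by a program that knows where to look.

def embed(pixels: list[int], bits: list[int]) -> list[int]:
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the lowest bit
    return out

def extract(pixels: list[int], n_bits: int) -> list[int]:
    return [p & 1 for p in pixels[:n_bits]]

gray_patch = [200, 201, 199, 200, 202, 198, 200, 201]  # a flat gray area
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(gray_patch, secret)
print(stego)                        # [201, 200, 199, 201, 202, 198, 201, 200]
print(extract(stego, 8) == secret)  # True
```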

Edit: it's not clear exactly how it works from the article. But they mention taking it open source, and as soon as they do that, it will be reverse engineered.

There is also the aspect that you have to use the poisoning as a service, so they are angling for customers to pay them.

We'll see if this is something that really works. For models that are already built, it'll have little impact, because they take months to generate. And the size of the models means it will require large amounts of poisoned images to have an impact on new models.
 
I'm having a really hard time understanding how this is supposed to work. If the changes are invisible, how do they affect anything? Is it just inaccurate metadata, like a description that says 'cat' on a picture of a dog? I doubt that's it but if so, just ignore metadata on scraped images. If it's changes to pixels, but you can't see them, I don't get it. It's still a picture that looks like a dog, even if the pixels are tweaked somehow. But apparently it works? So it must be doing something with the pixels that can't be seen.

I haven't seen specifics of how this particular method works, but I think the underlying idea is that the things an image-recognition algorithm uses to classify an image aren't necessarily the same as the things a human will focus on.

For instance, a human glancing at this would probably just think "cat". (M.C. Escher's "White Cat I, 1919" to be precise.)
[image: M.C. Escher's "White Cat I"]

But if I load it into Gimp and turn the brightness way down, and the contrast way up, I get this:

[image: the same picture with brightness turned down and contrast turned up, revealing a faint repeating bird pattern]

Here, I've overlaid White Cat I with another Escher, "Two Birds", with opacity set to 0.005. If you look closely at the original you might be able to see a hint of it in the black area below the signature, but on the white fur it's pretty much invisible to a human eye without processing.
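(If you'd rather script it than click through Gimp, here's an equivalent sketch with Pillow; the file names are placeholders, Image.blend does the 0.005-opacity overlay, and the brightness/contrast step is roughly what reveals the hidden layer.)

```python
from PIL import Image, ImageEnhance

# Placeholder file names - any two images will do for the experiment.
cat = Image.open("white_cat.png").convert("RGB")
birds = Image.open("two_birds.png").convert("RGB").resize(cat.size)

# Overlay at 0.5% opacity: each output pixel is 0.995*cat + 0.005*birds,
# a shift of at most ~1.3 of 255 levels - below what the eye can see,
# though 8-bit rounding means only the strongest edges survive at all.
poisoned = Image.blend(cat, birds, alpha=0.005)
poisoned.save("white_cat_poisoned.png")

# Brightness way down, contrast way up, to make the overlay visible.
revealed = ImageEnhance.Brightness(poisoned).enhance(0.1)
revealed = ImageEnhance.Contrast(revealed).enhance(20.0)
revealed.save("revealed.png")
```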

An image-recognition algorithm processing the first image will "see" these birds. The colour difference between "bird" and "bird outline" is small, too small for us to see in most parts of the image, but it's sharp and consistent and that's an easy thing to pick up for an algorithm that has access to the exact colour values. These bird shapes are very consistent and they're repeated many times in this image.

Meanwhile, if I'm training it on a bunch of other images, the algorithm "knows" that brightness can vary greatly from one image to another, and between different parts of the same image, and that often what matters is local variations. So if I feed it my modified version of White Cat I, tagged as "cat", along with a bunch of similarly-modified images where the cats are different but the birds remain the same, there's a risk that it'll end up learning that "cat" looks like this:

[image: the "Two Birds" pattern on its own]

What Glaze/Nightshade is doing won't be anything as simple as just overlaying a faint image of dogs on an image of a cat, but I suspect it's based on a similar idea.

Okay, if the changes can't be seen, just photograph the work and then scrape that photo. Heck, can probably just run it through a filter or save it as a jpeg, either should alter the pixels.

Depending on how it's been altered, you can do things like that. For my particular example, you could probably get rid of the birds without losing much of the cat by compressing colours (anything close to black becomes perfect black, anything close to white becomes perfect white) or by applying Gaussian blur with a radius wide enough to remove the definition on those birds.
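Sketched with Pillow again (the file name carries over from the earlier example, and the thresholds and blur radius are guesses you'd have to tune per image):

```python
from PIL import Image, ImageFilter

img = Image.open("white_cat_poisoned.png").convert("L")  # grayscale

# Treatment 1: compress colours - anything close to black becomes pure
# black, anything close to white becomes pure white, wiping out the
# one-level bird imprint. The 64/192 cutoffs are arbitrary guesses.
crushed = img.point(lambda p: 0 if p < 64 else (255 if p > 192 else p))

# Treatment 2: Gaussian blur wide enough to smear out the sharp,
# consistent edges the recognizer keys on.
blurred = img.filter(ImageFilter.GaussianBlur(radius=4))

crushed.save("cleaned_threshold.png")
blurred.save("cleaned_blur.png")
```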

But if you don't know whether or how the image has been altered, figuring out exactly what treatment you need to apply to purge that "poison" without destroying the aspects that are important to a human is a hard problem.

Consider this picture. I made this by creating a black background with a white frame (which isn't going to show very well against a white forum screen but I promise it's there) and then overlaying Two Birds with opacity set to 0.005 again, just like I did on White Cat I.

[image: a mostly black rectangle with a thin white frame, with the same faint birds overlaid]
Most humans would label this as "birds" (edit: at least if you turn up the brightness a bit). Just like the first image, there's a very faint bird image overlaid on a strong black-and-white image. But in the first one, the black-and-white image was something we care about, and in the second it's not.

A filter that will purge the subliminal birds from the modified White Cat I has an excellent chance of also purging the non-subliminal birds from the latter image. Part of the method discussed for Glaze/Nightshade involves changing the details of the "poison" from one picture to the next.

Ultimately, the goal for this isn't to make it impossible to train AI on modified images; it's to make it enough of a nuisance that it's easier to go do something else, like restricting the training set to public-domain material. The appeal of generative AI isn't that it does stuff humans can't, but that it piggybacks on work humans already did, for cheaper than hiring a human. The more effort you have to put into bypassing those protections, the less appealing it is as an option.

(Plus, if you do put work into bypassing such protections, that's pretty strong evidence that you intended to train on material created by somebody who didn't permit that use. I have no idea how the legal side of all this is going to shake out, but it's possible that some court somewhere will decide that generative AI infringes on the copyrights of the originals; if it does, that kind of evidence might be inconvenient.)

I haven't read it, but it's probably steganographic: a method of embedding data in the image itself, not visible to the eye but readable by computer.

Edit: it's not clear exactly how it works from the article. But they mention taking it open source, and as soon as they do that, it will be reverse engineered.

There is also the aspect that you have to use the poisoning as a service, so they are angling for customers to pay them.

Granted, promises can be broken, but this seems to indicate otherwise:

https://glaze.cs.uchicago.edu/what-is-glaze.html

"There is no business model, no subscription, no hidden fees, no startup. We made Glaze free for anyone to use, but not open source, so as to raise the bar for adaptive attacks. Glaze is designed from day 1 to run without a network, so there is no data (or art) sent back to us or anyone else. The only communication Glaze has with our servers is periodic "pings" to ask if there are new software updates."

According to that page, it's not steganographic, but they're a little vague about exactly what it is.

(I note that the first article says they're considering making Nightshade OS, but this one says Glaze isn't OS. IDK how those reconcile - maybe Nightshade specifically is going to be OS but the rest of Glaze won't be?)

We'll see if this is something that really works. For models that are already built, it'll have little impact, because they take months to generate. And the size of the models means it will require large amounts of poisoned images to have an impact on new models.

Perhaps. From the MIT article, they tested on Stable Diffusion XL, which has over a hundred million images in its training data. Just fifty poisoned images were enough to seriously fuck up its ability to draw "dog" and three hundred were enough to turn "dog" into cats. But I don't know the details of whether they were able to weight the poisoned images more heavily than the original training images.
 
I think we will end up with a mix of specialist systems.

But I don't think they will ever attain the non-linear flashes of insight that seem to be a hallmark of our species. An AI will be brilliant at iterating variations of existing things, hunting efficiencies along desired traits. I don't think an AI will ever create something entirely new a priori; I think that will remain a human-specific trait.
No AI system will ever replicate, duplicate, or simulate your girl crush inclinations, Wanda. The world remains safely in orbit, while you do. :).
 
Don’t be afraid to ask AI questions. The answers will tell you a lot about the considerations it uses to craft its responses, as well as providing concepts that may not have occurred to you before.

It’s probably more helpful for research than for creative input.


Alex: Describe common types of story arcs in fiction writing.

ChatGPT:
Common types of story arcs in fiction writing include:

1. The Hero's Journey: This classic arc follows a hero who faces challenges, goes on an adventure, and ultimately grows and transforms through their experiences.

2. Three-Act Structure: It consists of setup, confrontation, and resolution, providing a clear beginning, middle, and end to the story.

3. The Coming of Age: This arc revolves around a character's growth and self-discovery as they transition from youth to adulthood.

4. Tragedy: A story arc where the protagonist's journey ends in a negative outcome, often due to their own flaws or external circumstances.

5. Quest Narrative: The characters embark on a journey or quest to achieve a specific goal, facing obstacles and gaining insights along the way.

6. Rags to Riches: The protagonist starts in a lowly position and rises to success or wealth, often involving personal development.

7. Rebirth: The protagonist undergoes a transformation or redemption, emerging as a better person by the end of the story.

8. Love Story: Focuses on the development of a romantic relationship, with various obstacles and conflicts to overcome.

9. Overcoming the Monster: The protagonist faces a formidable adversary or threat that they must conquer, often involving physical or metaphorical monsters.

10. Man vs. Society: This story arc explores the conflict between an individual and society, highlighting themes of rebellion, conformity, or injustice.

These are just a few examples, and many stories blend elements from multiple arcs to create unique narratives.

—————

Alex: What are some other examples?

ChatGPT:
Certainly, here are some additional examples of story arcs in fiction writing:

11. Comedy of Errors: Typically involves mistaken identities, misunderstandings, and humorous situations, leading to a lighthearted resolution.

12. Fish out of Water: A character is placed in an unfamiliar environment, and the story focuses on their struggles and eventual adaptation.

13. Survival: Characters must contend with extreme circumstances, such as being stranded in the wilderness, and the arc revolves around their efforts to survive.

14. Revenge: The protagonist seeks retribution for a past wrong, driving the plot through their pursuit of justice.

15. Mystery: A story arc centered around solving a puzzle or uncovering hidden truths, often involving detectives or amateur sleuths.

16. Political Intrigue: Involves power struggles, manipulation, and complex political maneuvering within a society or organization.

17. Time Travel: Characters navigate through different time periods, dealing with the consequences of their actions in the past or future.

18. Parallel Universes: The narrative explores alternate realities or dimensions, where characters interact with different versions of themselves or the world.

19. Supernatural: Involves paranormal elements, like ghosts, vampires, or magic, and how they impact the characters and plot.

20. Road Trip: Characters embark on a journey, both physical and emotional, often leading to self-discovery and personal growth.

These arcs provide a wide range of storytelling possibilities, allowing authors to craft diverse and engaging narratives.
 