Coleoidphilia cover illustration

The series submission page is a relatively new Lit development - new since I got here, I mean. Could I use it for an old, existing series - one that was completed nearly three years ago?
Yes, you can.

I was able to use the new Series process to fix some story sequencing that got cocked up three or four years ago. It took a couple of goes to figure it out, but I now have two Series in the right order, and no duplicates in my story list.

I've also got some Series designated "complete".
 
The 'system' designs itself. It must learn templates, a pattern of organization out of chaos, in the same way as everything else, in parallel with everything else. The only affordance provided is Yes or No.
A human software architect should be able to provide a set of basic human anatomy templates. That's the designer's omission, right from the start.

The "AI" doesn't need to "learn" anything - it can be given a set of basic rules. Like "count the fucking fingers". I can't believe the software designers didn't think of doing that.
 
A human software architect should be able to provide a set of basic human anatomy templates. That's the designer's omission, right from the start.

The "AI" doesn't need to "learn" anything - it can be given a set of basic rules. Like "count the fucking fingers". I can't believe the software designers didn't think of doing that.
Yes. Obviously, not the sharpest knives in the drawer. Imagine what'll happen when they get smart folk working on it.
 
A human software architect should be able to provide a set of basic human anatomy templates. That's the designer's omission, right from the start.

The "AI" doesn't need to "learn" anything - it can be given a set of basic rules.

No, it can't - or at least, not trivially so, not the kind of rules you're talking about.

Like "count the fucking fingers". I can't believe the software designers didn't think of doing that.

Listen to your disbelief. It's not a matter of "nobody on this huge and expensive project thought to do this simple and obvious thing". It's that there is no straightforward way to tell it that rule in terms that it can interpret.
 
No, it can't - or at least, not trivially so, not the kind of rules you're talking about.

Listen to your disbelief. It's not a matter of "nobody on this huge and expensive project thought to do this simple and obvious thing". It's that there is no straightforward way to tell it that rule in terms that it can interpret.
That's being a bit obtuse. I can name dozens of basic drawing anatomy books, how-to-draw guides, books on how to draw figures, all with a set of fundamental rules on human proportion and anatomy. I cannot see why those templates don't sit underneath every render, every image, that's ever drawn by a machine. That's how facial recognition software works.

I'm obviously missing something very fundamental here - these visual tools are "trained" somehow by sampling millions of images. So why doesn't the rendering tool have Drawings 101 embedded? It's a pretty obvious thing to do, surely?

Grammarly has the fundamental rules of grammar embedded. What's the difference? There's a set of human anatomy rules - why aren't they in the algorithm?

Can someone explain that to me? I'm not being bloody minded here, it's how I learned to draw human figures. By following templates, examples, samples. I didn't do it a million times, sure, but at least I can count fingers.

It's not disbelief, it's a complete lack of understanding. Have these software developers never seen a book by Bridgman? Or Leonardo, for goodness sake - he was pretty good at drawing figures. Michelangelo? The Mona Lisa, gee look, she's not cross-eyed.

You do understand my cynicism? These things might be visual, but they're a long way from being art.
 
Are you being obtuse? I can name dozens of basic drawing anatomy books, how-to-draw guides, books on how to draw figures, all with a set of fundamental rules on human proportion and anatomy. I cannot see why those templates don't sit underneath every render, every image, that's ever drawn by a machine. That's how facial recognition software works.
Facial recognition doesn't understand anatomy either. It looks at distances between facial elements. It doesn't know any more about anatomy than SD does. And statistical correlations between features are a lot of what the AI does too - faces tend to look similar for exactly that reason.

Stable Diffusion (and all the other generative programs) are not 3D modeling software.

I'm obviously missing something very fundamental here - these visual tools are "trained" somehow by sampling millions of images. So why doesn't the rendering tool have Drawings 101 embedded? It's a pretty obvious thing to do, surely?

If you trained a model on drawings, it would know how to create drawings.

Grammarly has the fundamental rules of grammar embedded. What's the difference? There's a set of human anatomy rules - why aren't they in the algorithm?
Can Grammarly write a story? It has all the rules of writing. ChatGPT is the same, really: all it knows is the statistical likelihood of the next word in a sequence. It doesn't really understand grammar, but it seems like it does.
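Here's a toy sketch of what "statistical likelihood of the next word" means in practice - a bigram model, absurdly simpler than ChatGPT, but the same basic shape: no grammar rules appear anywhere, only counts of what followed what (the corpus here is made up for illustration):

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text by sampling a statistically likely next word.
# No grammar rules are coded anywhere - only observed word pairs.
corpus = "the cat sat on the mat and the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

word = "the"
sentence = [word]
for _ in range(8):
    if not counts[word]:  # dead end: this word was never followed by anything
        break
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat on the mat and the cat"
```

It often produces something grammatical-looking, purely because the statistics of the corpus happen to encode grammar. It "seems like it understands" for exactly that reason.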

In the case of SD there is an extension that will allow you to make repeatable poses. But the underlying tech has no ability to do that.

Can someone explain that to me? I'm not being bloody minded here, it's how I learned to draw human figures. By following templates, examples, samples. I didn't do it a million times, sure, but at least I can count fingers.

It's not disbelief, it's a complete lack of understanding. Have these software developers never seen a book by Bridgman? Or Leonardo, for goodness sake - he was pretty good at drawing figures. Michelangelo? The Mona Lisa, gee look, she's not cross-eyed.

What you aren't understanding is that the AI is not intelligent. It's neural networks and algorithms. It's doing math, and the results look like what it has seen in its training data.

You do understand my cynicism? These things might be visual, but they're a long way from being art.

That's debatable. This is a tool, like a paint brush or a stylus. Artists use tools to create art.

In my case from this thread, I generated several images from a prompt, and picked one that I felt represented the drawing. I then had it generate several versions of hands, and picked the one that looked the best.

If I was doing more than a quick and dirty illustration, I could have taken it into Photoshop and done fine-tuning there.

It did the work, but I was directing it, and picked what appealed to my eye.
 
Are you being obtuse? I can name dozens of basic drawing anatomy books, how-to-draw guides, books on how to draw figures, all with a set of fundamental rules on human proportion and anatomy. I cannot see why those templates don't sit underneath every render, every image, that's ever drawn by a machine. That's how facial recognition software works.

I'm obviously missing something very fundamental here - these visual tools are "trained" somehow by sampling millions of images. So why doesn't the rendering tool have Drawings 101 embedded? It's a pretty obvious thing to do, surely?

Grammarly has the fundamental rules of grammar embedded. What's the difference? There's a set of human anatomy rules - why aren't they in the algorithm?

Can someone explain that to me? I'm not being bloody minded here, it's how I learned to draw human figures. By following templates, examples, samples. I didn't do it a million times, sure, but at least I can count fingers.

It's not disbelief, it's a complete lack of understanding. Have these software developers never seen a book by Bridgman? Or Leonardo, for goodness sake - he was pretty good at drawing figures. Michelangelo? The Mona Lisa, gee look, she's not cross-eyed.

You do understand my cynicism? These things might be visual, but they're a long way from being art.
I think you may have a very fundamental misperception about how these things work. And no, I probably can't explain it very well. I don't so much understand what it is, as I (think I) understand what it isn't.

Grammarly is a bunch of rules, a complex algorithm for comparing text patterns to predefined rules and structures that the creators of the tool specified.

None of that happened with AI art, as I understand it. It's more like a toddler figuring out how to draw by learning from millions of images. So the toddler draws stuff that it thinks looks like the millions of images. But it doesn't know what any of the images are, or what any of the components or pieces in the images are. It is aware of colors and patterns. It is not programmed with any specific behavior, like "hands have 5 fingers"; furthermore, it is not capable of understanding what fingers are. We would probably have to design an AI software system differently from the ground up to get behavior like "I know what fingers are, and how anatomy works" vs. "I recognize colors and patterns and try to create similar ones."
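To put a number on "it recognizes colors and patterns": below is a deliberately crude numpy caricature of one training step. This is not actual Stable Diffusion code - real systems use deep networks and a diffusion process - but the feedback has the same shape: the loss is a single pixel-level error figure, and nothing in the loop mentions, or could mention, a finger:

```python
import numpy as np

# Crude caricature of one generative-model training step. The "model" is
# a toy linear map; real systems are deep neural networks, but the
# feedback has the same shape: the loss compares raw pixel values.
rng = np.random.default_rng(0)

n_pixels = 16 * 16
weights = rng.normal(size=(n_pixels, n_pixels)) * 0.01

def model(noisy_image):
    return noisy_image @ weights  # predict the clean pixels

training_image = rng.random(n_pixels)        # stand-in for a photo of a hand
noisy = training_image + rng.normal(size=n_pixels)

predicted = model(noisy)
loss = np.mean((predicted - training_image) ** 2)  # pixel-wise error, one number

# Gradient step (closed-form for a linear model): nudge the weights so
# the output pixels land closer to the training pixels.
grad = 2 * np.outer(noisy, predicted - training_image) / n_pixels
weights -= 0.01 * grad

print(f"loss before update: {loss:.4f}")
```

"Hands have five fingers" can only ever enter a system like this implicitly, as a statistical regularity of the pixels - never as an explicit rule.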
 
Yes, you can.

I was able to use the new Series process to fix some story sequencing that got cocked up three or four years ago. It took a couple of goes to figure it out, but I now have two Series in the right order, and no duplicates in my story list.

I've also got some Series designated "complete".
Okay, I'll take a look at it again. Lit's own description of it says that a series can go from "automatic" to "manual" mode if one chooses to do so. However, once done, you can't go back again.

Lit seems to hint that automatic mode might be better because no intervention is needed. Manual gives full control but also full responsibility to the author.

The two series I'm thinking about look okay to me (one is close to three years old) so I don't know if it's worth bothering with them. On one, the last chapter does have a note at the bottom saying that this is the end. It also says that the same plot will start up again in a new series (the timeline will start about two weeks later) but I haven't written that one yet.
 
The two series I'm thinking about look okay to me (one is close to three years old) so I don't know if it's worth bothering with them. On one, the last chapter does have a note at the bottom saying that this is the end. It also says that the same plot will start up again in a new series (the timeline will start about two weeks later) but I haven't written that one yet.
If the Series are properly connected already, then the auto-build is enough - "if it ain't broke, don't fix it" is always good advice.

I think you have to go into Manual mode to designate it "complete" - but whether or not readers pay any attention to that, who knows? I suspect most writers don't even know that tick box exists, let alone use it.
 
That's being a bit obtuse. I can name dozens of basic drawing anatomy books, how-to-draw guides, books on how to draw figures, all with a set of fundamental rules on human proportion and anatomy. I cannot see why those templates don't sit underneath every render, every image, that's ever drawn by a machine. That's how facial recognition software works.

I'm no expert on facial recognition, but to my understanding, this is correct: FR software is programmed to recognise specific features that characterise the human face - eyes, nose, mouth, etc. - and identify their proportions, which then gives a 'fingerprint' used to distinguish the face from others. If you dug into the code, you'd find variables corresponding to human-interpretable concepts like "distance between pupils", "width of mouth", etc. etc. (There'll also be something like a neural net trained to recognise where the eyes are.)
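A toy sketch of that geometric "fingerprint" idea, assuming a landmark detector has already found the key points (the detector itself would be the trained-neural-net part; the coordinates below are invented for illustration):

```python
import math
from itertools import combinations

# Toy face "fingerprint": normalised pairwise distances between detected
# landmarks. A real system gets these coordinates from a trained detector;
# these are made up for illustration.
landmarks = {
    "left_pupil":  (120.0, 80.0),
    "right_pupil": (180.0, 80.0),
    "nose_tip":    (150.0, 120.0),
    "mouth_left":  (130.0, 150.0),
    "mouth_right": (170.0, 150.0),
}

# Normalise by inter-pupil distance, so the same face at a different
# image scale produces (roughly) the same fingerprint.
scale = math.dist(landmarks["left_pupil"], landmarks["right_pupil"])

fingerprint = {
    (a, b): math.dist(landmarks[a], landmarks[b]) / scale
    for a, b in combinations(sorted(landmarks), 2)
}

for pair, d in sorted(fingerprint.items()):
    print(pair, round(d, 3))
```

Every variable there corresponds to a human-interpretable facial concept - "distance between pupils" literally appears in the code.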

This can be quite effective but it also has its limitations. It's very specific to its purpose. A FR system built for identifying humans is not likely to be useful for distinguishing cats, or bugs, or art styles. (Indeed, a lot of the point of FR is filtering out things like differences in lighting, perspective, etc. etc. that are unwanted complications when trying to identify Carmen Sandiego but might be very important to an artist.) Something that depends on producing a mathematical summary of a human face is absolutely useless when trying to distinguish between these two images:

https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Vassily_kandinsky%2C_con_l%27arco_nero%2C_1912.JPG/1280px-Vassily_kandinsky%2C_con_l%27arco_nero%2C_1912.JPG

https://uploads0.wikiart.org/images/jackson-pollock/blue-poles-number-11-1.jpg

Generative AI like StableDiffusion works very differently. It doesn't have that layer of geometric representation. In some ways this is a major weakness - it leads to the kinds of issues we've been discussing - but it also allows it to be far more versatile. You don't need to get an art expert in to teach SD what kinds of features it should be recognising and measuring in order to distinguish a Kandinsky from a Pollock. You just need to feed it a bunch of labelled images of Kandinskys and Pollocks and it will find a set of features that allow it to distinguish between them, and even to draw something at least Kandinsky-esque/Pollock-esque.
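A toy version of that "just feed it labelled images" approach: nobody hand-codes what makes a Kandinsky a Kandinsky; the fitting procedure finds whatever feature weights best separate the labelled examples. Here the "features" are stand-in colour-histogram vectors (synthetic, so the sketch is self-contained) and the "model" is plain logistic regression - a vastly simplified stand-in for what SD's training does at scale:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for colour histograms of labelled paintings; in
# reality these vectors would be computed from actual image files.
rng = np.random.default_rng(1)
kandinsky_like = rng.normal(loc=0.3, scale=0.1, size=(50, 16))
pollock_like   = rng.normal(loc=0.6, scale=0.1, size=(50, 16))

X = np.vstack([kandinsky_like, pollock_like])
y = np.array([0] * 50 + [1] * 50)   # 0 = Kandinsky, 1 = Pollock

# No art expert specifies the discriminating features; the fit finds them.
clf = LogisticRegression().fit(X, y)

new_painting = rng.normal(loc=0.55, scale=0.1, size=(1, 16))
print("Pollock" if clf.predict(new_painting)[0] == 1 else "Kandinsky")
```

Swap the labels for "cat"/"dog" or "photo"/"sketch" and nothing about the code changes - that's the versatility being bought at the cost of any explicit geometric understanding.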

I'm obviously missing something very fundamental here - these visual tools are "trained" somehow by sampling millions of images. So why doesn't the rendering tool have Drawings 101 embedded? It's a pretty obvious thing to do, surely?

Drawings 101 of what?

We can define a syntax for characterising human forms in terms of their shape, and other variables that might be useful for building up something like an Identikit picture. But that syntax is not going to be useful when it comes to drawing anything else, and generalising it even to just the animal kingdom would be a gargantuan project requiring thousands of specialists. The attraction of SD is not that it's perfect in any one area - certainly not for human forms - but that it's versatile and that the human effort required to build it is relatively small, because so much of the work has already been done by people who've put labelled images up on the net where SD can scrape them without permission or recompense.

Grammarly has the fundamental rules of grammar embedded. What's the difference? There's a set of human anatomy rules - why aren't they in the algorithm?

Can someone explain that to me? I'm not being bloody minded here, it's how I learned to draw human figures. By following templates, examples, samples. I didn't do it a million times, sure, but at least I can count fingers.

Now draw an octopus, using only the rules you learned about drawing humans.

You do understand my cynicism? These things might be visual, but they're a long way from being art.

Echoing @alohadave: no, they're not art, they're tools.

Though "like a brush" only goes so far - they're a very complex form of tool that raises a lot of ethical and philosophical questions that brushes usually don't.

We've discussed fanfic before, and there are some parallels to that - something can be a derivative work, where the author has chosen to lean on somebody else's world-building, and still have its own creative merit. That parallel only goes so far, because the fanfic author and audience are usually well aware of just how much is borrowed vs. original, whereas in generative AI the "how much" and "from where?" are much harder to discern. But I'm happy to acknowledge that generative tools can be used in creative, artistic ways, without giving them a free pass on the ethical issues.

None of that happened with AI art, as I understand it. It's more like a toddler figuring out how to draw by learning from millions of images. So the toddler draws stuff that it thinks looks like the millions of images. But it doesn't know what any of the images are, or what any of the components or pieces in the images are. It is aware of colors and patterns. It is not programmed with any specific behavior, like "hands have 5 fingers"; furthermore, it is not capable of understanding what fingers are.

In particular, a toddler that has never seen another human being, or its own fingers, or indeed anything other than the images it's learning from, and has no understanding that things like fingers are especially important to its audience.

We would probably have to design an AI software system differently from the ground up to get behavior like "I know what fingers are, and how anatomy works" vs. "I recognize colors and patterns and try to create similar ones."

There are various approaches one might take - e.g. use some sort of 3-D modelling approach to generate millions of images that do show well-drawn fingers, and add them to the training data, and hope that it learns a bit more about how fingers should look. But anything I can think of would be a tremendous amount of work, not foolproof, and not easily generalisable from "draw a human" to "draw an octopus".
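The shape of that pipeline might look something like the sketch below. To be clear, `render_hand` and `add_to_training_set` are entirely hypothetical stand-ins (for a rigged 3-D model renderer and a dataset writer); nothing here is a real library:

```python
import random

def render_hand(pose_seed: int) -> bytes:
    """Hypothetical: render a rigged 3-D hand model in a random pose."""
    random.seed(pose_seed)
    return bytes(random.randrange(256) for _ in range(64))  # fake image bytes

def add_to_training_set(image: bytes, caption: str) -> None:
    """Hypothetical: append a labelled image to the training corpus."""
    print(f"added {len(image)}-byte image: {caption!r}")

# Every synthetic image is anatomically correct by construction, so the
# statistics of "hands" in the corpus improve - but that's all they do.
for seed in range(3):
    add_to_training_set(render_hand(seed), "a hand with five fingers")
```

Even then the model only sees better statistics, not a rule - which is why I'd call it "not foolproof".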

The object of such technologies is generally to save human labour, or at least move it somewhere out of sight where the workers are cheap and expendable, rather than to bring in a huge expert task force trying to codify every individual area of human knowledge.
 
Drawings 101 of what?
Given the major application people are using it for here, it's the human body - that's the context of my critique. Drawings 101, The Human Body.

I get your point - copying a hundred Kandinskys and a hundred Pollocks requires a completely different learning set.

But it's still copying, averaging, morphing. It's laying a million images over the top of each other, and saying, "There you go, that's the best composite image I can come up with."

At this point, though, I'm still not seeing what you do with that.

I'm still waiting for "the value added", I guess, because today, I'm not seeing it - particularly since the goal seems to be, with human imagery at least, to get it as lifelike as possible. An optical lens already does that. Sure, we've gone from film to digital to capture the information, but the lens is the fundamental thing. Is this a new lens? No, it's not, because it sits on the shoulders of the old lenses (a deliberate analogy, obviously).

Get AI to sort plastic waste and I'll be impressed, but this human imagery stuff doesn't do anything for me - not a single soul has said, "Well, here's the point of it all, in the context of visual art and/or technology."

I probably should come back in a couple of years, when someone produces the equivalent of albumen or collodion prints. Then I'll be more interested.

Carry on :).
 
Given the major application people are using it for here, it's the human body - that's the context of my critique. Drawings 101, The Human Body.

I don't think it's designed primarily for Literotica ;-)

There is a valid criticism there that to some extent the current wave of generative AIs are solutions in search of a problem. I'm not denying that some people find them useful as creative tools in visual arts and perhaps in prose; I just don't see those fields as lucrative enough to justify the resources being invested in developing them. I expect the sponsors are more interested in commercial applications - sack your call-centre workers and replace them with GPT, that kind of thing - and that's where the disconnect between AI and reality becomes more of a problem. Recently I saw somebody talking about how he'd made an online booking with a real estate agency to inspect properties he might be interested in. When he showed up at the office at the appointed time, he was told there were no properties to see. He'd been talking to the agency's GPT-powered chatbot, which had no connection to their actual availability data, but was happy to make shit up and waste his time. That probably isn't a sustainable business model.

In the longer run, I suspect their success in legitimate applications (as opposed to "deepfake me a video of Joe Biden and Donald Trump having sex") is going to depend on how well they can be coupled to more fact-oriented systems. I know that's happening in some areas, e.g. a maths plug-in to cover for GPT's extreme weakness in that quarter. (If you think getting the fingers wrong is bad, imagine how I feel about a gazillion-dollar "AI" that can't reliably multiply 3-digit numbers!) I don't know how far that will go, or how many people are going to be content with something that looks cool as long as you never look at it closely.

I'm not exactly their biggest supporter (my opinions to date boil down to "cool toy, pity about the lying and the ethical questions and the sweatshop exploitation bits") but I would note that we're still in early days and if I was pioneering a tech like this, I might be interested in showing off the impressive side first, to draw interest and funding, before getting down to the hard and unsexy questions like "how make draw fingers good". Presumably there will be improvements in that area. I'm a bit more pessimistic than their boosters about how easily those improvements will be achieved, and I suspect a lot of businesses are going to part with their cash before realising this shiny technology isn't what they need. But then I remember blockchain ;-)

Specifically in artistic areas, where I think generative AI might be most interesting and offer most potential for human creativity is when people stop focussing on using it to emulate the kinds of art human artists were already making, and instead get under the hood and deliberately break things to produce something weird and new - in the same kind of way that some photographers stopped making it their goal to reproduce the subject as faithfully as possible, and started getting creative with things like solarisation to create new and challenging images.
 
So you post an innocent thread about a naked girl and a horny octopus and it leads to all this AI depravity. You guys are such pervs 🤭.

Em
 
So you post an innocent thread about a naked girl and a horny octopus and it leads to all this AI depravity. You guys are such pervs 🤭.

Em
Just keeping busy, Em.

I can be relied upon for some half-baked thoughts, and Bramble can be relied upon to break it all down to the nth degree AND make it factual and empirically based.

I'm made of simpler stuff - having been a project manager in a variety of engineering disciplines for several decades, my rules of thumb boil down to, "If it looks wrong, it probably is," and, "You've forgotten about Murphy's Law - and you've definitely forgotten he's got an idiot brother!"
 
So you post an innocent thread about a naked girl and a horny octopus and it leads to all this AI depravity. You guys are such pervs 🤭.

Em

Sorry, sorry - I'll go back to talking about my favourite marine parasites!

I believe the next instalment is "here's why you should be glad you're not a crustacean!"
 
Oh and it’s not a thumbnail. This is what the series page looks like (on my phone at least):

[Attached screenshot: IMG_4620.jpeg]

Em
 