[image: Coleoidphilia cover illustration]

Just remember: it's not the size of your tentacle, it's how well it suc...tions.

I'll see myself out now.
 
I’ve seen a couple of episodes - feels really dated

I've read - I think it was in the 1990s - that realtors knew that kids from Oregon or wherever were moving to New York mainly because of that show. Shows you the power of suggestion, or maybe just how gullible people can be.
 


Can't help but wonder at the shock when they discovered the income they'd need to actually afford those kinds of apartments.
 
They came anyway, turning neighborhoods into yuppie/hipster DisneyWorlds: Bushwick, Williamsburg, the East Village, parts of Harlem, parts of Bedford-Stuyvesant. The people displaced didn't disappear; they just moved further out, replacing white ethnics who were either dying out or moving even further out, to the suburbs. That was before the pandemic, so I don't know what will happen next.
 
These robots are taking our jobs!
You think? Not yet, they're not.

I'm beginning to think many folk don't see the faults in these images. I have a fairly strong visual brain, and the bulk of the AI images I'm seeing in various threads here look "wrong" to me - they're either "uncanny likenesses", and therefore slightly or extremely spooky, or there's something so obviously wrong that they're not worth much.

Can anyone who knows how these images "work" explain why so many of these images don't have Drawing Fundamentals in their underlying datasets? I'd have thought that would be a fairly obvious place to start - as one does when learning to draw or paint.
 
It's because of where they got the training data. The base Stable Diffusion training set was scraped from the web - billions of image/caption pairs. When you think about what's out there, that's a lot of porn, professional models, advertising, anime, and selfies.

The models don't understand what they're doing. Like with ChatGPT, it's a lot of statistical correlations with whatever captions/keywords were attached to the source images. Many images have minimal or bad captioning, so people are doing their own training to try to refine the datasets.

As good as they're getting, they don't know why images should be constructed the way they are, and there's a fair amount of randomness in generation. When I'm making images, I make several, pick the ones that look interesting or close to what I want, and discard the rest.
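For anyone curious, that "make several and pick" workflow is only a few lines with the open-source diffusers library. A minimal sketch, assuming a stock Stable Diffusion checkpoint - the model ID, prompt, and batch size below are placeholders, not recommendations:

```python
# A minimal sketch of the "generate several, keep the interesting ones" loop,
# using Hugging Face's diffusers library. Model ID and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a base Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "cephalopod woman, book cover illustration"  # hypothetical prompt

# Each image starts from different random noise, so every one is a fresh
# roll of the dice - hence the cherry-picking.
images = pipe(prompt, num_images_per_prompt=4).images

for i, img in enumerate(images):
    img.save(f"candidate_{i}.png")  # save them all, pick the keepers by eye
```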
 


I've only ever really tinkered with any of these AI art generators, more for fun than any serious attempt to create an actual character image.

I've made a few I felt were "close enough", in the sense that they resembled the character image in my head.

I think they can be useful tools for providing a visual, but I certainly wouldn't rely on them to generate art for my stories that I'd present to readers as "my vision."
 
What I don't understand is why on earth the "training" doesn't include basic anatomy - Drawing 101, if you like - as the foundation upon which an image is built. Surely that's an obvious thing to do? Establish basic rules. That's the bit that's missing as I watch this stuff develop.
 

Spock would say, 'It's learning, Jim, but not as we know it.' Be patient. The word is that it's learning fast, so you won't have to be too patient.

PS: When he says, 'It's life, Jim, but not as we know it,' panic.
 
Frankly, I'm not too concerned about its "intelligence", because right now it's abysmally stupid. What does concern me are the humans using AI (whether it be written, visual, or audio) without the necessary checks and balances, because as we well know, humans can be abysmally stupid.
 

This particular brand of "AI" doesn't learn the way a human would. There's no easy way to teach it basic anatomy, beyond feeding it a ton of images drawn by people who know how, and hoping it learns the right things from them. It's not even thinking in terms of "this is a body, this is a leg", let alone "legs should be this long."

For instance, a closeup of BobbyBrandt's image:


[attachment 2234791: closeup of BobbyBrandt's image]

Notice how the tentacle on the right ends up becoming hair. This is something generative tools do a lot: the outlines of a tentacle form two approximately parallel lines, strands of hair also form parallel lines, and the model doesn't understand that even though these particular parallel lines might have similar shapes and colours, they are different objects that shouldn't blend. In fact, the underside of that tentacle ends up turning into her hip!

Look closer and you'll see something similar going on in the eyes. Cephalopod eyes are funny-looking (by human standards), and I don't know how good the training images the AI is working from are. But those light-blue shapes, which look like they might be reflections or perhaps the iris, continue out past the eye on the left to become a stripe on its skin - again, the AI might have seen cephs with stripes on their bodies, and something of similar colouring and shape in their eyes, but it has no concept that "eye" and "body" are different things, or that the patterning on the eye shouldn't continue past its boundaries.
 

Exactly. That's a fundamental flaw in the system design, and it makes no sense to me. A visual generator would, I'd have thought, overlay its "constructs" on some basic templates.

It surprises me that so many people seem to say, "Great image", and appear not to see these very obvious flaws. But then, people take photos of their lunch in a restaurant and think that's something the world needs to see, and I don't get that either.
 
The 'system' designs itself. It must learn templates, a pattern of organization out of chaos, in the same way as everything else, in parallel with everything else. The only affordance provided is Yes or No.
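That "Yes or No" really is a single number. Here's a toy sketch of a DDPM-style training step - model, caption_embedding, and alphas_cumprod are hypothetical stand-ins, not any real library's API - just to show where the whole learning signal comes from:

```python
# Toy sketch of a diffusion training step, reduced to its essentials.
import torch
import torch.nn.functional as F

def training_step(model, image, caption_embedding, alphas_cumprod):
    noise = torch.randn_like(image)                    # random corruption
    t = torch.randint(0, len(alphas_cumprod), (image.shape[0],))
    a = alphas_cumprod[t].view(-1, 1, 1, 1)            # noise-schedule weight
    noisy = a.sqrt() * image + (1 - a).sqrt() * noise  # standard forward process
    predicted = model(noisy, t, caption_embedding)     # model guesses the noise
    # This one scalar is the entire feedback: "closer to the data" or "further".
    # Nowhere in it is there a template for bodies, legs, eyes, or tentacles.
    return F.mse_loss(predicted, noise)
```

Anatomy only gets in to the extent that the training images embody it - exactly the "feed it good images and hope" situation described above.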
 
The "AI" of today is not the AI of science fiction. It is a mimic program that has only two options, a one or a zero. There is nothing beyond that in the way of learning. Stringing ones and zeros together is not learning until suddenly it says, "Hey wait a minute, that isn't right." Then you have a problem.
 