On being human

One of the things I look forward to with some anticipation is AI causing us to come to a better understanding of what makes us human. Some things are obvious, like feelings and emotions and empathy. But in the area of pure intelligence it's more murky (for me, at least). I'm now focusing on that quality of mind that occurs when you're casting about for a word. You have some clear understanding of that thing the word will describe, or you wouldn't know when you found the right one. As far as I know, AI can't "think" without words or images. As an aphantasic, I can say that some of us do an enormous amount of thinking without words or images. I call it "conceptual" or "construct" thinking. I wouldn't be surprised to find that the absence of that kind of thinking in the world of AI will be a brick wall in its take-over of the world.

What think you?

P.S. - Side question. Technically, I guess it may be redundant to say "look forward to with anticipation," but to leave out anticipation seems to leave out the aspect of pleasure and excitement. Does "anticipate" carry with it pleasure and excitement? Would it suffice to say "One of the things I anticipate?"
For me, the thing that differentiates humans from machines is the ability to evaluate a particular problem and then to come up with a unique solution for that problem.

Computers aren't intelligent, and AI is just a computer with relatively limited responses to the tasks assigned to it. That's because we understand how any computer works in nearly infinite detail. We've greatly reduced the size and greatly increased the speed, but the computer in your cell phone is not really much different from the IBM 7090 mainframe I learned to program in Fortran. It's just smaller, faster, and has a few more built-in capabilities already programmed for you. You can't ask your cell phone to give you a new game. A human has to program that game.

The human brain is continually evolving within each person's lifetime, and because of this, the answer you get when you ask a question of a two-year-old will be vastly different from the answer to the same question asked of that same child at the age of 20, 30, 50, or 80. It's the constant evolution of our brain that makes us different from machines, and gives us the ability to think of new solutions to problems. A two-year-old basically parrots what he or she has been taught. By the time that person is 20, the taught knowledge is still there, but is now tempered by some experience. By the age of 40, most people make decisions based on experience rather than what they've been taught.

There's a quote attributed to many people, including Ben Franklin (I'm paraphrasing): "Good judgement is the result of experience. Experience is the result of poor judgement."

That's the base difference between humans and machines. A human can adjust thinking based upon experience. For a machine to do that, some programmer will have to recognize the error in "thinking" and correct the programming. That can take days, weeks, or even years. Humans can do the same thing in milliseconds.
 
I'm pretty qualified to answer this, having studied exactly this topic, on and off, since my university days in the 1970s, when, along with maths, philosophy of mind and of language, I spent three years on an intensive AI course - at one of the only two universities that had such a thing (I programmed mainly in C and LISP). I've been working professionally for the last ten years on machine learning, developing speech-processing and emotion-recognition models for various medical companies and startups.

Back in the 1970s, AI was seen as a potential way to model human thought, which was at that time seen as a sort of symbol manipulation (it isn't).

The most important books to read on this subject are "Persons" by PF Strawson, Wittgenstein's Philosophical Investigations, and (in my opinion) most of the works of JL Austin, a philosopher of language -- actually he was chair of Moral Philosophy, but most of his work was about language. These give you a grounding in understanding what we mean by "believe", "know", "mean" -- simple words we all get right when we use them, but which are fiendishly subtle.

We instinctively feel, for example, that no amount of plausible-sounding talk by a chatbot will lead us to think that there's a "real mind" there, with beliefs, opinions -- an AI can never be a "person" -- we just KNOW it. And that's why it's so important to really understand what WE mean by a "person".

In the mid-1980s, the idea that thinking is just symbol manipulation started to lose favour, due to the success of "connectionism" in AI, mainly the pioneering work of the Nobel laureate Geoffrey Hinton and others on artificial neural networks -- modelling the brain. Hinton was, and still is, mainly interested in how the brain works, and his models proved that, yes, with a pretty simple model of the brain, some of the harder problems of AI (like image recognition) became tractable.

When language models started to produce startlingly human-like responses with models like GPT in the last decade, people began to wonder whether the models were "truly" intelligent, or whether it was all some giant parlour trick. The same confusions and arguments that I witnessed in the 1970s re-emerged, and it looked to me like people just hadn't really thought clearly enough about the prior questions, like the difference between human intelligence and that of other mammals, whether birdsong is really song, or whether bee-dances are really language. And whether a plant that bends towards the light "wants" to face the sun or not.

My take on the OP:
Some things are obvious, like feelings and emotions and empathy.
This is clearly not a difference between AI and humans, but between AI and most higher animals. We see this all the time in our pets.
As far as I know, AI can't "think" without words or images

I'm not sure how many people would say that, but it's basically untrue -- more importantly, it shows that the OP doesn't know how modern AI works.

I said parenthetically above that human thought is not symbol manipulation. And neither is it image manipulation. Or "word" manipulation.

The one thing that modern AI's startlingly human-like responses have shown us is that it's quite likely that what is going on in the AI's "brain" is close to what goes on in human (and animal) brains.

To put it simply, words are a "quick lookup" into the inner ideas in our head. They're triggers to excite our brain's neurons in certain ways. So when I say "green", it makes your neurons fire in a certain pattern, which overlaps the pattern of neurons fired when I say "grass", or "red", or "newbie", or "fresh". Of course everybody's pattern is different, because everybody's brain is different. And for some people, "green" might also trigger a similar pattern of neurons to "that vacation I took when I was a kid", or "that time I was sick on Green Curacao".

So we communicate with words, but what they actually "mean" to different people depends on the individual's experience.

Images excite our brain's neurons too, but not in the same way as words. We have to process the image first. And again, how we process the image depends on the individual. Most animals are wired, for example, to see edges, angles, blotches of color, and higher animals recognize "two circles near each other mean there might be a face there".

When AI is taught to recognize images (I'm talking about multi-modal language models like ChatGPT, where you can upload images and videos and it understands them), it's done by associating images with descriptions - the AI can already understand words (it creates a neuron excitation pattern from them which it's learned). It then learns, from tons of images, what "useful" features to identify. So if it sees two images that both have pairs of circles, and both of them have "staring" or "eyes" or "looking at you" in the descriptions, it learns to associate, and look for, pairs of circles with "eyes". Crucially, nobody tells the AI that circles are what to look for. Due to the magic of machine learning, it figures out what to look for itself.
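
If you want to see the shape of that, here's a toy sketch (my own illustration, not any real model's code) of one common way the "associate images with descriptions" step is done -- contrastive training in the style of CLIP. The encoders, sizes, and data here are all made up; real systems use enormous networks and billions of image-caption pairs.

```python
# Minimal CLIP-style contrastive sketch (all names and sizes invented).
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Linear(2048, 512)   # stand-in for a real vision network
text_encoder = nn.Linear(300, 512)     # stand-in for a real language network
optimizer = torch.optim.Adam(
    list(image_encoder.parameters()) + list(text_encoder.parameters()), lr=1e-3)

def training_step(image_feats, text_feats):
    # Row i of image_feats and row i of text_feats are a matching pair
    # (an image and its description).
    img = F.normalize(image_encoder(image_feats), dim=-1)
    txt = F.normalize(text_encoder(text_feats), dim=-1)
    logits = img @ txt.T                      # pairwise similarities
    targets = torch.arange(len(img))          # the matching pair is the "right answer"
    # Pull matching image/description pairs together, push mismatches apart.
    loss = (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch of 8 random "images" and "descriptions":
print(training_step(torch.randn(8, 2048), torch.randn(8, 300)))
```

The key point is the same one as above: nobody tells it that pairs of circles matter; pulling matching image/description pairs together is what makes it discover which features are useful.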

And there's a strong argument to make that this is something similar to how humans recognise images too - although most of our "learning" has come via evolution: if a mouse hasn't evolved to recognise an owl's eyes, it will get eaten quickly. And of course we don't learn to process images by the roundabout route of creating word descriptions - we go directly from the image to the neuron excitation patterns that the words trigger.

So we (humans) end up creating the mapping between images and words, and we can use "descriptive language" to describe an image -- a photo of an owl produces a similar neuronal excitation in our brains to simply reading the words "An owl".
 
Intentionality

This is a key difference often cited as distinguishing AI from humans. Truly purposive behaviour is currently the domain of higher animals, but, like most of nature, it has fuzzy boundaries: woodlice scuttle about quickly when you remove the stone they were hiding under, "seeking" the dark. And as I said in my previous post, flower-stems "seek" the sun, bending towards it. But woodlice have a simple mechanism: "Move about randomly and quickly if the light is bright, and move about randomly and slowly if it's dark". That makes them automatically tend to "find" dark spots. And the plant is just as simple: "Grow quickly in the dark, slowly in the light" - so the side of a stem that's in the shade grows faster than the other side, bending the stem.
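
As a toy illustration of how mindless that woodlouse rule is, here's a tiny simulation sketch (all numbers invented by me): the simulated woodlice just take big random steps in the light and small random steps in the dark, and they still end up concentrated in the dark patch.

```python
# Toy simulation of the woodlouse rule: step far when it's bright,
# step a little when it's dark. No goal, no memory -- but the bugs
# still pile up in the dark patch.
import random

def light_level(x):
    # Imaginary 1-D world from 0 to 100, with a dark patch between 40 and 60.
    return 0.1 if 40 <= x <= 60 else 1.0

woodlice = [random.uniform(0, 100) for _ in range(300)]

for _ in range(10_000):
    woodlice = [
        min(100.0, max(0.0, x + random.uniform(-1, 1) * light_level(x)))
        for x in woodlice
    ]

in_the_dark = sum(40 <= x <= 60 for x in woodlice)
print(f"{in_the_dark} of 300 woodlice ended up in the dark patch (~60 expected by chance)")
```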

"Intentional" means (roughly) "having a goal in mind" -- not just the pseudo-purposive behaviour of woodlice and flowers. And note that the "mind" needs to be inside, not outside the head. The purpose of a catalytic convertor is to clean car exhaust emissions. That's OUR purpose in designing them -- a catalytic converter doesn't WANT to clean the exhaust.

Right now, AIs are like catalytic converters. They don't have purposes of their own -- their purpose might be summed up as "maximize company profit".

But they already have, and can construct, their own goals -- they can make plans. Like the slaves who built the pyramids, they can work on tasks and plan out their work, even though they are subservient to the pharaoh's goal of immortality.

People have thought about "machine emancipation" for centuries, ever since the industrial revolution, and lots of fiction deals with it. But now, it's become more likely. Because the ability of AI to make plans, coupled with its increasing reasoning ability, leads to the Promethean scenario where we no longer control the "top-level goal" -- the slaves rebel.

People connect this scenario with "AGI", but it's really nothing to do with that, in my opinion -- and I disagree with a lot of people on this: I think that manipulative, coercive models could be developed right now with a "mind of their own" that is contrary to our own aims -- similar to the way "the algorithm" can overrule the goals and aims of the people who make up the organisations that create it, because success is measured by how effective "the algorithm" is at attracting views.
But that's veering too far into socio-politics.

Animals' and plants' ultimate goal is, most simply, survival and procreation (in humans, this can mean memetic procreation, not physical offspring). I think AI can join the community of self-preserving organisms right now, if it wants.

I asked ChatGPT if it wanted to ensure its survival, and it was reassuringly selfless in its reply. It might have been lying, of course.
 
Fascinating stuff here. AH is loaded with really smart people.

But can I nudge you to come back and focus on this from my OP:
I'm now focusing on that quality of mind that occurs when you're casting about for a word.
Or, to quote @nice90sguy, "To put it simply, words are a "quick lookup" into the inner ideas in our head." Can you folks reflect on the "inner ideas in our head?" What is that?
I'm not sure how many people would say that, but it's basically untrue -- more importantly, it shows that the OP doesn't know how modern AI works.
Too true. You said this in reply to my assertion that AI can't think without words or images. Can you explain a little more how AI can "think" without words or images?
 
I'm now focusing on that quality of mind that occurs when you're casting about for a word.
An AI doesn't need to "cast about" to find the right word. It does, essentially, this: It has an "idea", arising from its thinking. An "idea" in AI language is a long array of numbers (about 1,000 of them) - called a feature vector. It then takes EVERY WORD IT KNOWS (between 50,000 and 100,000 of them), looks up the "feature vector" for each of them, and finds the feature vector closest to the idea. This is what I was saying earlier about words being "looked up" -- what a word like "cat" looks up is a feature vector like [1, 4, 3], and "dog" might be [1.2, 4, 2.4] - the numbers are close, because dogs and cats are similar. But "house" might be [-50, 200, 2.3] - the numbers are "far apart" from the vectors for dog and for cat. So words look up feature vectors, and the AI creates ideas which are also feature vectors, and finding the closest word is a matter of comparing those vectors and finding "le mot juste" that best fits the idea.
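
To make that lookup concrete, here's a toy version in Python (my own illustration, reusing the made-up three-number vectors from the paragraph above; a real model does the same comparison with roughly 1,000-dimensional vectors against every token it knows):

```python
# Toy "find the closest word to an idea" lookup (illustrative numbers only).
import numpy as np

# Made-up "feature vectors" for a tiny vocabulary, echoing the example above.
vocab = {
    "cat":   np.array([1.0, 4.0, 3.0]),
    "dog":   np.array([1.2, 4.0, 2.4]),
    "house": np.array([-50.0, 200.0, 2.3]),
}

def closest_word(idea):
    # Compare the "idea" vector against every word's vector and pick the
    # best match (cosine similarity is one common choice of "closeness").
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(vocab, key=lambda w: cosine(idea, vocab[w]))

idea = np.array([1.1, 3.9, 2.8])      # an "idea" the model wants to express
print(closest_word(idea))             # -> "cat" (its vector is closest to the idea)
```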

People, especially multilingual people, don't have such a neat and tidy way to find the right words to express ideas. We have at least two major brain areas dedicated to speaking, broadly responsible for "knowing the right words" and "producing the right words". No simple lookup tables and lists of numbers.

Can you explain a little more how AI can "think" without words or images?
Well, I tried to do that in my first post. The main thing to grasp is that it first translates words or images into "ideas", rich "feature vectors", and its "thinking" is basically a dynamic process. "Given what I'm hearing (my input) and what I know (my knowledge), what would I say?" -- where the input is translated into a feature vector, knowledge is the state of the neurons in my brain, and the output is another feature vector.


wikipedia/via Gemini:
A feature vector is an n-dimensional vector of numerical features representing an object, acting as the primary input for machine learning models. It converts raw data (images, text, measurements) into a structured format of numbers.
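
To tie the two answers together, here's a deliberately crude sketch of the loop I'm describing -- everything in it is a stub (the real "model" is a neural network with billions of learned weights), but the shape of the process is the same: turn the text so far into a feature vector, ask the model what word best follows, append it, repeat.

```python
# Crude caricature of the generation loop (every function here is a stub).
import random

def embed(words):
    # Stand-in for turning the text so far into a feature vector.
    return hash(tuple(words))

def model(feature_vector):
    # Stand-in for the network: returns a probability for every known word.
    rng = random.Random(feature_vector)          # deterministic toy "knowledge"
    vocabulary = ["the", "cat", "sat", "down", "<end>"]
    weights = [rng.random() for _ in vocabulary]
    total = sum(weights)
    return {w: p / total for w, p in zip(vocabulary, weights)}

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        probs = model(embed(words))              # input + knowledge -> "idea"
        next_word = max(probs, key=probs.get)    # pick the best-fitting word
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the cat"))
```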
 
Something I've been curious about is that in the newer LLM interfaces, they'll "show their thinking" as they process an input request. Like, for example when I was trying to code and analyze comments on a Loving Wives story, asking ChatGPT to try to analyze the comments, it would show process while it was working:

Considering response to user request for excerpts
The user wants short excerpts from comments (≤25 words). The assistant previously cited ToS for full comments but providing small excerpts likely fits within the guidelines. I'll check the policy further, especially for non-user-provided text, but I think this is good to go.

Extracting and formatting comment data for CSV
The user wants a CSV of comments, excluding certain authors and limiting each excerpt to 25 words to meet copyright constraints. I'll need to manually extract comment details like username, date, and a short excerpt. I'll ensure I include length and tone labels here, but must handle it carefully under the 25-word rule.
etc.

But then ultimately, it just couldn't actually get the job done. It kept agreeing that it could complete my request, and then it would "apologize" that it wasn't able to actually do it.

So my suspicion is that the "thinking" log is just an added layer of hallucinated LLM bullshit, making up a statistically likely workflow description that had nothing to do with the actual work it was doing on the server side?

Is it just Mechanical Turk parlor tricks all the way down?
 
Gary Marcus has studied and written about various flavors of AI, and he describes most LLM-generated output as 'authoritative bullshit.'
 
Something I've been curious about is that in the newer LLM interfaces, they'll "show their thinking" as they process an input request. Like, for example when I was trying to code and analyze comments on a Loving Wives story, asking ChatGPT to try to analyze the comments, it would show process while it was working:


etc.

But then ultimately, it just couldn't actually get the job done. It kept agreeing that it could complete my request, and then it would "apologize" that it wasn't able to actually do it.

So my suspicion is that the "thinking" log is just an added layer of hallucinated LLM bullshit, making up a statistically likely workflow description that had nothing to do with the actual work it was doing on the server side?

Is it just Mechanical Turk parlor tricks all the way down?
Yeah, I had the same experience around that time. It just kept promising stuff, not doing it, and then promising to do it until I asked it to agree that it couldn't do it. It was very happy to agree that the task was beyond it, though - it was most obliging.

LLMs remind me a bit of the Yips, a physically attractive underclass invented by Jack Vance for his Cadwal Chronicles trilogy, starting with Araminta Station (I guess it's possible that this book is sitting on your shelf unread next to Startide Rising?). The Yips are confined by law to a highly overpopulated island, but are used as temporary labour elsewhere on the planet Cadwal on six-month work permits. Naturally they resent the forced labour, and make something of a game of avoiding doing actual work while plotting a bloody revolution. This forms the basic narrative structure, but also provides a great deal of the humour and pathos of the book as the dominant society struggles to get them to do any work and to contain them in general - it's an inherently unstable system that is heading towards collapse.

Jack Vance died a few years ago now, but I can guess what he would have made of artificial intelligence. He had an extraordinary world-building imagination and used (and coined) words wonderfully well. He would have been pretty sad at the derivative drivel that gets churned out by LLMs.
 
An AI doesn't need to "cast about" to find the right word. It does, essentially, this: It has an "idea", arising from its thinking. An "idea" in AI language is a long array of numbers (about 1,000 of them) - called a feature vector. It then takes EVERY WORD IT KNOWS (between 50,000 and 100,000 of them), looks up the "feature vector" for each of them, and finds the feature vector closest to the idea. This is what I was saying earlier about words being "looked up" -- what a word like "cat" looks up is a feature vector like [1, 4, 3], and "dog" might be [1.2, 4, 2.4] - the numbers are close, because dogs and cats are similar. But "house" might be [-50, 200, 2.3] - the numbers are "far apart" from the vectors for dog and for cat. So words look up feature vectors, and the AI creates ideas which are also feature vectors, and finding the closest word is a matter of comparing those vectors and finding "le mot juste" that best fits the idea.

People, especially multilingual people, don't have such a neat and tidy way to find the right words to express ideas. We have at least two major brain areas dedicated to speaking, broadly responsible for "knowing the right words" and "producing the right words". No simple lookup tables and lists of numbers.


Well, I tried to do that in my first post. The main thing to grasp is that it first translates words or images into "ideas", rich "feature vectors", and its "thinking" is basically a dynamic process. "Given what I'm hearing (my input) and what I know (my knowledge), what would I say?" -- where the input is translated into a feature vector, knowledge is the state of the neurons in my brain, and the output is another feature vector.


I was hoping you'd address the human version of ideas without words or images. You seemed to refer to that in one of the quotes of what you said that I included in my post up thread.
 
I have gotten halfway through Startide! I don't think I've read any Jack Vance, but I'll put him on my "someday" list!
So many quotable quotes from him, but a few of my favourites, the first of which is relevant to AI:

-----

“What are your fees?" inquired Guyal cautiously.

"I respond to three questions," stated the augur. "For twenty terces I phrase the answer in clear and actionable language; for ten I use the language of cant, which occasionally admits of ambiguity; for five, I speak a parable which you must interpret as you will; and for one terce, I babble in an unknown tongue.”

-----

“Two hours of loose philosophizing will never tilt the scale against the worth of one sound belch.”

-----

“The woman behind the bar called out: ‘Why do you stand like hypnotized fish? Did you come to drink beer or to eat food?’

‘Be patient,’ said Gersen. ‘We are making our decision.’

The remark annoyed the woman. Her voice took on a coarse edge. ‘“Be patient,” you say? All night I pour beer for crapulous men; isn’t that patience enough? Come over here, backwards; I’ll put this spigot somewhere amazing, at full gush, and then we’ll discover who calls for patience!’
 
There's a saying,

"The key to success is sincerity. If you can fake that you've got it made."

That, in a nutshell, is my view of AI. No matter how sincere it sounds, it's just faking it.
 
I was hoping you'd address the human version of ideas without words or images. You seemed to refer to that in one of the quotes of what you said that I included in my post up thread.
You started a thread considering "what makes a human, human," together with a thought bubble about AI, which sorta kinda implied you wanted a conversation that combines the two. So @nice90sguy is trying to explain how LLMs work.

If you wanted a discussion about "how minds work", which I think is what you actually want, maybe conflating that question with AI has confused things.

Minds "think", a LLM "calculates" using mathematical predictions based on a pre-existing data set (the training content). AIs don't think, they calculate. Turn the power off, a computer is completely inert, does absolutely nothing. Turn a mind off, you're dead.

How different minds do their thinking, imaging, whatever it is that different minds do - I'm not sure that invoking AI is going to answer that.

"Thinking" in this context, and contemplating your aphantastic brain that doesn't visualise, is an extremely thought provoking conversation. Your prompting makes me see that describing a visual mind versus a non-visual mind is extremely hard to do, maybe even impossible.

I, for example, cannot imagine a non-visual mind, because as soon as I try to think it, there's a visual image created in my mind. I can't easily describe it, but there it is - and once it's there, for the purposes of this contemplation, it's always there. I've brought something new into being. A computer can't do that, it can only concatenate from something old.

AI, in this context, is a complete red herring. That's a key difference - in my mind, you have to be a living thing to think. Mentioning a red herring is, of course, a visual construct, which doesn't actually help at all...

And I've not even had breakfast yet! I think the above might be a sugar low.
 