A singularly technogeeky plot bunny

....

The tricky part then isn't making leaps, but filtering and guiding the process so that we get useful or interesting leaps. As I see it, that's pretty much what human creativity is, and it's a major field of study in modern computer science.

Agreed, and your whole post is a terrific summary.
 
Does it, though? We love to mysticise human intelligence and talk about it as if it was something magical.
Not magical, but nonetheless, remarkably potent at times. For example, it produced Beethoven's late string quartets.

But when it comes down to it, it's hard to come up with reasons to believe that a naturally-grown computer made of neurons must be inherently more capable than one made artificially of silicon. At the micro level, both are subject to the same laws of physics.
Yes, but -- that's a little like saying that chemicals in organic matter are no different than the same chemicals when they are incorporated into living processes. Superficially, that may be true. But there is a different organizing principle at work.

I am attracted to the ideas of the Russian biogeochemist Vladimir Vernadsky. His model of evolution goes through three phases: first the geosphere, when the earth was inorganic rock; then, the biosphere (he coined the term), with the emergence of life; and finally, the noösphere, with the emergence of human cognition. Each shift represents a qualitative shift. Just as life is not an epiphenomenon of inorganic matter, cognition is not an epiphenomenon of neurons. You may wish to argue whether his theory is correct. It looks pretty good to me.

The tricky part then isn't making leaps, but filtering and guiding the process so that we get useful or interesting leaps. As I see it, that's pretty much what human creativity is, and it's a major field of study in modern computer science.
I see what you are saying here. But in human creative activity, whether artistic or scientific, there is a crucial non-logical component that involves the generation and successful resolution of paradoxes. Can computers do that?
 
...
Yes, but -- that's a little like saying that chemicals in organic matter are no different than the same chemicals when they are incorporated into living processes. Superficially, that may be true. But there is a different organizing principle at work.
...

Ummm... yes. The same chemicals in organic matter or in living processes will only react differently if they are in a different chemical environment. Their reactivity changes depending on their context and environment. But that's no different than what we've said before - there is no question that human brains can perceive and integrate many different types of inputs, and quickly sort through possibilities using a lot of different sensory inputs.

Our trouble in computing and neuroscience may be that we cannot quite express or "enunciate" all the rules and signals that our neurons respond to in order to generate their outputs, which will eventually translate into a decision. But rules there are, nonetheless. This is vastly oversimplified, but I think the next few years will see some pretty large leaps in understanding how humans do it. Once that happens, it won't take long to translate it to computers. AlphaGo is just one more step along that continuum. I don't see that it minimizes the accomplishments of humans and humanity; it will de-mythologize and demystify them.
 
Yes, but -- that's a little like saying that chemicals in organic matter are no different than the same chemicals when they are incorporated into living processes. Superficially, that may be true. But there is a different organizing principle at work.

I'm not convinced of that last. From where I stand, a living organism is no more than a very complex system of organic chemicals - complex enough that we would have great difficulty building one from scratch, or figuring out exactly how it works, but still driven by the same laws.

I am attracted to the ideas of the Russian biogeochemist Vladimir Vernadsky. His model of evolution goes through three phases: first the geosphere, when the earth was inorganic rock; then, the biosphere (he coined the term),

(Popularised rather than coined, I think? By my understanding "biosphere" originated with Eduard Suess.)

with the emergence of life; and finally, the noösphere, with the emergence of human cognition. Each shift represents a qualitative shift. Just as life is not an epiphenomenon of inorganic matter, cognition is not an epiphenomenon of neurons. You may wish to argue whether his theory is correct. It looks pretty good to me.

I haven't read Vernadsky's own work, so I'm going entirely from other people's summaries here and I may well be missing something.

From what I can see, his general argument is that the lithosphere implicitly contains the potential for the biosphere, and the biosphere contains the potential for the noösphere. It's hard to say without better familiarity with the context of his work, but that sounds like the notion of emergence, whereby small entities with simple properties interact to create surprising and complex behaviours.

Put crudely: plants and animals are an emergent property of a planet at the right temperature with the right mix of hydrogen, carbon, oxygen, nitrogen, and other components, and intelligence is an emergent property of certain kinds of meat. But we know very well that digital systems can also manifest emergence (Conway's game of Life is a popular example); it's not at all clear to me why emergent intelligence, however defined, could only arise from biological systems.
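(If anyone wants to see how little machinery emergence needs, here's a minimal Python sketch of one generation of Life. It's purely illustrative, nothing AlphaGo-like, but the "glider" pattern it pushes around is behaviour nobody wrote explicitly into the rules.)

# Conway's Game of Life: one update step on a set of live cells.
# A cell survives with 2-3 live neighbours; an empty cell comes alive
# with exactly 3 live neighbours; everything else dies or stays empty.
from collections import Counter

def step(live):
    """live is a set of (x, y) tuples; returns the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that, under those rules, crawl across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same five-cell shape, shifted one cell diagonally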

I see what you are saying here. But in human creative activity, whether artistic or scientific, there is a crucial non-logical component that involves the generation and successful resolution of paradoxes. Can computers do that?

I might need a concrete example here before I can respond to that; "paradox" is a word that can mean several different things and I'm not sure which one you're referring to here.
 
I see what you are saying here. But in human creative activity, whether artistic or scientific, there is a crucial non-logical component that involves the generation and successful resolution of paradoxes. Can computers do that?

I might need a concrete example here before I can respond to that; "paradox" is a word that can mean several different things and I'm not sure which one you're referring to here.

Frankly, I'm far more comfortable in the arts than in the sciences, so let's take an example from there, a simple metaphor. Suppose I were to become very sappy and sentimental about AlphaGo, and I said to it, "You are the sunshine of my life." I'm sure that on the first pass, AG would reject it as a false statement, because AG, being very computer-smart and self-aware, knows that it is not composed of photons. It might also know that for an individual such as myself, photons are not typically considered an important component of "my life". So the question now becomes, can AG be educated to accept this statement as both literally false and figuratively true, without compromising the metaphorical nature of the statement, such as by redefining "sunshine" to mean "joy and solace" or something along those lines? Of course, there is also the question of whether a computer can be made capable of actually experiencing "joy and solace" so as to understand how the words might be interchangeable, figuratively, with "sunshine."
 
Frankly, I'm far more comfortable in the arts than in the sciences, so let's take an example from there, a simple metaphor. Suppose I were to become very sappy and sentimental about AlphaGo, and I said to it, "You are the sunshine of my life." I'm sure that on the first pass, AG would reject it as a false statement, because AG, being very computer-smart and self-aware, knows that it is not composed of photons. It might also know that for an individual such as myself, photons are not typically considered an important component of "my life". So the question now becomes, can AG be educated to accept this statement as both literally false and figuratively true, without compromising the metaphorical nature of the statement, such as by redefining "sunshine" to mean "joy and solace" or something along those lines? Of course, there is also the question of whether a computer can be made capable of actually experiencing "joy and solace" so as to understand how the words might be interchangeable, figuratively, with "sunshine."

I'll have to duck the question of whether a computer can truly experience joy, because I don't think we have a way to answer that one even for humans. I know I can experience joy; I observe that other humans claim they do, and that it seems to be triggered by the same sort of things that trigger it in me. It could be that I'm the only conscious being in the universe and the rest are just unfeeling meat-things that do a good job of emulating joy. That seems like an overly complicated explanation, so I'm willing to take it on trust that other humans genuinely do experience joy, but I couldn't prove it.

So I'll confine myself to observable behaviour; for "understand", read "behave like something that understands" in what follows. I'm also going to use "word" generally to include symbols, short phrases, and so forth, just for brevity.

Can a computer understand that words may have more than one meaning? Yes, absolutely. At a simple level, many computer languages will support statements like this one:

IF y=z THEN x=y

The two "equal" symbols mean different things here. The first one is a question: does y equal z? The second is an imperative: "set the value of x equal to that of y". The language figures out the difference from context.

For a somewhat smarter example, let's try Wolfram Alpha.

(Mathematical aside: angles can be measured either in degrees, with 360 degrees making a full circle, or in radians, with 2 * pi = 6.283... radians to the circle.)

If I ask Wolfram Alpha to "calculate sin(2)" it assumes that I'm working in radians (and notes that assumption, so I can correct it if I'm wrong). But if I ask it to "calculate sin(5)" it assumes I'm working in degrees. There's no mathematical reason why I couldn't be interested in finding the sin of 2 degrees or of 5 radians, but it's smart enough to know that "sin(2)" is more likely to be referring to radians, and "sin(5)" to degrees. (If I ask it for "sin(5.00)" it assumes radians, even though mathematically 5 and 5.00 are the same number; it's smart enough to take that formatting as a hint.)

It's like being able to figure out that if I say "my child's temperature is 38" I'm probably talking in Celsius, but "my child's temperature is 99" probably means Fahrenheit. So, computers can already deal with situations where they have to figure out the meaning of a symbol from context.
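(A toy sketch of that sort of guesswork; the cut-off values are mine, invented purely for illustration and not medically authoritative.)

# Guess whether a stated body temperature is Celsius or Fahrenheit
# from the range it falls in. Thresholds are illustrative only.
def guess_temperature_unit(value):
    if 30 <= value <= 45:
        return "Celsius"       # plausible human body temperature in C
    if 90 <= value <= 110:
        return "Fahrenheit"    # plausible human body temperature in F
    return "unclear -- ask for clarification"

print(guess_temperature_unit(38))   # -> Celsius
print(guess_temperature_unit(99))   # -> Fahrenheit
print(guess_temperature_unit(60))   # -> unclear -- ask for clarification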

In the examples above, a designer would have foreseen that particular ambiguity, and would have known what the possible interpretations are, which allows them to program a strategy for that specific case. So, what about the situation where we can't anticipate all the ambiguities that it might encounter, and have to deal with it more generally? Can we design a computer that's smart enough to interpret "you are the sunshine of my life" without writing anything tailored for that particular case?

It's certainly challenging, but I think it's doable with current technology. Here's a rough outline of how I might attack it (I'd need to study quite a bit before I could actually implement this!)

I'd design my program around the idea that there are different types of language. I could hardcode those classes in - e.g. define them as "literal" and "metaphorical" - but it's also possible to tell a computer "there are different types of language within English, here is a corpus of examples, go have a look for clustering and define your own categories".

For example, current pattern-recognition technologies can recognise that certain words or phrases often occur together and are closely related: "love", "marriage", "family", "divorce", etc. etc. If a computer encounters a lot of those words in proximity, it could deduce that these form an important group within the language. It's a bit like looking at people's contact lists and saying "these ten people all know one another, let's flag that as a cluster". And we can go beyond that to things like "Bob knows these ten people, and all ten of them also know Jane, so it's quite likely that Jane knows Bob even though we haven't seen direct evidence."
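(A very stripped-down sketch of the co-occurrence idea, on a tiny corpus I've made up; real systems use vastly larger corpora and cleverer statistics, but the principle is the same.)

# Count which words appear in the same sentence, then list the words most
# strongly associated with a query word. Toy corpus, purely illustrative.
from collections import defaultdict
from itertools import combinations

corpus = [
    "love and marriage and family",
    "the divorce ended the marriage",
    "a happy family wedding",
    "diet and exercise protect the heart",
    "cardiac surgery repaired the heart",
    "blood pressure and heart disease",
]
stopwords = {"a", "and", "the"}

cooccur = defaultdict(int)
for sentence in corpus:
    words = set(sentence.split()) - stopwords
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def neighbours(word, top=4):
    scores = {b: n for (a, b), n in cooccur.items() if a == word}
    return sorted(scores, key=scores.get, reverse=True)[:top]

print(neighbours("marriage"))   # relationship-ish words
print(neighbours("heart"))      # medical-ish words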

Looking at cues such as grammar and punctuation can allow a computer to form classifications corresponding to "formal", "informal", "dialogue", etc.

We can identify ambiguous words (phrases, ...) by looking for groups that share a common word but don't otherwise have strong links. For example:

blood pressure
heart
love
proposed marriage
happy
diet and exercise
cardiac
atherosclerosis
wedding ring
...

If we map out how these interact in a large corpus, we'll find that there are two clusters: the medical-y words, usually found together in formal language, and the feel-y words, often found in less-formal language. A few words like "heart", "heartbeat", maybe "blood pressure" and "pulse", show up in association with both clusters.

From that, our computer can use context to recognise how we're using "heart". If it shows up next to words in the medical cluster, it's likely to be a medical statement; even if our corpus never mentions "heart" and "methamphetamine" together, the latter is a medical word so a sentence where they appear together is probably talking about the medical version of "heart". A sentence like "u have my heart" gets categorised as informal, which then strongly suggests that we're talking about feelings.
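(Here's a crude sketch of that disambiguation step. The two clusters are hand-made here for illustration; in the real version they'd be the clusters the machine discovered for itself.)

# Decide which sense of "heart" a sentence is using, by counting how many
# of its other words fall into each cluster. Clusters are hand-made toys.
MEDICAL  = {"blood", "pressure", "pulse", "cardiac", "atherosclerosis",
            "diet", "exercise", "methamphetamine"}
FEELINGS = {"love", "happy", "joy", "wedding", "marriage", "u"}

def heart_sense(sentence):
    words = set(sentence.lower().split())
    medical_score  = len(words & MEDICAL)
    feelings_score = len(words & FEELINGS)
    if medical_score > feelings_score:
        return "medical"
    if feelings_score > medical_score:
        return "feelings"
    return "ambiguous"

print(heart_sense("diet and exercise are good for the heart"))   # -> medical
print(heart_sense("u have my heart"))                            # -> feelings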

So, when our computer encounters a novel sentence "you are the sunshine of my life", it might reason like this:

- word "sunshine" often shows up in combination with words from a cluster containing terms like "happy", "joy", etc. etc. (Side note: this kind of "sentiment analysis" is already big business, because brand managers want to know how people on Facebook are talking about their new product.)
- "sunshine" also often shows up in combination with e.g. photovoltaic, growing plants, etc. etc. but those words are rarely associated with "you" or "me", so we're probably not dealing with that cluster.
- statements structured along the lines of "you *word from the happy-joy-light cluster* me" tend to attract responses like "Thank you! What a *word from the nice-good-pleasant cluster* thing to say!"
- hence, it might be appropriate to reply with something like "Thank you! What a good thing to say!"
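(Pulling the steps above together into a toy responder; the clusters and the reply template are invented for the sake of the sketch, nothing more.)

# Toy responder: if the sentence addresses "you ... me/my" with a word from
# the happy/joy/light cluster, treat it as a compliment and acknowledge it.
HAPPY_JOY_LIGHT    = {"sunshine", "joy", "light", "delight", "star"}
NICE_GOOD_PLEASANT = ["lovely", "kind", "good"]

def respond(sentence):
    words = set(sentence.lower().replace(",", "").split())
    addressed_to_me = "you" in words and bool({"me", "my"} & words)
    if addressed_to_me and words & HAPPY_JOY_LIGHT:
        return f"Thank you! What a {NICE_GOOD_PLEASANT[0]} thing to say!"
    return "I'm not sure how to respond to that."

print(respond("you are the sunshine of my life"))
# -> Thank you! What a lovely thing to say!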

None of this is trivial. There are big challenges here; some can be dealt with by throwing a lot of computing power at the problem, some require a lot of cunning in design. For instance, a computer might need to interpret a statement that has two contextual clues pointing in opposite directions; figuring out how to handle that interaction is tough. But I don't see anything that's outright impossible, and there's already proof-of-concept for a lot of this.
 
BT, I think we may be at an impasse here. On the one hand, it seems that this may be a case where the computer's "thought" process can be reduced to something like "niceness-related syntax detected, launch happy-face response." What I am looking for is something more along the lines of "What a beautiful paradox." It may be possible that in the long run, the first will gradually converge on the second.

Or, it may be like the paradox of squaring the circle, which fascinated medieval thinkers. Some argued that if you take a regular polygon and keep increasing the number of sides, it will eventually converge on a circle. Nicholas of Cusa said, No. It may fool the eye, but in fact, ontologically you are getting further and further from an actual circle, which is characterized by the absence of singularities (in the normal, mathematical sense.) A circle is completely continuous, no angles to change the direction of the curvature. An attempt to mimic continuity with an infinity of vertices is doomed from the start. I can't help but suspect that an effort to do an end run around a paradox with an extremely large number of logical "corrections" will lead to a similar cul-de-sac. But I really don't know the answer.
 
Anything that helps reduce homo sapiens' awful hubris is fine by me.

I've been playing Go for years, and I don't care. I even included it in one of my stories here a few years back. I don't play it to prove how smart I am. Nor do I run to prove how fast I am.
 
Won't we feel foolish when The Singularity devours humanity within a generation? (Yeah, we can get at least a dozen good years out of our new RV.) The Fermi Paradox solution is simple: other civilizations destroyed themselves, maybe via nuclear war or uncontrolled pathogens or environmental ruin or rogue warbots -- or maybe their own Singularities did the trick. Vinge's 1993 paper hints that we'll have no 2032 elections. Whew. [snide political remarks deleted]

Vinge's 1986 novel Marooned in Realtime suggests The Singularity results not merely from hyper-AI but a whole slew of developing technologies, exponential curves building on each other, but especially much faster and wider communications. Maybe digital neural implants providing effective telepathy (or hive-mind-edness) will push humanity over the edge of insane cyborg genius. Borg-brains can invent thimble-sized fusion reactors, impenetrable force and stasis fields, faster-than-light travel, time machines, whatever -- and (in the novel) everybody (except those Left Behind for various reasons) then vanishes. That is The Singularity. Destruction or transcendence? Does it matter?
 
BT, I think we may be at an impasse here.

Possibly so. I don't seem to end up agreeing with you very often, but I do enjoy the discussions.

Or, it may be like the paradox of squaring the circle, which fascinated medieval thinkers. Some argued that if you take a regular polygon and keep increasing the number of sides, it will eventually converge on a circle. Nicholas of Cusa said, No. It may fool the eye, but in fact, ontologically you are getting further and further from an actual circle, which is characterized by the absence of singularities (in the normal, mathematical sense.) A circle is completely continuous, no angles to change the direction of the curvature. An attempt to mimic continuity with an infinity of vertices is doomed from the start. I can't help but suspect that an effort to do an end run around a paradox with an extremely large number of logical "corrections" will lead to a similar cul-de-sac. But I really don't know the answer.

Indeed. The notion of convergence requires a measure, and if people aren't using the same measure they'll come to different conclusions.
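(To put a number on that, with arithmetic of my own rather than Cusa's: measured by perimeter, the inscribed n-gon does converge on the circle; measured by counting corners, it never gets any closer.)

% Perimeter of a regular n-gon inscribed in a circle of radius r:
P_n = 2 n r \sin\!\left(\frac{\pi}{n}\right)
% Since \sin(\pi/n) \approx \pi/n for large n, the perimeter converges to
% the circumference of the circle:
\lim_{n \to \infty} P_n = 2\pi r
% ...while the number of vertices, n, simply diverges, so "how close is this
% to a circle?" depends entirely on which quantity you choose to measure.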

I like the analogy, but let me take it a little further: agreed, no matter how many sides we add to a polygon, we'll never get a circle. But are we sure that the shape we're trying to emulate really is a circle in the first place? Or could it perhaps be another many-sided polygon?

I paint occasionally (nothing sophisticated; miniature figures for gaming). At first, those works always feel like raw materials: a piece of plastic, a piece of plastic coated in white paint, a piece of white-coated plastic with a patch of red, etc. etc. But at some point there's a transition, and suddenly I'm no longer looking at it as a combination of individual elements; even if it's not completely finished, I can see it as a coherent whole. I think human intelligence is something of that sort: a bunch of little physical systems, that have been put together to form something wonderful.

I would be interested to hear how people feel about this video (0:45-0:55). I know it's "only" a robot, but I felt very uncomfortable watching that.
 
I think we debated something about whether culture implies morality a while back?

I did a little archeology, and found that we were both participants in this free-for-all, although I wouldn't say we debated one another; it seemed that we were on the same side for the better part of the time. I was impressed that you knew who Smedley Butler was, and you seemed to make a snide comment about the British Empire, which is a sure-fire way to warm the cockles of my heart.
 
I would be interested to hear how people feel about this video (0:45-0:55). I know it's "only" a robot, but I felt very uncomfortable watching that.


Yes, it made me uncomfortable too. On thinking about it, it made me feel worse than when I crush an ant that gets into somewhere I don't want it. Which bothered me some more: the mechanical analog of a dog is eliciting more emotion than the very-much-alive ant. Hadn't thought about that before.
 
I would surmise that the discomfort comes from the association with a real dog being kicked, due to the canine shape and movements of the automaton. It is, ironically enough, somewhat Pavlovian. If the robot had been fashioned to look more machine-like, like say, a miniature Volkswagen, I don't think you would have the same emotional reflex.

Won't we feel foolish when The Singularity devours humanity within a generation? (Yeah, we can get at least a dozen good years out of our new RV.) The Fermi Paradox solution is simple: other civilizations destroyed themselves, maybe via nuclear war or uncontrolled pathogens or environmental ruin or rogue warbots -- or maybe their own Singularities did the trick. Vinge's 1993 paper hints that we'll have no 2032 elections. Whew. [snide political remarks deleted]

I think that the greatest danger of the apocalypse comes from the growing dominance of Wikipedia over internet search engines. What can we learn from these robots?

Oh, and yes, nuclear war. That's a pretty serious danger, too. But I don't blame computers for that one.
 
I did a little archeology, and found that we were both participants in this free-for-all, although I wouldn't say we debated one another; it seemed that we were on the same side for the better part of the time. I was impressed that you knew who Smedley Butler was, and you seemed to make a snide comment about the British Empire, which is a sure-fire way to warm the cockles of my heart.

Ah, then it may be my memory that's at fault. Encroaching middle age...

Yes, it made me uncomfortable too. On thinking about it, it made me feel worse than when I crush an ant that gets into somewhere I don't want it. Which bothered me some more: the mechanical analog of a dog is eliciting more emotion than the very-much-alive ant. Hadn't thought about that before.

After posting this I remembered an anecdote I heard a while back: some years ago, a guy named Mark Tilden demonstrated a mine-clearing robot out at the Yuma proving grounds. It was designed like a centipede; when it found a mine it'd stamp on it with one foot, the mine would blow its foot off, and it would adapt its gait to compensate for the missing limb. Which means it moved a lot like a real centipede would if it was losing limbs one by one.

Eventually the colonel watching the demonstration called a halt because he couldn't stand to watch any more. I'm assuming you don't get to colonel by being overly squeamish, but it was just too much for him. So it's not just us here, and not just something as naturally sympathetic as a "dog".

I would surmise that the discomfort comes from the association with a real dog being kicked, due to the canine shape and movements of the automaton. It is, ironically enough, somewhat Pavlovian. If the robot had been fashioned to look more machine-like, like say, a miniature Volkswagen, I don't think you would have the same emotional reflex.

Not as easily, no. I think it could be done, with a bit of work; people are quite eager to anthropomorphise even the least-human of objects if they show human-like behaviour. (cf: R2D2, or for a VW-specific example, the "Love Bug" movies.) I suspect it's because our brains have evolved for interaction with other humans - maybe also dogs, they've been with us quite a while - and it's easier to fit other things into that framework than to develop a new one.

It's easier to tell ourselves that it's just a machine tricking us with its resemblance to something living... but if we arrived on a new planet whose inhabitants had robotics advanced enough to emulate their own pets, how would we tell the difference between the things that have genuine feelings and the ones that just simulate them? Is there a difference?


That reminds me of a SF story I've been meaning to write here. Futuristic setting, couple gets engaged, then she gets posted somewhere far away with limited communication, he has to remain behind.

So they commission robot doubles, built to look just like the two of them and programmed from a brain-scan to behave like the originals (insert hand-waving here for "why don't they just send the robots?") His double goes with her, her double stays with him, so they don't get lonely.

Over time, people (bots included) change. What happens to the relationship as the doubles and the originals diverge?
 
Not as easily, no. I think it could be done, with a bit of work; people are quite eager to anthropomorphise even the least-human of objects if they show human-like behaviour.
It's very much in vogue nowadays, and it annoys me, curmudgeon that I am. The AI people are telling us that we are like computers, while the Animal Rights people are telling us that we are like animals. To me, animals are rather like computers (they have a genetically programmed response to most stimuli) and we are the exceptions, but if you say this, people start to go on about "hubris" and I begin to smell misanthropy, like the "deep ecology" people or Prince Philip, the ones who pray for pandemics to cull the human herd.
 
It's very much in vogue nowadays, and it annoys me, curmudgeon that I am. The AI people are telling us that we are like computers, while the Animal Rights people are telling us that we are like animals. To me, animals are rather like computers (they have a genetically programmed response to most stimuli) and we are the exceptions, but if you say this, people start to go on about "hubris" and I begin to smell misanthropy, like the "deep ecology" people or Prince Philip, the ones who pray for pandemics to cull the human herd.

Would you say that our tendency to anthropomorphise things is genetically programmed? ;-)

(Also, are you thinking of Prince Charles? Phil has plenty of flaws, but I haven't heard that particular line from him. Charles is the "green in a way that only obscenely rich people can afford to be green" one.)
 
Would you say that our tendency to anthropomorphise things is genetically programmed? ;-)
Indeed I would. But there is a new version which I have seen arise during my lifetime, where instead of human qualities being projected upon non-human creatures, it seems to be flowing in the reverse direction.

(Also, are you thinking of Prince Charles? Phil has plenty of flaws, but I haven't heard that particular line from him. Charles is the "green in a way that only obscenely rich people can afford to be green" one.)

Chuck is a doofus, without a doubt. But here's Phil:

"In the event that I am reincarnated, I would like to return as a deadly virus, to contribute something to solving overpopulation (1988)"
 
Would you say that our tendency to anthropomorphise things is genetically programmed? ;-)

(Also, are you thinking of Prince Charles? Phil has plenty of flaws, but I haven't heard that particular line from him. Charles is the "green in a way that only obscenely rich people can afford to be green" one.)

I suspect there's a flaw in the reportage.
He's very aware of the planet and how man is fucking it up.
 
Oh dear. I hadn't heard him comment on that particular topic before, but the general level of tact sounds like Phil.

It's a favorite theme for him. I think he has been quoted three different times, using essentially that same formulation. He and Prince Bernhard of the Netherlands co-founded the WWF (not the wrestling federation), not so much because they are fond of critters, but because they dislike humans. And they had some other, more pecuniary motives as well.
 