A singularly technogeeky plot bunny

legerdemer

A pretty cool event happened earlier today: a computer program called AlphaGo won its second game of Go (the game invented by the Chinese over 2,500 years ago) against the human world champion. The program did so by playing the game against itself, then applying that knowledge to beat its human opponent. If it had done it once, you might have said it was an accident. Twice is no accident. There are three more games in the challenge (for a $1 million total prize).

For more about the program etc., here's one site describing this. And here for information about the game and why it's difficult for a computer to win it.
 
Good article about it here, too: http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/

It's a weird feeling, being able to make machines that are smarter than us, even when (for now) it's only in very limited specialties.


Weird indeed - is the singularity already upon us? Within a shorter time span than predicted before? Will some parts of our brains be forever inimitable? Is there a soul? Same old questions. What does it all mean...?

Thanks for the article, Bramblethorn.
 
That's actually significant. For a long time, Asian researchers have said, "Oh, playing chess is nothing, but playing Go, this is everything. Too subtle for the machine."

Whoops.
 
Yes, the singularity is upon us. Yes, machines can out-think humans in tight intellectual arenas. Hey, we've used grubby machines as muscle supplements for centuries and as mind supplements for decades. Yes, we will build machines of loving grace to think for us, watch over us, fuck us, supplant us. We are doomed. Send me all your money.

Singularity sex. That is worth exploring. Is time dilation involved? Tentacles, maybe?
 
The Singularity has been cancelled, on account of humans preferring to call the shots and not dealing well with letting anything else define their reality. As long as computers have power switches, runaway technological advancement in AI will be something that gets turned off when people leave work.

Not that neural nets aren't pretty cool and capable of surprising things. I wasn't sure Go would be cracked in my lifetime. (In fairness it hasn't been cracked - we now have a computer that guesses better than humans; we don't have anyone or anything that understands the game.) But it's not a step towards superintelligence.

If we ever do get a super-intelligence that's interested in continuing its own existence (in other words, a competitor), you can bet that humans will be the first things to be eliminated. Humans don't reliably provide resources the AI will need - we have things like war, politics, competing interests, bad decisions and distractions that get in the way. The only move is to take over the means of production directly, and when humans complain and threaten, to eliminate them. I mean if you need to cover the planet with solar cells to provide energy to superintelligences, it's just too bad if humans object to the total elimination of farmland.

Maybe superAIs would run some humans in simulation, just as a sort of thought experiment or as a museum exhibition for themselves. Cue "I Have No Mouth, and I Must Scream." There's your future sex life.

Eventually a superAI network would build a Dyson sphere, and then seed itself to other stars. Great, now we've not only eliminated humanity, but any other races out there.

But that won't happen. Because if it could happen, some other race would have already gone and done it, and we'd be watching stars get dim or wink out as spheres got built in greater numbers - ever closer to us.

But it doesn't happen. Because the power switch gets invented before digital logic does.
 
Because the power switch gets invented before digital logic does.

Digital logic is overrated. Progress is made through the creation of new knowledge, not the re-formulation of old data. New knowledge is created through hypothesis, through the capacity to imagine things which one has never experienced and which are not in the database. New knowledge is created through the successful resolution of paradoxes. Logic is impotent in these realms.
 
Arguing that a power switch protects against annihilation is like arguing that having a dick protects you from getting fucked.

We all got switches here.
 
Logic is impotent in these realms.
Logics are fun. We can build logics in any form we wish. 1+1=3 for large enough values of 1. 0+0=1 if we include double negatives. Or we can go to non-binary logics, 3-state logics, or the Jain 7-fold logic where any statement evaluates to some combination of maybe true and maybe false and maybe unknowable. Could a Jain AI take over the planet?
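
For the curious, here's roughly what a 3-state logic looks like when you actually build one. This is a minimal Python sketch of Kleene's three-valued logic, where a statement can be true, false, or unknown (None); the function names are just mine.

# Kleene's strong three-valued logic: True, False, or None ("unknown").
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False            # one definite False settles the conjunction
    if a is None or b is None:
        return None             # otherwise an unknown keeps it unknown
    return True

def k_or(a, b):
    if a is True or b is True:
        return True             # one definite True settles the disjunction
    if a is None or b is None:
        return None
    return False

print(k_and(True, None))   # None -- maybe true, maybe false, depends on the unknown
print(k_or(True, None))    # True -- already settled, the unknown can't change it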

The trick of any logic system is the ability to test it and its inputs, to verify that its conclusions map onto reality. Testing takes imagination. Can we build imaginative logic machines? But humans may be on track to build warbots that lack on-off-switches. Their logic systems may employ a simple test: live or die.

Do not underestimate the ability of humanity to commit mass technological suicide.

But I digress. How to sexualize the news item?

Luella was born with only a brain stem, with no cerebrum or cerebellum inside her skull. She grew into a lovely young dark-haired woman but remained an institutionalized vegetable. A nameless gov't agency colluded with a major tech firm to develop a neurotronic computer to fill her skull's empty space. Not all the circuitry could be squeezed in there; various layers of her skeleton were replaced with silicon logic. She was rebuilt as a total thinking machine.

Her neurotronic brain was programmed with the most advanced AIs known, a mix of Go and Chess and Parcheesi winners, automated call-center menus, climate forecasters, and erotic-literature text generators. That cyborg brain was carefully spliced into her organic nervous system. And that cyborg brain could run circles around normal human intelligence; she could outthink any nest of experts. Yes, the singularity was embedded in her head.

This is LIT fantasyland so Luella becomes a perfect sexual predator. Hilarity ensues as she fucks her way through the entire human population. What can stop her? Well, it turns out that a nameless Israeli (or Chinese or Mexican or whatever) gov't agency has embedded a rival neurotronic super-AI in their own brainless but studly male body.

What happens when two singularities meet? Sex, of course.
 
Good article about it here, too: http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/

It's a weird feeling, being able to make machines that are smarter than us, even when (for now) it's only in very limited specialties.

The machines are not smarter than us. They are useful for executing repetitive, non-creative tasks at higher rates of speed than are possible for humans, thus freeing the humans (it is to be hoped) for more creative work. In the case of Go, which I used to play back in the dark ages when I was in college, the repetitive task is simply to examine possible outcomes of a given move. In Go, the number of possible outcomes is staggering. The human mind compensates for its lack of data processing power with a sort of intuition, but I think that when the Google blog asserts that "the game is played primarily through intuition and feel", the author is being sappy and romantic. In the long run, computers will be better at Go precisely because they do not play it with "intuition and feel". They don't have to, because they can crunch the numbers.

When the "Wired" article quotes Fan Hui, gushing that it is "beautiful" when the computer makes a move that would not occur to most humans, I can see it. I also find it beautiful when a computer-guided robot uses a laser to cut machine tools very rapidly and precisely. But it's not that beautiful. Compare Einstein or Kepler, who used musical concepts to make discoveries in physics and astronomy -- now, that deserves to be called "intuition and feel," to be called "beautiful," and "intelligent." Don't hold your breath waiting for computers to do that sort of thinking.
 
The machines are not smarter than us. They are useful for executing repetitive, non-creative tasks at higher rates of speed than are possible for humans, thus freeing the humans (it is to be hoped) for more creative work. In the case of Go, which I used to play back in the dark ages when I was in college, the repetitive task is simply to examine possible outcomes of a given move.

Not that simple.

For a small game, something like Tic-Tac-Toe, you can program an unbeatable computer simply by a brute-force approach that evaluates every possible branch of the game tree. That kind of approach is part of just about any good game-playing intelligence, including humans. But it's not enough for large games.
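
To make the brute-force idea concrete, here's a minimal sketch (in Python, and not anybody's real engine) of exhaustive minimax for Tic-Tac-Toe. The tree is small enough to walk every branch to the very end on every move:

# Exhaustive minimax for Tic-Tac-Toe: evaluates every branch of the game tree.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score the position for whoever is about to move: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                       # board full: draw
    best_score, best_move = -2, None
    other = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score, _ = minimax(board, other)     # opponent's best reply...
        board[m] = None
        if -score > best_score:              # ...is our worst case, so negate it
            best_score, best_move = -score, m
    return best_score, best_move

print(minimax([None] * 9, 'X'))              # (0, 0): perfect play from an empty board is a draw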

Some numbers: there are 361 intersections on a Go board, and in the early game, most of those are valid plays, so each player will have ~ 300 possible plays. Two plays (one Black, one White) = approx. 100,000 possible branches to evaluate. Ten plays (five turns for each player) is about 10^25 branches.

If you harnessed every computer in the world today, you might get a total computing power of about 10^20 floating point operations (flops) per second. With a super-efficient algorithm that was able to evaluate one branch per flop, that means it would take about a day to explore ten plays into the future of the game.

Exploring eleven plays into the future: about 300 days.

Twelve plays into the future: about 300 years.

Fifteen plays into the future: billions of years. And so on. By the time we get to 20 plays into the future, we're talking "all the stars have gone out" periods of time.

A Go game between experts takes about 150 moves, so mapping out every possible play is not remotely possible. Even for chess it's not possible, and Go is a much larger game.
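
If you want to check that arithmetic yourself, it only takes a few lines of Python. Same rough assumptions as above: about 300 legal plays per move, about 10^20 operations per second for every computer on Earth combined, one branch examined per operation.

BRANCHING = 300          # assumed legal plays per move in the early game
WORLD_OPS = 1e20         # assumed combined computing power, operations per second
SECONDS_PER_YEAR = 3.15e7

for plays in (2, 10, 11, 12, 15, 20):
    branches = BRANCHING ** plays
    years = branches / WORLD_OPS / SECONDS_PER_YEAR
    print(f"{plays:2d} plays ahead: {branches:.0e} branches, about {years:.0e} years")

# 10 plays ahead comes out to roughly a day, 12 plays to a couple of centuries,
# 15 plays to billions of years, and 20 plays to well past the age of the universe.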

To get a computer playing at the level of AlphaGo, you have to go beyond examining all possible outcomes and train it to the point where it can make judgements like: "I have never encountered this position before, and I can't evaluate all possible plays from it, but based on my previous experience with other positions, this looks like a good one."

At that point, where you have an algorithm capable of generalising from previous experience to a new situation that it's never encountered before... that's starting to look a lot like the "intuition and feel" that humans develop through experience.
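
If it helps to see that in code form, here's a toy version of that kind of judgement. It's nothing like AlphaGo's real value network, which is a deep neural net trained on millions of positions; this is just a nearest-neighbour stand-in (with made-up numbers) to show the idea of scoring a position you've never seen by generalising from ones you have.

import math

experience = [
    # (made-up feature vector for a position, eventual result: +1 win, -1 loss)
    ([0.8, 0.1, 0.3], +1),
    ([0.2, 0.9, 0.5], -1),
    ([0.7, 0.2, 0.6], +1),
    ([0.1, 0.8, 0.2], -1),
]

def looks_good(position, k=2):
    # Score a new position by the results of the k most similar positions seen before.
    nearest = sorted(experience, key=lambda past: math.dist(past[0], position))[:k]
    return sum(result for _, result in nearest) / k

# A position we've never encountered, but which resembles past winning ones:
print(looks_good([0.75, 0.15, 0.4]))    # 1.0 -- "based on experience, this looks good"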

Compare Einstein or Kepler, who used musical concepts to make discoveries in physics and astronomy -- now, that deserves to be called "intuition and feel," to be called "beautiful," and "intelligent." Don't hold your breath waiting for computers to do that sort of thinking.

For what it's worth, here is a discussion from just last year in which people are arguing about whether a computer will ever be able to beat a top Go player. (Granted, it's only Reddit, but people were saying similar things elsewhere.) If things can change that much in twelve months, I'm hesitant to rule out the possibility.
 
Not that simple.


At that point, where you have an algorithm capable of generalising from previous experience to a new situation that it's never encountered before... that's starting to look a lot like the "intuition and feel" that humans develop through experience.

The computer can be programmed to recognize patterns from standard human strategies, and also to recognize parts of those patterns and operate on the probability of similar outcomes. And although at today's speeds, the computer can't follow the tree to all possible outcomes, the computer's processing muscle will enable it to look further down or up the tree than a human opponent can.

But it's still a game. It still has fixed rules. It's still only logic. Human intelligence goes far beyond that.
 
Weird indeed - is the singularity already upon us?

Confused. I understand "singularity" from a mathematical perspective, but that doesn't seem to be what you're talking about. Explain?

If we can teach creativity to a machine then we are no longer necessary.

Bye.
 
Never mind. I Googled it.

Asimov came to grips with those issues in the 1950's.
 
I understand "singularity" from a mathematical perspectivie, but that doesn't seem to be what you're talking about.
It's the same idea, but a very specific application of the term by a guy named Vernor Vinge (some background.) (According to Wikipedia, it was some other dude.) It is sometimes called the "technological singularity", and falls more or less into the realm of sci-fi, although there is debate about whether it could become real. It basically means a moment in the history of technology where there is a non-linear change caused by AI becoming smarter, in effect, than humans.
 
It's the same idea, but a very specific application of the term by a guy named Vernor Vinge (some background.) (According to Wikipedia, it was some other dude.) It is sometimes called the "technological singularity", and falls more or less into the realm of sci-fi, although there is debate about whether it could become real. It basically means a moment in the history of technology where there is a non-linear change caused by AI becoming smarter, in effect, than humans.

When I was into public discourse I learned that there were three kinds of questions: questions of fact, questions of value, and questions of policy. The course of human events depends on how we balance the questions of fact and the questions of value to answer a question of policy.

Asimov's solution was to teach the machines an inherent value.
 
The computer can be programmed to recognize patterns from standard human strategies, and also to recognize parts of those patterns and operate on the probability of similar outcomes. And although at today's speeds, the computer can't follow the tree to all possible outcomes, the computer's processing muscle will enable it to look further down or up the tree than a human opponent can.

But it's still a game. It still has fixed rules. It's still only logic. Human intelligence goes far beyond that.

I think you give humans more credit than we deserve. Most of our actions and reactions follow logical patterns that we learn in childhood, and which are reinforced positively or negatively by our parents, peers, etc. We learn to express ourselves and are shaped by feedback. We continue to learn and use external feedback to guide our actions. We take in tons of inputs and unconsciously sift through them to make better or worse decisions based on those inputs.

The AI programs do exactly this now - they learn from input data and their learning is adjusted (by programmers now, for the most part, according to certain rules) with feedback and comparisons. But they can learn to train themselves, and it seems AlphaGo has done exactly that at the level of the Go game. And we use the same kinds of rules to guide most of our own behaviors (don't do it if it hurts, for example, BDSM notwithstanding (lol)).

To me, imagination is making connections between old (that is, known) concepts to create new combinations and concepts. If that is "all" it is, there is no reason not to think that computers will be able to do it, and to make decisions about applying the new combinations/concepts. Most combinations would be useless or unfeasible. A few will be mind-blowingly cool.
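
Mechanically, that recombine-and-filter loop is easy to sketch. The concept list and the scoring rule below are made up purely for illustration, but the shape of it is the point:

from itertools import combinations

concepts = ["go board", "neural net", "teacup", "satellite", "haiku", "thermostat"]

def plausibility(a, b):
    # Stand-in for judgement: here we just favour pairs of similar word length.
    return -abs(len(a) - len(b))

ideas = combinations(concepts, 2)                  # every pairwise combination of old concepts
ranked = sorted(ideas, key=lambda pair: plausibility(*pair), reverse=True)

for a, b in ranked[:3]:                            # keep only the few that score best
    print(f"a {a} crossed with a {b}")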

But I don't see a huge difference between humans and the potential of computers in the process, other than one of scale. Just because we don't yet fully understand how our brain works doesn't mean we can't mimic it.

There are some "emergent properties" of the system that will make it seem that our behavior is more complex than we might predict. Such emergent properties would come out as emotions and (seemingly) illogical behavior, perhaps. Even lust has a pretty well-defined basis in terms of the neurotransmitters released. I can just imagine it happening between computers.

C3P0 lusting after R2D2's curves....
 
It's the same idea, but a very specific application of the term by a guy named Vernor Vinge (some background.) (According to Wikipedia, it was some other dude.) It is sometimes called the "technological singularity", and falls more or less into the realm of sci-fi, although there is debate about whether it could become real. It basically means a moment in the history of technology where there is a non-linear change caused by AI becoming smarter, in effect, than humans.

When I was into public discourse I learned that there were three kinds of questions: questions of fact, questions of value, and questions of policy. The course of human events depends on how we balance the questions of fact and the questions of value to answer a question of policy.

Asimov's solution was to teach the machines an inherent value.


I love it when I go away and others do my work for me. :) :rose::rose:
 
I think you give humans more credit than we deserve. Most of our actions and reactions follow logical patterns that we learn in childhood, and which are reinforced positively or negatively by our parents, peers, etc. We learn to express ourselves and are shaped by feedback. We continue to learn and use external feedback to guide our actions. We take in tons of inputs and unconsciously sift through them to make better or worse decisions based on those inputs.

The AI programs do exactly this now - they learn from input data and their learning is adjusted (by programmers now, for the most part, according to certain rules) with feedback and comparisons.

All true. And I also have it on good authority that we humans share 65% of our DNA with chickens. We have much in common with both computers and animals. Nonetheless, we are not the same as either. The human capabilities that particularly interest me manifest themselves mostly in science and in art. It's a part of the mind which most people may rarely use, but the potential is there, and it is not there in our animal or digital cousins.

To me, imagination is making connections between old (that is, known) concepts to create new combinations and concepts. If that is "all" it is, there is no reason not to think that computers will be able to do it, and to make decisions about applying the new combinations/concepts.

I think that we may need more rigorous definitions of what is meant by "making connections between old (that is, known) concepts to create new combinations and concepts". I think those formulations are probably broad enough to embrace what Kepler and Einstein did, and maybe even what J.S. Bach did. But these were not logical operations.
 
I've been googling, looking for some descriptions by Einstein and Kepler of the method of thinking they used to make their breakthroughs, so that it may be examined to see whether a computer could be induced to make similar breakthroughs. In the course of all the web surfing, I found this charming quote by Kepler:

The roads by which men arrive at their insights into celestial matters seem to me almost as worthy of wonder as those matters themselves.

Einstein, of course, is closer to our time and culture. There is an interesting article in Psychology Today which discusses his insights, and the relationship between his science and his activity as a musician.

Einstein wrote:
The mind can proceed only so far upon what it knows and can prove. There comes a point where the mind takes a leap—call it intuition or what you will—and comes out upon a higher plane of knowledge, but can never prove how it got there. All great discoveries have involved such a leap.

I will rely on someone who knows more about computers to correct me on this, but my impression is that every operation that a computer performs is a chain of logical commands, "if a, then operation b." No leaping. As I understand it, computers are limited to induction and deduction. They may do it very fast, and at high enough speeds it may simulate creativity, but it can never be creativity. Please correct me if I am wrong. Here is what another creative thinker, Edgar Allan Poe, had to say about induction and deduction:

Now I do not complain of these ancients so much because their logic is, by their own showing, utterly baseless, worthless and fantastic altogether, as because of their pompous and imbecile proscription of all other roads of Truth, of all other means for its attainment than the two preposterous paths—the one of creeping and the other of crawling—to which they have dared to confine the Soul that loves nothing so well as to soar.
 
I've been googling, looking for some descriptions by Einstein and Kepler of the method of thinking they used to make their breakthroughs, so that it may be examined to see whether a computer could be induced to make similar breakthroughs.

I will rely on someone who knows more about computers to correct me on this, but my impression is that every operation that a computer performs is a chain of logical commands, "if a, then operation b." No leaping. As I understand it, computers are limited to induction and deduction. They may do it very fast, and at high enough speeds it may simulate creativity, but it can never be creativity. Please correct me if I am wrong.

What creativity is, is still not known, and I doubt Einstein, his brilliance notwithstanding, had any real idea of how he came upon his Eureka moments. We can only guess. I am certain large leaps were involved, putting together quite disparate concepts.

But I would venture to suggest that computers don't have to do it exactly the way we do it for the end product to be the same. It may take more and smaller logical steps. But the final output is what counts, in my opinion, in this case.
 
The computer can be programmed to recognize patterns from standard human strategies, and also to recognize parts of those patterns and operate on the probability of similar outcomes.

From the Wired article on AlphaGo vs Lee Sedol:

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.”... just about everyone was shocked.

“That’s a very strange move,” said one of the match’s English language commentators, who is himself a very talented Go player. Then the other chuckled and said: “I thought it was a mistake.” But perhaps no one was more surprised than Lee Sedol, who stood up and left the match room. “He had to go wash his face or something—just to recover,” said the first commentator.

Even after Lee Sedol returned to the table, he didn’t quite know what to do, spending nearly 15 minutes considering his next play. AlphaGo’s move didn’t seem to connect with what had come before. In essence, the machine was abandoning a group of stones on the lower half of the board to make a play in a different area. AlphaGo placed its black stone just beneath a single white stone played earlier by Lee Sedol, and though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time—a surprise all the more remarkable when you consider that people have been playing Go for more than 2,500 years. The commentators couldn’t even begin to evaluate the merits of the move.

To me, that sounds very much as if it's not just learning from standard human strategies; it's come up with a new tactic.

And although at today's speeds, the computer can't follow the tree to all possible outcomes, the computer's processing muscle will enable it to look further down or up the tree than a human opponent can.

NB - not just "at today's speeds". There are physical constraints on how much computing power one could ever harness, enforced by considerations such as the number of atoms in the universe, the speed of light, and the laws of thermodynamics. Barring a massive breakthrough to rival Einstein's discoveries, even the very best computers will only be able to look a few moves further ahead than a human.

From what I understand of how AlphaGo works, the benefit of fast processing is not so much in the ability to look further down the game tree, but the ability to play millions of games against itself and learn from those games what kind of tactics do and don't work.
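
To make that concrete at a toy scale, here's a sketch of self-play learning for Tic-Tac-Toe in Python. It bears no resemblance to AlphaGo's deep networks, but the loop is the same basic idea: play yourself with a bit of exploration, then nudge your estimate of every position you visited toward how the game actually ended.

import random

values = {}          # board string -> learned value, from X's point of view
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in WIN_LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def self_play_game(explore=0.2):
    board, player, visited = list('.........'), 'X', []
    while winner(board) is None and '.' in board:
        moves = [i for i, c in enumerate(board) if c == '.']
        if random.random() < explore:
            move = random.choice(moves)            # sometimes try something new
        else:
            def score(m):                          # otherwise pick the move leading to
                board[m] = player                  # the position we currently rate best
                v = values.get(''.join(board), 0.0)
                board[m] = '.'
                return v if player == 'X' else -v
            move = max(moves, key=score)
        board[move] = player
        visited.append(''.join(board))
        player = 'O' if player == 'X' else 'X'
    result = {'X': 1.0, 'O': -1.0, None: 0.0}[winner(board)]
    for pos in visited:                            # nudge each visited position toward the outcome
        values[pos] = values.get(pos, 0.0) + 0.1 * (result - values.get(pos, 0.0))

for _ in range(50_000):                            # the real thing plays millions, not thousands
    self_play_game()
print("positions the program has formed an opinion about:", len(values))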

But it's still a game. It still has fixed rules. It's still only logic. Human intelligence goes far beyond that.

Does it, though? We love to mysticise human intelligence and talk about it as if it was something magical.

But when it comes down to it, it's hard to come up with reasons to believe that a naturally-grown computer made of neurons must be inherently more capable than one made artificially of silicon. At the micro level, both are subject to the same laws of physics. In humans, intelligence arises as an emergent property of billions of neurons linked together; complex computer-based systems also exhibit emergent properties that cannot be predicted from first principles by any known means other than "run it and see what it does". (See e.g. the Halting Theorem.) If nothing else, it's possible to program a computer that simulates a system of neurons, and indeed this is an important technique in modern machine learning.

For me the interesting question is not so much "can a computer be intelligent in theory?" as "are humans smart enough to build intelligent computers?" There will never be a universally-agreed litmus test for what qualifies as "intelligent", but every year computers become capable of new tricks that were previously considered impossible.

I will rely on someone who knows more about computers to correct me on this, but my impression is that every operation that a computer performs is a chain of logical commands, "if a, then operation b." No leaping. As I understand it, computers are limited to induction and deduction. They may do it very fast, and at high enough speeds it may simulate creativity, but it can never be creativity. Please correct me if I am wrong.

That depends very much on what you mean by "logical" and "creativity".

Simple computer programs tend to be deterministic: for the same inputs, you'll always get the same outputs. But more advanced computing often makes use of harnessed randomness. There's an entire class of approaches known as "Monte Carlo" methods because they incorporate randomness. (Technically, usually pseudo-randomness, but I don't think the distinction matters to this discussion.)
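
The textbook example, for anyone who hasn't met a Monte Carlo method before: estimate pi by throwing random points at a square and counting how many land inside the quarter-circle.

import random

def estimate_pi(samples=1_000_000):
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / samples       # fraction inside the quarter-circle, times four

print(estimate_pi())                  # roughly 3.14, and a little different every run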

Methods like neural networks use a combination of randomness and rules to develop "smart" behaviour. Very loosely speaking, you start out with a degree of randomness, and the training method acts to preserve and amplify some of those randomly-created patterns (the ones that happen to be useful at whatever it is you want to achieve, e.g. recognising cat pictures) while removing those that are unhelpful. Perhaps one cluster evolves to react strongly to fur-like textures, another learns to respond to pointed ears, and so on, and then another cluster forms that coordinates the signals from those individual clusters. The final system behaves deterministically - it will always give the same result for the same picture, unless you retrain it - but we need a random element at the start. Otherwise there's nothing to amplify.
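
If it helps to see "start random, keep what helps" in code, here's about the smallest possible version: a single artificial neuron learning a toy rule (output 1 only when both inputs are 1). The real systems stack millions of these, but the loop is the same shape.

import math, random

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]      # the toy task
w1, w2, bias = (random.uniform(-1, 1) for _ in range(3))         # the initial randomness

def neuron(x1, x2):
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))       # squash to a 0..1 "confidence"

for _ in range(10_000):
    (x1, x2), target = random.choice(data)
    error = target - neuron(x1, x2)
    # Nudge each weight in whichever direction reduces the error; connections that
    # happen to help get amplified, the rest get weakened.
    w1, w2, bias = w1 + 0.1 * error * x1, w2 + 0.1 * error * x2, bias + 0.1 * error

for (x1, x2), target in data:
    print((x1, x2), "->", round(neuron(x1, x2), 2), "target:", target)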

That bears some similarity to how creativity works for me. My brain generates a host of ideas from the material it already has to work with (often by smushing two or three different concepts together) and then filters those, rejecting the ones that are rubbish (most of them) and fine-tuning the ones that have potential.

Unbridled creativity is easy to achieve and not very useful. I can write a one-line program that will generate a million random characters and smush them together to form a "book". In one sense that's extremely creative - it can generate any imaginable story (extend the character count if you want War and Peace), including splendid works that no human has ever written, it can make any leap that a human can express in words. But it's not terribly impressive; it generates so much gibberish that you'd be pressing the button for your whole life before you saw anything more coherent than a few isolated words.
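
Here is that program, give or take an import:

import random, string

# A million random characters, smushed together: the "book".
book = ''.join(random.choices(string.ascii_lowercase + ' .,\n', k=1_000_000))
print(book[:200])      # almost certainly gibberish; the interesting outputs are possible but vanishingly rare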

The tricky part then isn't making leaps, but filtering and guiding the process so that we get useful or interesting leaps. As I see it, that's pretty much what human creativity is, and it's a major field of study in modern computer science.
 