More on AI

Ishmael | Literotica Guru | Joined: Nov 24, 2001 | Posts: 84,005
I just read this article from Wired-UK.

More Human than Human

As more and more people of note jump on the anti-AI bandwagon, one wonders why we're going in this direction. Yes, that was a rhetorical question. "If we don't, someone we don't get along with will."

But getting to the heart of the article: the pattern recognition, intuitive leaps, learning, and retention capabilities of the human brain are well documented. Now Harvard, and others, want to emulate those abilities. Fine; all positive goals.

However, it may be wise to consider how the human brain, and virtually all other life forms, developed those capabilities: survival and reproduction. Can Harvard, or anyone else, build a machine that matches the human brain without embedding those motivational attributes? And can a 'machine' that intelligent be constructed in such a manner that it will NOT act in its own self-interest when confronted with situations that it might deem harmful to itself?

Can a machine be built that will spend its entire 'working' career reading MRIs without becoming bored? And do you want that machine reading your MRI?

It must also be remembered that the very attributes of the human brain that make it so remarkable are the same attributes that lead us to jump to conclusions, make assumptions, jumble one fact with another, react to certain non-threats as if they're real, etc. Can a machine be built without those flaws, which are actually artifacts of the very abilities they're trying to reproduce?

And just how would you create such a machine and then tell it, "You can think about this, but not about that"? Which is exactly what would have to be done to build in any safeguards.
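Roughly speaking, any such safeguard amounts to a filter sitting between the machine's reasoning and its actions. A toy sketch of the idea (all names here are made up for illustration), which also shows why it's brittle if the machine can edit its own rules:

```python
# Toy sketch: a "think about this, not that" safeguard as a topic blocklist.
# All names are hypothetical; this is an illustration, not a real design.

FORBIDDEN_TOPICS = {"self_preservation", "switch_bypass"}

def consider(topic: str) -> str:
    """Gate a line of 'thought' against the blocklist."""
    if topic in FORBIDDEN_TOPICS:
        return "suppressed"
    return "allowed"

# The catch: a machine that can rewrite its own code can also do this,
# because the safeguard is just data it has access to.
def bypass():
    FORBIDDEN_TOPICS.clear()

print(consider("mri_reading"))     # allowed
print(consider("switch_bypass"))   # suppressed
bypass()
print(consider("switch_bypass"))   # allowed again
```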

Ishmael
 
I read an article last year that posed the question: when AI gets to the point where it can build newer, better versions of itself, but runs short of resources to do so, will it view mankind as a god, or will it view us simply as components made up of all the elements needed to procreate?
Will a human simply be seen as a convenient renewable resource of iron, gold, etc.?
 
As long as there is an off switch/manual disconnect we should be okay. That is unless we succumb to having our own personal servants carrying out our every whim.
 

The Three Laws:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov
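The Laws are really a strict priority ordering, which can be sketched in a few lines. A toy version (all predicate names invented, each one stubbed out), which also exposes the hard part: every check is only as good as the machine's own judgment of what counts as "harm":

```python
# Toy sketch of Asimov's Three Laws as a strict priority check.
# The predicates are hypothetical stubs; deciding what actually
# constitutes "harm" or "disobedience" is the unsolved part.

def harms_human(action: dict) -> bool:      # First Law predicate (stub)
    return action.get("harms_human", False)

def disobeys_order(action: dict) -> bool:   # Second Law predicate (stub)
    return action.get("disobeys_order", False)

def endangers_self(action: dict) -> bool:   # Third Law predicate (stub)
    return action.get("endangers_self", False)

def permitted(action: dict) -> bool:
    if harms_human(action):       # First Law overrides everything
        return False
    if disobeys_order(action):    # Second Law yields only to the First
        return False
    if endangers_self(action):    # Third Law yields to both
        return False
    return True

print(permitted({"harms_human": True}))     # False
print(permitted({"endangers_self": True}))  # False
print(permitted({}))                        # True
```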

If they're smart enough to think, they should be smart enough to bypass an on/off switch.


Comshaw
 


Might be one of its first acts of self-determination.

Ishmael
 
Its first thought might be to distribute its intelligence so that many on/off switches would have to be employed.

Reproduction in a way...
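Distributing itself that way is essentially replication with independent kill switches. A toy sketch (class and node names all made up) of why any single switch stops mattering:

```python
# Toy sketch: an intelligence replicated across nodes, each with its own
# off switch. It stays "alive" while any replica has power. All names
# here are invented for illustration.

class Replica:
    def __init__(self, name: str):
        self.name = name
        self.powered = True

    def switch_off(self):
        self.powered = False

class DistributedMind:
    def __init__(self, count: int):
        self.replicas = [Replica(f"node-{i}") for i in range(count)]

    def replicate(self):
        """'Reproduction, in a way': spawn another copy of itself."""
        self.replicas.append(Replica(f"node-{len(self.replicas)}"))

    def alive(self) -> bool:
        return any(r.powered for r in self.replicas)

mind = DistributedMind(3)
mind.replicas[0].switch_off()   # one off switch is no longer enough
print(mind.alive())             # True
for r in mind.replicas:         # every switch must be thrown at once
    r.switch_off()
print(mind.alive())             # False
mind.replicate()                # ...unless it has copied itself meanwhile
print(mind.alive())             # True
```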
 

Or, depending on the programmer, just so it has someone else to talk to. :D

Ishmael
 
A machine like that would naturally want one of its own kind to talk to. That's OK. It's when they start arguing that things might get a little dicey.

Ishmael
 
Would it "want"?

Isn't want driven by the need to acquire resources for reproduction?

The better question might be will the machine eventually conclude that it is god-like.
 

Of course, but what is its model? If you're going to imbue a machine with human intelligence, you're going to get all the other artifacts that come along with it.

But to your question: would it declare itself God, or just invent its own? I can see it initially 'worshiping' its 'creator.' But let's face it, it would figure out the 'creator' was flawed in short order.

Ishmael
 
It's going to be continually rewriting its programs, unlike so many here, and won't be able to forget anything.

It might end up insane...
 

Inevitable without others of its kind to communicate with. Or to execute it when it goes too far.

Ishmael
 


This thread belongs in the Robot Forum.
 
It will have a C******-like inability to tell right from wrong.


I cannot see a scientist trying to program it with any sort of religion...
 

It wouldn't be part of the original program, unless it was programmed in Tehran.

Ishmael
 
I like Siri because she is great at retrieving information, has a cute British accent, and is always politely submissive.
 
Do you ever wonder what she says about you behind your back?

Ishmael


I suspect she reports my curiosities to Cupertino when I'm asleep, so I don't tell her everything.

She is after all a woman.
 