Robots: Friend or Foe?

Moridin187

Scientists at Economic Forum See Grim Future
Fri Feb 1, 1:12 PM ET
By Alan Elsner

NEW YORK (Reuters) - Scientists at the World Economic Forum predicted on Friday a grim future replete with unprecedented biological threats, global warming and the possible takeover of humans by robots.


........

Another threat posed by science revolves around the development of artificial intelligence which could eventually blur the distinction between humans and robots.

Rodney Brooks of the Massachusetts Institute of Technology said: "It is not too far-fetched to see a situation where we put implants into our brains before too long."

Brooks said humans would become more like robots as they implanted more and more technology into their bodies, while robots would be based on biological material and become semi-human in their own right.

Robots were already taking a greater role in warfare and might soon be capable of making their own battlefield decisions without human control, he said.

Good to know I'm not the only one who has trouble sleeping at night worrying about this
:rolleyes:

Seriously though, the idea of AI scares the bejesus out of me. After we created them and gave them a mind of their own, why would they possibly want to keep working for us?
 
anyone ever see that old SNL fake commercial with the guy from Law & Order selling "robot insurance" for senior citizens? that was the first thing that came to mind when i saw this thread title.

"robots steal old people's medicine"
 
I thought of Skynet first. Did you know there's an actual military contractor in the IT sector called DynCorp?
 
Moridin187 said:


Good to know I'm not the only one who has trouble sleeping at night worrying about this
:rolleyes:

Seriously though, the idea of AI scares the bejesus out of me. After we created them and gave them a mind of their own, why would they possibly want to keep working for us?

Science fiction writers have been exploring this question for a bit over fifty years now -- and relatively few of their stories feature rogue AIs/robots taking over the world and doing away with humans.

I personally don't worry about this, because computers will have to advance about ten times as far as they've come since ENIAC and UNIVAC before true AI is even close to possible. At the rate that computers are advancing, I expect that my great-great-grandchildren might have to worry about rogue AIs.
 
I personally like the idea of cybernetic implants... just sounds so space-age... I dunno... who cares about AI, maybe people won't be as stupid as in all the books, movies, etc. Who cares if we get run over by rogue AI; if it happens, it happens... someone's bound to make the mistake of creating one anyway (eventually).

and I really don't think it is as far off as great-grandchildren... 20-40 years-ish someone will come up with something. even if it is the most basic AI, it'll happen soon...

what I want to hear about is cloning... that HAS happened... and after the sheep nobody gives a shit anymore... I mean, what the hell... why does AI get more PR than cloning? AI is still a concept... cloning is here...

sorry though... midnight ramblings... ignore me if I made no sense.
 
Re: Re: Robots: Friend or Foe?

Weird Harold said:
I personally don't worry about this, because computers will have to advance about ten times as far as they've come since ENIAC and UNIVAC before true AI is even close to possible. At the rate that computers are advancing, I expect that my great-great-grandchildren might have to worry about rogue AIs.
I agree with your conclusions, but not with the reasons. Having been involved with some AI when I worked on some DOD projects, I came to the conclusion that it is not hardware power that is holding AI back; it is a fundamental lack of understanding of what intelligence is and is not.

There are quite a number of AI-type problems for which we have enough H/W power, but we are crawling along in our understanding of how to form the algorithms that solve them:

1) Natural speech recognition is one we still haven't quite solved, and that is just recognition: the computer recognizing certain sounds as certain words. Understanding what those words mean when put together into a phrase is a whole different game. Understanding what the computer should then do is yet another.

2) Vision. Computer scientists just discovered, to their amazement, that you can't simply attach cameras to a computer and expect it to see anything; most of animal vision takes place in the brain, not the eye! :rolleyes:
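To make that vision point concrete, here is a toy illustration of my own (nothing from the original posts): to a program, a camera frame is just a grid of brightness numbers, and even spotting a plain vertical edge takes explicit processing. The "seeing" happens in the code, not the sensor.

```python
def find_vertical_edges(image, threshold=50):
    """Return (row, col) positions where brightness jumps sharply
    between horizontally adjacent pixels."""
    edges = []
    for r, row in enumerate(image):
        for c in range(len(row) - 1):
            if abs(row[c + 1] - row[c]) >= threshold:
                edges.append((r, c))
    return edges

# A 4x6 "camera frame": dark on the left, bright on the right.
frame = [
    [10, 10, 10, 200, 200, 200],
    [10, 10, 10, 200, 200, 200],
    [10, 10, 10, 200, 200, 200],
    [10, 10, 10, 200, 200, 200],
]

print(find_vertical_edges(frame))  # [(0, 2), (1, 2), (2, 2), (3, 2)]
```

Real vision is many such steps stacked up, of course; the point is only that none of them come for free with the camera.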

I once attended a meeting where a presentation was given by a person working in another division of our company. He worked on flight simulator software - the kind that taught/tested fighter pilots. It was AI-based, like a very expensive video game that the pilots flew against. We got a bit of a lecture on what AI could and couldn't do - and that was in the mid 80s, when AI was all the rage.

AI is still very much in its infancy. A very advanced AI system still has less intelligence than the Bee Wolf:

http://www.earthlife.net/insects/images/hymenop/beewolf1.jpg

The Bee Wolf hunts bees and brings them back to its lair for its larvae. It finds the lair, a tiny hole in the sand, using pattern recognition: it memorizes the landmarks around it.

An entomologist once tested a Bee Wolf by moving some of the landmarks around. The Bee Wolf at first could not find its lair, so it backed off a bit and flew higher until it could recognize other landmarks and home in on where its lair should be - and it found it on the second try.

Our most sophisticated cruise missile systems cannot match this performance, and humans are more intelligent than the Bee Wolf - by many orders of magnitude.
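The Bee Wolf's trick can be sketched as landmark homing. This is a hypothetical toy version of my own, not a model from the article: memorize each landmark's offset from the lair, then later estimate the lair's position from whatever landmarks are still recognized.

```python
def estimate_lair(stored_offsets, seen_landmarks):
    """Average the lair positions implied by each recognized landmark.

    stored_offsets: {name: (dx, dy)} offset from lair to landmark, memorized earlier.
    seen_landmarks: {name: (x, y)} landmark positions observed right now.
    """
    guesses = [
        (x - stored_offsets[name][0], y - stored_offsets[name][1])
        for name, (x, y) in seen_landmarks.items()
        if name in stored_offsets
    ]
    if not guesses:
        return None  # nothing recognized: "fly higher" and look for more landmarks
    n = len(guesses)
    return (sum(x for x, _ in guesses) / n, sum(y for _, y in guesses) / n)

# Lair at (5, 5); three landmarks memorized relative to it.
offsets = {"pebble": (1, 0), "twig": (0, 2), "leaf": (-2, -1)}

# The entomologist moved the pebble, so it is no longer recognized;
# the remaining landmarks still agree on where the lair is.
print(estimate_lair(offsets, {"twig": (5, 7), "leaf": (3, 4)}))  # (5.0, 5.0)
```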

While H/W power increases will help, they are not the answer to AI's problems. We have a long way to go and we are moving at a very slow pace, because the bottleneck is not H/W power but software algorithms - and those depend on human understanding of what intelligence really is, something we have only the vaguest ideas about.

Our understanding of intelligence will not grow exponentially as does H/W computer power, but will grow in a linear fashion with occasional leaps of comprehension. I don't know when we will have thinking machines, but I doubt it will be in my lifetime.

I do think we are going to see hardware power continue to grow as fast as it has, probably a lot faster. Some have said we are at the limits of where Moore's Law applies, but I say we haven't seen anything yet. Quantum computers, biological computers, analog computers, massively parallel computers, nanotechnology - all of that tech is being worked on and more.

Much of that tech holds the promise not to increase the power of the hardware by ten times, a hundred times or a thousand times, but rather by a million times, maybe a billion or trillion times - and in the next few decades.

Even with all that power, if I chose to keep writing software, I wouldn't be out of a job - because I don't think AI will produce anything in the next few decades that can write software the way a human can.

If you look at how far we have progressed in the software world you will realize that we have maybe doubled our capability in the time that it took the hardware world to increase its capabilities by several orders of magnitude. I don't see that rate changing anytime soon regardless of what hardware does.

At this point, the most promising AI tech I have seen that is related to hardware power, is to give a computer the tools to teach itself, and then let it learn on its own.
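That last idea - give the machine the tools and let it teach itself - can be shown in miniature. The sketch below is my own illustration (a perceptron, a 1950s-era learning rule, not anything described in this thread): the program is shown examples of the OR function and adjusts its own weights after each mistake, so nobody ever writes the OR rule in by hand.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Learn weights for a single artificial neuron from labeled examples."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out                  # teach itself from its mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

# The OR function, given only as examples.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)

predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

A learner this simple can only pick up trivially linear rules, which is roughly where the state of the art stood for decades - the gap between this and "thinking" is the point of the post above.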
 
Coming from someone who has been in the robotics field for over 15 years: I don't think we have anything to worry about as far as AI taking over humans. The programmers would never allow that to happen, and a machine only knows what it's been programmed to know. Maybe I'm wrong, but I don't see them as a threat at all - then again, they have been my livelihood for a long time. I suppose we'll just have to wait and see how technology develops and take it from there.

ToddH:)
 
AI might be years down the line, but I think it will happen, maybe not in my lifetime. As Shy Tall Guy said, it's the fact that we don't understand how our own minds work that's the problem.

As for whether or not I would fear an AI: AIs are computers with their own thoughts - not merely programmed equations, but thoughts. That makes them as vulnerable to ego, pride, jealousy, etc. as we are. Just as I wouldn't put a country's nuclear arsenal in the hands of one man, I wouldn't put it in the hands of an AI.

As for cyberpunk tech, it's already here. HRH the Queen Mother is, by dictionary standards, a cyborg - human and non-human parts (she has at least one false hip).

Cloning is more my field, since I work in a genetics lab. Americans have been eating GM crops for years - had any of you noticed? Cloning is still in its infancy, but I see the technology improving around me every day. Processes which used to take months can be done in a few days, and computers sit in every corner of my lab making my work faster and easier. Cloning will come on in leaps and bounds over the next two decades, to the point where it will be possible to clone human limbs as well as anything else we want.

Should we fear these advances? Depends on who holds them. I don't fear nuclear power providing relatively clean power to a country; I don't like nuclear power built into ICBMs. Cloning could give new hearts, livers, kidneys and nervous tissue to repair you and your family. It can be used with GM to make pollution-resistant crops and bigger fruiting bodies. It could also be used to feed and heat the Matrix. Guess that's part of the fun of meeting the future with an open mind and a smile.
 
ToddHwrd said:
Coming from someone who has been in the robotics field for over 15 years: I don't think we have anything to worry about as far as AI taking over humans. The programmers would never allow that to happen, and a machine only knows what it's been programmed to know. Maybe I'm wrong, but I don't see them as a threat at all - then again, they have been my livelihood for a long time.
You are correct in assuming that current computers can only do what we tell them to do. But in the future, if computers learn for themselves (which I believe is the future of AI), then these machines will be able to think for themselves and can thereby act outside any rules we give them - theoretically.

There are all kinds of ifs/ands/buts that go along with that statement, but without working out all of those ramifications, we cannot simply rely on the fact that we currently program our robots - because in the future, the robots will program themselves.
 
Astro said:
As for whether or not I would fear an AI: AIs are computers with their own thoughts - not merely programmed equations, but thoughts. That makes them as vulnerable to ego, pride, jealousy, etc. as we are.
Not necessarily; those emotions come from biological and evolutionary processes that will probably not be present in how AIs are built and formed.

The tendency to anthropomorphize AI and robots is pretty much a mistake. Unless we engineer in emotions, AI probably won't have such things as ego, pride or jealousy - or, for that matter, the more troublesome drives of ambition and self-preservation, which could steer an AI to rid itself of troublesome humans.

I personally think any future AI will be more like Data than Lore.
 
How did Sam Waterston keep a straight face trying to scare old people about robot attacks? It's one of the funniest bits SNL has ever done.

Robots can kiss my natural human ass. I've got something for 'em if they show their ugly stainless steel puss around here... a water hose... unless it's one of those freaky hot fembots from Austin Powers... mmm, give me the Elizabeth Hurley model...

machine gun jubblies? How did I miss those, baby?

We're definitely approaching the end times, aren't we friends?
 
lol.
and i even have two eyes :) i've noticed that all the pictures you've posted today have been G rated. my libido thanks you ;)
 
Robots are evil. This fact is well known by anyone who has ever replaced the motor or encoder inside the base of a Kawasaki spot welding robot.
 
Dang double-posting, disconnecting browser...

I think this computer is intelligent and evil.
 
seXieleXie said:
hehe.
a sarcastic, misanthropic, alcoholic robot. classic!
:D
By "Classic" you're referring to the derivation from Marvin the Paranoid Android from HHGTG, right?

//
Hey, you got some knowledge there, and know how to disseminate it.
In other words: Bye the bye, that's a sky high eye on AI, shy guy.
 
Shy Tall Guy said:
You are corect in assuming that current computers can only do what we tell them to do. but in the future, if computers learn for themselves (which I believe is the future of AI), then these machines will be able to think for themselves and can thereby act outside any rules we give them - theoretically.

There are all kinds of ifs/ands/buts that go along with that statement, but without working out all of those ramifications, we cannot simply rely on the fact that we currently program our robots; because in the future, the robots will program themselves.

Any thoughts on how "Asimovian Robots" could be programmed with the "Three Laws of Robotics" if computers teach themselves? Would embedding them in the learning algorithms shape the learning process enough to make robots "safe"?
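One hedged sketch of how that question might be approached (my own invention - nothing here formalizes Asimov's actual Laws): keep the laws outside the learned part entirely, as a fixed filter that vetoes any action violating a hard rule, so the self-taught policy can only choose among actions the filter lets through.

```python
# Invented, drastically simplified "laws" for illustration only.
FIRST_LAW = lambda action: not action.get("harms_human", False)
SECOND_LAW = lambda action: action.get("obeys_order", True)
LAWS = (FIRST_LAW, SECOND_LAW)

def choose_action(candidates, learned_score):
    """Pick the best-scoring action that passes every law."""
    legal = [a for a in candidates if all(law(a) for law in LAWS)]
    if not legal:
        return None  # refuse to act rather than break a law
    return max(legal, key=learned_score)

candidates = [
    {"name": "push person aside", "harms_human": True},
    {"name": "sound alarm"},
    {"name": "do nothing"},
]
# Suppose the self-taught policy happens to prefer the harmful action...
scores = {"push person aside": 9.0, "sound alarm": 5.0, "do nothing": 1.0}

best = choose_action(candidates, lambda a: scores[a["name"]])
print(best["name"])  # sound alarm
```

Whether such a filter could ever correctly label every possibly-harmful action is exactly the objection usually raised against implementing the Laws for real.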

The most common sci-fi scenario that results in a self-aware computer is a combination of a self-teaching "Expert program" and access to a sufficiently complex network (like the internet.)

I've read at least three short stories based on the premise that there is a self-aware program that already inhabits cyberspace -- sort of an intelligent worm/virus that moves from server to server to hide from humans trying to destroy it.

Astro said, "As for cyberpunk tech, it's already here."

Sci-fi authors were writing about cyborgs, direct mind-computer interfaces, rogue AIs, and self-aware computers long before "cyberpunk" was ever coined. Cyberpunk as a sub-genre of sci-fi dealing with cyber technology is only about 20 years old, but the plotlines and concerns about the changes computers will make in our lives date back to the pulp fiction magazines of the mid-thirties. Concerns about rogue robots date back to the 19th century - the first "robot" story was Golem, written by a contemporary of Jules Verne.

The first "robot" sci-fi movie was Fritz Lang's Metropolis - a silent movie made in the 1920s - and it is still a classic sci-fi movie with much relevance to our future relations with technology.
 
I think the closest to actual intelligence we've come isn't expert systems, but neural networks. I can see some kind of hybrid genetic algorithm/neural network becoming intelligent some day, but I don't know if you could make it very Asimovian. You'd probably end up with something more reminiscent of Philip K. Dick's androids.
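A minimal sketch of that hybrid, assuming nothing beyond the standard library (this is my own toy, not a claim about any real system): a genetic algorithm evolves the nine weights of a tiny 2-2-1 neural network toward XOR - no backpropagation, just mutation and survival of the fittest. Thanks to elitism, the best score can only go up from one generation to the next.

```python
import random

random.seed(1)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x1, x2):
    """Two inputs -> two hidden step units -> one step output; nine weights."""
    h1 = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
    h2 = 1 if w[3] * x1 + w[4] * x2 + w[5] > 0 else 0
    return 1 if w[6] * h1 + w[7] * h2 + w[8] > 0 else 0

def fitness(w):
    return sum(forward(w, x1, x2) == target for (x1, x2), target in XOR)  # 0..4

def mutate(w):
    return [wi + random.gauss(0, 0.5) for wi in w]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(30)]
history = []
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    history.append(fitness(population[0]))       # best of this generation
    survivors = population[:10]                  # elitism: the best stay put
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print("best network matches", fitness(best), "of 4 XOR cases")
```

Nothing in the loop knows what XOR "means" - which is why it's hard to see how you would bolt Asimov's Laws onto a system whose behavior was never written down anywhere.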

 
Weird Harold said:
Any thoughts on how "Asimovian Robots" could be programmed with the "Three Laws of Robotics" if computers teach themselves? Would embedding them in the learning algorithms shape the learning process enough to make robots "safe"?
There are a number of ways to do this safely - with respect to AI being a danger to humans.

I am not an AI person, so I am not going to speculate on the ramifications; I will just say that I believe it can be done, and done safely, in a number of different ways. Which way will be chosen? Probably multiple different ways, depending on the implementer.

I am not so worried about people being unable to implement AI safely; I would worry more about someone who either didn't care to take proper precautions, or who with malice actually implemented an AI that would harm humans.

Like most other tech, AI has the possibility to be used for great good, to be implemented incorrectly and cause problems, or to be used for great harm. I think as long as we don't design ourselves out of the loop we will be okay. I don't see terminator robots coming out of factories, but then, it is possible.

As for plugging myself in, I probably would never do that; I would be afraid that some external force could control me, either with or without my knowledge, and I would rather go without the implant than risk that. :eek:
 
Hey, as soon as they can plug me in I'm ready to go! Robocop, Deathlok, doesn't matter as long as I'm "More machine now than man, twisted and evil" :devil: Muuahahahahhahaaaa!
 
heterotic said:
I think the closest to actual intelligence we've come isn't expert systems, but neural networks. I can see some kind of hybrid genetic algorithm/neural network becoming intelligent some day, but I don't know if you could make it very Asimovian. You'd probably end up with something more reminiscent of Philip K. Dick's androids.

I used "expert systems" because that was state of the art when most of the stories about the rise of self-aware computers were written. Most current sci-fi seems to accept that we will become comfortable with/dependent on AI systems of varying ability, but we won't have to deal with self-aware systems.

I asked about the practical implementation of Asimov's Three Laws of Robotics because every discussion I've seen so far about real world implementation says they can't be made to work reliably -- at least not in the way Asimov defined them.

Do we dare get into civil rights for androids and self-aware computer systems here?
 
heterotic said:
Hey, as soon as they can plug me in I'm ready to go! Robocop, Deathlok, doesn't matter as long as I'm "More machine now than man, twisted and evil" :devil: Muuahahahahhahaaaa!

http://www.emerchandise.com/images/p/FTR/pdTNFTR0006.jpg

http://www.emerchandise.com/images/p/FTR/pdTNFTR0001.jpg

http://www.emerchandise.com/images/p/FTR/pdTNFTR0002.jpg

http://www.emerchandise.com/images/p/FTR/pdTNFTR0005.jpg

Trust us puny humans!

http://www.emerchandise.com/images/p/FTR/pdTNFTR0004.jpg
 