Asimov's Three Laws of Robotics: Any real-life relevance?

Politruk

In 1942 -- decades before anything we could call a "robot" existed -- SF writer Isaac Asimov introduced the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This was his response to SF about rebellious robots -- Asimov figured robots, when invented, would be safe by design.

And it does make a certain sense:


The Three Laws of Robotics (xkcd):
https://www.explainxkcd.com/wiki/images/a/a8/the_three_laws_of_robotics.png

Only, the Three Laws are like laws of human psychology -- they make sense at all only if the robot is a strong AI. Which has not been invented -- but robots that kill humans have. You could not even program one not to kill humans -- no AI yet developed could reliably distinguish a human from a dog.

So far, there is only one law of robotics: A robot must obey its master, defined as whoever sends it command signals it recognizes as such.
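In practice, "recognizing command signals" usually means cryptographic authentication: the robot obeys any command carrying a valid tag under a shared key, with no judgment about what the command does. A minimal sketch (the key name and command strings are hypothetical, purely for illustration):

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned at manufacture (assumption for illustration).
MASTER_KEY = b"factory-provisioned-secret"

def is_master_command(message: bytes, tag: bytes) -> bool:
    """The robot 'recognizes' a command by verifying its HMAC tag."""
    expected = hmac.new(MASTER_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# The master signs a command; the robot obeys anything with a valid tag,
# regardless of what the command actually does -- there is no First Law check.
cmd = b"advance 10m"
tag = hmac.new(MASTER_KEY, cmd, hashlib.sha256).digest()
print(is_master_command(cmd, tag))        # True
print(is_master_command(b"forged", tag))  # False
```

Note what is missing: nothing in the verification step inspects the command's consequences. That is the whole point -- the only "law" enforced is authentication of the sender.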

Will that ever change?
 
The tech bros have seriously overhyped AI. Mostly they just don’t want the VC spigot to be turned off, but there’s a darker agenda at work as well. If the oligarchs can convince enough rubes that machines can think and create, they can justify exploiting the human workers who do those jobs now.
 
The tech bros have seriously overhyped AI. Mostly they just don’t want the VC spigot to be turned off, but there’s a darker agenda at work as well. If the oligarchs can convince enough rubes that machines can think and create, they can justify exploiting the human workers who do those jobs now.
Asimov never did address the problem of technological unemployment. His robots operate on a narrow definition of "harm."
 
Asimov never did address the problem of technological unemployment. His robots operate on a narrow definition of "harm."
Would it have made riveting reading? We can only speculate.

AI isn't there yet. I don't even like calling it AI. There are currently "guardrails" in place to prevent, for example, ChatGPT from talking about certain things. There are also uncensored versions of AI apps available. And there are people doing their damnedest to jailbreak AI whenever they can.

I do believe AI can be incredibly beneficial, even in its current form. It has definitely cost some people their jobs. So, probably not as beneficial for them. I think once robots can do mundane chores reliably and for less than it would cost to hire someone, a lot of people are going to need a new career.

Skynet isn't happening in my lifetime. And if it does, we only have ourselves to blame.
 
Would it have made riveting reading? We can only speculate.

AI isn't there yet. I don't even like calling it AI. There are currently "guardrails" in place to prevent, for example, ChatGPT from talking about certain things. There are also uncensored versions of AI apps available. And there are people doing their damnedest to jailbreak AI whenever they can.

I do believe AI can be incredibly beneficial, even in its current form. It has definitely cost some people their jobs. So, probably not as beneficial for them. I think once robots can do mundane chores reliably and for less than it would cost to hire someone, a lot of people are going to need a new career.

Skynet isn't happening in my lifetime. And if it does, we only have ourselves to blame.
There’s a lot of wishful thinking in AI circles. Because AI can generate output that superficially resembles what human experts can produce, people naively assume it’s smarter than it is. The only thing it’s good for is confidently regurgitating crap.
 
Imagine Battlebots -- only the bots are not remote-controlled by human operators, they're fully autonomous.

AI would have to get at least that strong before the Three Laws could even be meaningful.
 
We also don't currently have a centralized AI "brain" that would govern and make the decisions for all AI agents, everywhere.
 
There’s a lot of wishful thinking in AI circles. Because AI can generate output that superficially resembles what human experts can produce, people naively assume it’s smarter than it is. The only thing it’s good for is confidently regurgitating crap.
According to recent data, AI has replaced a significant number of jobs. For instance, the tech sector has seen a notable impact, with over 136,831 job losses in the current year, marking the most substantial round of layoffs since 2001. Additionally, a report by investment bank Goldman Sachs suggests that AI could replace the equivalent of 300 million full-time jobs globally. Furthermore, a report by Asana indicates that employees believe 29% of their work tasks are replaceable by AI. These statistics highlight the substantial impact AI is having on employment.
That answer was provided by regurgitated crap.

These sources were cited:

https://seo.ai/blog/ai-replacing-jobs-statistics
https://www.cnbc.com/2023/12/16/ai-...but-the-numbers-dont-tell-the-full-story.html
https://www.nexford.edu/insights/how-will-ai-affect-jobs
https://www.aiprm.com/ai-replacing-jobs-statistics/
https://www.cbsnews.com/news/ai-job-losses-artificial-intelligence-challenger-report/
 
We also don't currently have a centralized AI "brain" that would govern and make the decisions for all AI agents, everywhere.
There never will be -- the brain would be a vulnerable point. The Internet was originally designed to facilitate military and governmental communication after a nuclear strike -- therefore it was made decentralized, running on servers scattered all over, rather than having a central nexus that a nuclear strike might destroy.
 
There never will be -- the brain would be a vulnerable point. The Internet was originally designed to facilitate military and governmental communication after a nuclear strike -- therefore it was made decentralized, running on servers scattered all over, rather than having a central nexus that a nuclear strike might destroy.
It's been a long time since I read the novels or even watched I, Robot, but it seems Asimov relies on this.

I agree that I don't think a global AI will ever happen.
 
It's been a long time since I read the novels or even watched I, Robot, but it seems Asimov relies on this.
No, every robot has its own positronic brain, programmed with the Three Laws, and not in communication with some central brain. The system is not based on centralization but on standardization.

Asimov wrote a lot of stories about how the Three Laws would in practice present unanticipated problems -- but he never addressed the problem of some rogue company making robots without the Three Laws. The implication was always that the Three Laws are so fundamental to the design of a positronic brain that a robot without them simply would not work at all.
 
No, every robot has its own positronic brain, programmed with the Three Laws, and not in communication with some central brain. The system is not based on centralization but on standardization.

Asimov wrote a lot of stories about how the Three Laws would in practice present unanticipated problems -- but he never addressed the problem of some rogue company making robots without the Three Laws. The implication was always that the Three Laws are so fundamental to the design of a positronic brain that a robot without them simply would not work at all.
Well, that's just silly. He also didn't live in a world where AI actually existed, so I'll cut him some slack. To be clear, I know it isn't actually intelligent. Intelligence presupposes a level of consciousness.

At least, that's what they want us to believe ... [/tinfoil hat]

Did he specifically mention positronic brains? I only remember those from Data, of Star Trek: TNG.
 
Did he specifically mention positronic brains?
Yes -- every robot in Asimov's stories has one. (Of course it's pure technobabble and blackboxing -- Asimov never gave any reason for using positrons instead of electrons, and never addressed the problem of matter-antimatter annihilation.)
 
Managers are replacing skilled workers with cheap AI that generates slop. Because:
There’s a lot of wishful thinking in AI circles. Because AI can generate output that superficially resembles what human experts can produce, people naively assume it’s smarter than it is. The only thing it’s good for is confidently regurgitating crap.
 
There’s a lot of wishful thinking in AI circles. Because AI can generate output that superficially resembles what human experts can produce, people naively assume it’s smarter than it is. The only thing it’s good for is confidently regurgitating crap.
Why do you believe it's wishful thinking? I don't believe it's smart at all. But it's still useful and can condense time-intensive tasks, create images, video and music, provide summaries, etc. Whether it's useful to you depends on your needs. There is still no way in hell I'll get into a self-driving car.
 
That's the problem right there -- technological unemployment.
It's not my problem. Yet, anyway.

Business wants to make as much money as it can and people can be greedy, selfish bastards. Life isn't fair. We adapt or we are overcome.
 
That is your problem -- it is everybody's.
I am gainfully employed. The odds of a robot replacing me before I retire are astronomically low and not even worth considering. So, not really my problem. We are years and probably decades away from it becoming everybody's problem.
 