Ishmael (Literotica Guru, joined Nov 24, 2001, 84,005 posts)
I just read this article from Wired-UK.
More Human than Human
As more and more people of note jump on the anti-AI bandwagon, one wonders why we're going in this direction. Yes, that was a rhetorical question. The answer: "If we don't, someone we don't get along with will."
But getting to the heart of the article: the pattern recognition, intuitive leaps, learning, and retention capabilities of the human brain are well documented. Now Harvard, and others, want to emulate these abilities. Fine; all positive goals.
However, it may be wise to consider how the human brain, and virtually all other life forms, developed those capabilities: through survival and reproduction. Can Harvard, or anyone else, build a machine that matches the human brain without embedding those motivational attributes? And can a 'machine' that intelligent be constructed in such a manner that it will NOT act in its own self-interest when confronted with situations it might deem harmful to itself?
Can a machine be built that will spend its entire 'working' career reading MRIs without becoming bored? And do you want that machine reading your MRI?
It must also be remembered that the very attributes of the human brain that make it so remarkable are the same attributes that lead us to jump to conclusions, make assumptions, jumble one fact with another, react to certain non-threats as if they're real, and so on. Can a machine be built without those flaws, which are actually artifacts of the very abilities researchers are trying to reproduce?
And just how would you create such a machine and then tell it, "You can think about this, but not about that"? Which is exactly what would have to be done to build in any safeguards.
Ishmael