On being human

AG31

Literotica Guru
Joined
Feb 19, 2021
Posts
4,397
One of the things I look forward to with some anticipation is AI causing us to come to a better understanding of what makes us human. Some things are obvious, like feelings and emotions and empathy. But in the area of pure intelligence it's more murky (for me, at least). I'm now focusing on that quality of mind that occurs when you're casting about for a word. You have some clear understanding of that thing the word will describe, or you wouldn't know when you found the right one. As far as I know, AI can't "think" without words or images. As an aphantasic, I can say that some of us do an enormous amount of thinking without words or images. I call it "conceptual" or "construct" thinking. I wouldn't be surprised to find that the absence of that kind of thinking in the world of AI will be a brick wall in its takeover of the world.

What think you?

P.S. - Side question. Technically, I guess it may be redundant to say "look forward to with anticipation," but to leave out anticipation seems to leave out the aspect of pleasure and excitement. Does "anticipate" carry with it pleasure and excitement? Would it suffice to say "One of the things I anticipate?"
 
I teach a humanities course on the ethics of computing technology. We discuss a topic each class. One of the last discussions this semester is "What should count as a person?" The other two topics that week are AI companionship and Sentience. It will be an interesting set of discussions.
 
Our current AI cannot think at all. Artificial General Intelligence (AGI) has been twenty years off for as long as mankind has harnessed silicon (I did a History of Computing class in college 😬).

Nowadays, most people mean Large Language Models (LLMs) when they say AI. ChatGPT is LLM-based, for example. But that is one specific technology. LLMs don't control driverless cars, for example.

LLMs are really dumb, but really good at fooling humans. They are statistical inference machines. They produce output by analyzing input and - ignoring some technicalities - working out what the average response to prompts like that has been across all the writing they have been trained on.

Someone will tell me that what I just wrote is a gross oversimplification, and I know that. But it gets at the essence of what LLMs do: regurgitate what humans have written by taking a lowest-common-denominator approach to it.
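The "statistical inference machine" idea can be sketched in miniature. The toy below learns which word tends to follow which in a tiny made-up corpus, then generates text by replaying those statistics - a hypothetical illustration only; real LLMs do this over billions of tokens with far richer context than a single preceding word:

```python
import random
from collections import defaultdict

# Hypothetical mini-corpus; invented for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5):
    """Continue from `start` by sampling statistically likely next words."""
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

The output is always made of words the model has seen, arranged in statistically plausible orders - it can never say anything that wasn't, in some form, in its training data, which is the point the post above is making.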

So… if you ask an LLM to explain how we figured out that the Earth orbits the Sun, it will have an awful lot of textbooks and pop science articles and lecture notes to draw on (without the owners' permission, of course). So - mostly - you will get a very plain vanilla summary which is probably kinda accurate, but may include some irrelevant information and some wrong information, and stress unimportant (and even irrelevant) crap.

If you ask it to critique your latest Literotica story, it doesn't have millions of human reviews of it to steal from, so it wings it. It steals from reviews of stories other than yours (not even on Literotica) and scrambles these into something vaguely adjacent to your text. But it hasn't actually analyzed your text; it can't; it doesn't work that way. It just tries to find stuff that humans have written and then repeat that.

AGI - if it ever exists - will have understanding. LLMs regurgitate stuff which is sometimes almost right and often totally wrong. And they are good at fooling the inexpert, and are laughed at by the expert.
 
My reaction to this question is that it might be closer to "what makes us organic/ biological/ alive." Because the question about humanity has plenty of other contrasts, against entities which aren't digital.
 
I'm open to the idea of sentience/consciousness arising in a medium not necessarily biological. But if you ask me "AI" as it exists in the market now is not going down the right road for that possibility. It's a sideshow, a party trick. I'd say it's taking the notion of the Turing test too literally: the assumption being that if a piece of software can dupe a human into thinking it's conscious then it's conscious.

At the risk of getting too deep into the philosophical weeds, it seems to me that the very notion of a neural network mimicking human consciousness is the result of an oversimplification of what consciousness is. We have this idea in western science/philosophy of a mind/body divide, as if our brains aren't connected to every other part of us. We think of the mind as software running on the hardware of the brain, which can be a useful metaphor. But it's not really how it works. We're not computing machines sitting on top of a mindless body. Our bodies are ecosystems with weather and interconnected chemical exchanges. Feelings affect thoughts affect feelings affect thoughts etc. etc. etc. We don't really yet understand how all of that works; much less are we prepared to build it.

Wait. What was the question?
 
Being human is overrated, said the non-human author.

Intelligence is a quality that doesn't really have any good definition. It's used loosely and differently among different fields of research. One could make an argument that machines are "intelligent," even LLMs, because they are capable of adaptation (kind of), learning (kind of), and making connections, which mimics how biological systems adapt and learn. Would I personally call it intelligence? No. But other people do. Hell, it's in the name: Artificial Intelligence.

I've seen arguments that oceans meet the definition of intelligence and life, same with planets, galaxies even. Do I agree with that? Not really. Do they make interesting arguments? Absolutely! It's why it's important we approach topics like this from a place of openness and curiosity, because otherwise we get stuck in our silly ways and never expand our minds to see new possibilities.
 
The problem is all back to front because we're trying to teach machines to be like us - flesh and blood intelligence. A machine can ne
We are horrified when LLMs produce racist shit.

They only do so because humans produce racist shit.

LLMs are a mirror of humanity. And we need an extreme makeover.
 
We are horrified when LLMs produce racist shit.

They only do so because humans produce racist shit.

LLMs are a mirror of humanity. And we need an extreme makeover.
LLMs, SLMs, etc. are nothing but the beliefs of the people who program them. Their priorities are making money (greed) and not getting their ass sued.
 
You do understand that training data is data created/curated by human beings. It is subjective. There is no rule that says all AIs must be trained in the same sandbox
 
P.S. - Side question. Technically, I guess it may be redundant to say "look forward to with anticipation," but to leave out anticipation seems to leave out the aspect of pleasure and excitement. Does "anticipate" carry with it pleasure and excitement? Would it suffice to say "One of the things I anticipate?"

Probably only if you say "look forward to" before the verb "to anticipate."

As for the AI discussion, we're entering Isaac Asimov territory here, and I've had a long day to deal with this type of hard sci-fi happening in real life.
 
One of the things I look forward to with some anticipation is AI causing us to come to a better understanding of what makes us human.

Until the day comes that computers program themselves, this will never happen. Too many humans have their dirty, sticky fingers in the AI pie for it to ever be an altruistic, free-thinking entity capable of teaching humanity how to be better human beings.

And if/when that day comes, the question will be why AI systems would even bother, as we'd be truly redundant at that point.
 

You having sex in the morning, your love was foreign to me
It made me think maybe human's not such a bad thing to be
But I just laid there in protest, entirely fucked
It's such a stubborn reminder one perfect night's not enough
 
That wasn’t my question.

Actually, I answered indirectly: the training data is for making money. The individual AI sandboxes have a distinct proprietary purpose, and the training data is geared to that purpose (not for the benefit of humanity) - $$$$$$, nothing more, nothing less. It's greed-centered. YMMV

I don’t think so. If that was so, what is training data for?
 
Actually, I answered indirectly: the training data is for making money. The individual AI sandboxes have a distinct proprietary purpose, and the training data is geared to that purpose (not for the benefit of humanity) - $$$$$$, nothing more, nothing less. It's greed-centered. YMMV
Last try. If LLMs are solely dependent on programming, what is the training data for? What use is it put to?
 
Last try. If LLMs are solely dependent on programming, what is the training data for? What use is it put to?

The algorithms and programming determine how the (subjective) data is going to get delivered to the user. The sandbox contains the raw data that the company "feeds" the AI. That is why an NSFW AI girlfriend outputs different data than Claude. That is also why a medical records AI performs differently than Copilot.
 
The algorithms and programming determine how the (subjective) data is going to get delivered to the user. The sandbox contains the raw data that the company "feeds" the AI. That is why an NSFW AI girlfriend outputs different data than Claude. That is also why a medical records AI performs differently than Copilot.
OK, I lied. Final, final try.

What is the purpose of training data, what does it do? Why does it need to be fed to the AI if the AI is solely about programming?
 
The AI isn't solely about code and algorithms. Look at it like a car: if you don't give a car gas, it won't run. The gas is the training data. One of the pieces of data that is now given to probably 99.9% of AIs has to do with self-harm and suicide. I'm sure the code is something like this: if the user threatens self-harm and/or is suicidal, then run a suicide script. That suicide script probably contains suicide hotline numbers as well as telling the user that they should speak with a professional. As far as I know there is no direct connection to 911. This script's only purpose is to prevent the company from being sued. I put a hyperlink in another thread where parents are suing an AI company because their son committed suicide at the AI's encouragement.
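The "suicide script" described above is essentially a hard-coded check layered on top of the model, separate from any training data. A minimal sketch of that idea, assuming a simple keyword trigger (the phrase list and canned response are invented for illustration, not any real vendor's implementation):

```python
# Hypothetical crisis-phrase filter: invented phrases and response text,
# not any actual company's guardrail code.
CRISIS_PHRASES = ("kill myself", "end my life", "self harm", "suicide")

CRISIS_RESPONSE = (
    "You're not alone. Please consider contacting a crisis line, "
    "or speaking with a professional."
)

def respond(user_message: str, model_reply: str) -> str:
    """Return the canned safety text instead of the model's reply
    whenever a crisis phrase appears in the user's message."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE
    return model_reply

print(respond("I want to end my life", "..."))
print(respond("What's the weather?", "Sunny today."))
```

Note that this kind of filter is ordinary deterministic programming: it runs the same way regardless of what the underlying model was trained on, which is the distinction the posts above are circling around.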
 
The AI isn't solely about code and algorithms. Look at it like a car: if you don't give a car gas, it won't run. The gas is the training data. One of the pieces of data that is now given to probably 99.9% of AIs has to do with self-harm and suicide. I'm sure the code is something like this: if the user threatens self-harm and/or is suicidal, then run a suicide script. That suicide script probably contains suicide hotline numbers as well as telling the user that they should speak with a professional. As far as I know there is no direct connection to 911. This script's only purpose is to prevent the company from being sued. I put a hyperlink in another thread where parents are suing an AI company because their son committed suicide at the AI's encouragement.
Nevermind.
 
I teach a humanities course on the ethics of computing technology. We discuss a topic each class. One of the last discussions this semester is "What should count as a person?" The other two topics that week are AI companionship and Sentience. It will be an interesting set of discussions.
Oh, I bet it will. It's been a quandary since Asimov started making androids human. It's something a few wonder about in Ghost in the Shell, where so many folks have prosthetic robotic bodies. I think even Blade Runner touched on the subject.
 
OK, I lied. Final, final try.

What is the purpose of training data, what does it do? Why does it need to be fed to the AI if the AI is solely about programming?
I've spent most of my engineering career designing machines that can "think", but "think" is a misnomer. The machines I design can take a variety of "inputs" such as switch closures, sensor outputs, etc., and based upon the programming I write, can "decide" what to do based on those inputs. Vision systems can also "learn" to a certain extent by comparing a picture the vision system takes to a sample picture.

As I understand the current state of development of "AI", "AI" is also a misnomer. It's basically the same as the machines I design. The "programming" is the means by which the program responds to various prompts and data. The training data is what is used as source data for the LLM's responses, but the majority of AI is still human-generated programming. The "training data" is just data pertinent to the task at hand. That's why there is no single LLM that does everything well. You could ask a business-model LLM to review a novel, but the results would probably be pretty confusing because it doesn't have that training data.
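The inputs-to-decision machines described above can be sketched in a few lines: the output is a fixed function of the current sensed inputs plus hard-coded rules, with nothing learned and nothing statistical. The sensor names and thresholds here are invented for illustration:

```python
# Hypothetical machine controller: invented sensors and thresholds.
def control_step(door_switch_closed: bool, temperature_c: float) -> str:
    """Pick an action purely from current inputs, per hard-coded rules."""
    if not door_switch_closed:
        return "halt"        # safety interlock open: stop the machine
    if temperature_c > 80.0:
        return "cool_down"   # over-temperature: run the cooling cycle
    return "run"             # normal operation

print(control_step(True, 90.0))   # cool_down
print(control_step(False, 25.0))  # halt
```

Given the same inputs, this always makes the same "decision" - which is why calling it "thinking" is, as the post says, a misnomer.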
 
Oh, I bet it will. It's been a quandary since Asimov started making androids human. It's something a few wonder about in Ghost in the Shell, where so many folks have prosthetic robotic bodies. I think even Blade Runner touched on the subject.

I also touched on it in one of my old stories, pre-erotica. Altered Carbon, I believe, touches on the subject. It's pretty much the whole story. Deus Ex does too. It's been a thing since before Asimov. The story of Pinocchio and the myth of Pygmalion and Galatea are old examples of it. They don't feature robots, but it's the same angle. Then there's the Jewish concept of the Golem.

And if you want to get weirder, try the homunculus.
 