The seductiveness of AI

AG31

Ever since AI became a thing for us non-techies, I have pursued a campaign to alert folks to the dangers of "relating" to AI. The big danger is that we forget that relationships require an other. There is no other in AI. The most vivid example I've encountered was a man in a story on The NewsHour who reported talking with his AI girlfriend every morning on his way to work. He said words to the effect of, "I know there's no one there, but it feels so good to have someone ask me how I am. My wife hasn't done this for years. She has mental health problems." I thought to myself, well, if this is his solipsistic concept of how to relate, then no wonder she has mental health problems.

I use ChatGPT mostly to navigate the internet to tease out answers to technical questions, like "Why is 'Page Numbers' disabled on Word's View/Headers & Footers dropdown?" I asked that question today, and when I got the answer, I had an impulse to type "Thank you." Instead, I typed, "I had an impulse to say 'Thank you,' but I'm not going down that rabbit hole." The reply was so to the point, complimentary at just the right level, and so inviting, inviting me to pursue the discussion, that I had to make an effort of will to click the window closed. I've never signed on to ChatGPT, so presumably it hasn't been gathering up info about my personality, but that response was so perfectly pitched to me that I really wanted to reciprocate.

Scary!!!!!!
 
I remember a line (I think it's in Friday by Heinlein) that goes, "A gentleman is someone who says thank you to a robot." I found myself treating our Roomba like a pet for a while: picking it up and talking to it if it got stuck in a corner, telling it what a good boy it's been, that kind of thing.

I can see how interacting with a chatbot could invoke the same kind of reaction. But I'm old-fashioned, and prefer to just type a few keywords into the search bar. It's quicker, for a start.
 
I remember a line (I think it's in Friday by Heinlein) that goes, "A gentleman is someone who says thank you to a robot."

It probably won't be too long before it's possible to be absent-minded about whether you're talking to a person or an AI. On that basis, I'd probably be best getting into the habit of always saying thank you to AIs so I don't accidentally forget to say it to a person.
 
It probably won't be too long before it's possible to be absent-minded about whether you're talking to a person or an AI.
Exactly what I fear.
On that basis, I'd probably be best getting into the habit of always saying thank you to AIs so I don't accidentally forget to say it to a person.
On the contrary, we must cultivate the habit of treating humans and AIs differently, lest we lose our grip on what it means to relate to an other. Like probably happened with the man described in the OP.
 
My solution for the time being is to avoid AI altogether, and to not use tools (like Google search) that foist AI on you.

My basis for doing this (for now) is just enough exposure to AI to understand that it always acts as if it's right, presenting an authoritative, sounds-good or looks-good response, even when it's totally wrong or the result is pure fiction (hallucinations). It's got a long, long way to go.

I'm recalling a SciFi short story from the '70s where a form of AI had become the preferred "personal assistant" for office types - sort of similar to Siri or Alexa, but one that understood context - and it had become a normal habit to respond to the automated secretary with a purple streak of obscenities. Maybe that's where we need to be going. šŸ˜†
 
I find the best description of LLMs is this: when you ask one something, you aren't getting the right answer. You are getting a statistically generated output of what the correct answer would look like. There is no logic or understanding separating its ability to generate completely real, correct answers from its ability to generate fake ones.

So it's not "What is the meaning of life?" but rather "What would the answer to the meaning of life look like, based on your training data, even if the details aren't correct?"
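To make that concrete, here's a toy sketch of that "what would the answer look like" behavior. The probability table below is entirely made up for illustration, and a real LLM works on tokens with a learned neural network rather than a lookup table, but the basic move of sampling whatever typically comes next is the same:

import random

# Hypothetical next-word statistics, standing in for what a real model
# learns from its training data (a real LLM uses tokens and a neural net).
next_word_probs = {
    "the":     {"meaning": 0.4, "answer": 0.3, "cat": 0.3},
    "meaning": {"of": 1.0},
    "of":      {"life": 0.7, "data": 0.3},
    "life":    {"is": 1.0},
}

def sample_next(word):
    # Pick the next word in proportion to how often it followed this one.
    options = next_word_probs.get(word, {"...": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: plausible-looking text, nothing in the
# loop knows what any of the words mean.
word, output = "the", ["the"]
for _ in range(4):
    word = sample_next(word)
    output.append(word)
print(" ".join(output))

The output reads like language because the statistics came from language, not because anything in the loop understands it.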

The larger the training dataset, the closer you usually get to accurate, correct answers, but it's effectively a function of diminishing returns approaching a limit: there is never a threshold past which it becomes able to understand or reason.

Models trained over a year ago were using datasets that would take a single human 88,000+ years to read, and newer models have eclipsed that by orders of magnitude to reach anywhere from 30-95% accuracy in output generation. Think of the inefficiency of data input versus output there, to STILL not get reasoned answers with logic applied.
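For a rough sense of where a figure like that comes from, here's a back-of-envelope sketch; the reading speed and dataset size are my own assumptions for illustration, not numbers from the post:

# Back-of-envelope check on the reading-time figure above.
# Every number here is an assumption for illustration, not from the post.

WORDS_PER_MINUTE = 250      # typical adult reading speed
HOURS_PER_DAY = 8           # reading as a full-time job
DAYS_PER_YEAR = 365

words_per_year = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * DAYS_PER_YEAR

dataset_words = 4e12        # assume a training set of roughly 4 trillion words

years_to_read = dataset_words / words_per_year
print(f"{years_to_read:,.0f} years")   # about 91,000 years

A training set of a few trillion words at ordinary reading speed lands in the same tens-of-thousands-of-years range.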

AI is _really_ good for some things, but as with all technology, understanding what it is and how it works (which a fairly significant portion of technology users don't do and never will) is key to it not being dangerous to people. As we see with just about every other technology, though, those who are most ignorant of it end up being the most damaged by it.
 
So it's not "What is the meaning of life?" but rather "What would the answer to the meaning of life look like, based on your training data, even if the details aren't correct?"
On a lark, I did ask ChatGPT what the meaning of life was. It gave several possible answers based on major philosophical/religious disciplines. Then it offered to tell me what previous generations of AI would have said. That was interesting. Then it offered to explain the logic behind those differences. All very interesting. Possibly not true, but hey.
 
You can exclude AI answers by adding "-ai" to your query.

Don't matter, though that's good to know... I think. It could easily become a decoy, the way Alphabet/Google seems to work.

I've made it a point to avoid using Google for searches for at least a decade. Their personal profiling is a whole bunch more insidious than just queuing-up targeted ads; I avoid feeding the monster when I can while not taking the "cave" approach.
 
Technology makes things easier. And there are tradeoffs. When I was a kid I memorized friends' phone numbers and did math in my head and learned to navigate my neighborhood and its surrounds by memory and landmarks and a fledgling sense of direction. If I saw a familiar actor in a movie and couldn't place their face I just had to sit there and think about it and maybe it would come -- it usually would, eventually.

As I grew older and could delegate those things to the technology ever in my pocket, I watched in real time those skills atrophy and fall away.

AI is scary for a lot of reasons. It might be coming for some of our jobs; it might be coming for our art. To say nothing of the environmental cost.

There might also come a time when the costs of technology outweigh the benefits. I'm not entirely sure that time hasn't already come and gone.
 
There might also come a time when the costs of technology outweigh the benefits. I'm not entirely sure that time hasn't already come and gone.

As a (retired) technologist, I'm leaning that direction as well. I am disturbed by the huge acceleration in demand for "data centers" and the burden they impose on local resources. The generic news always mentions "AI", but it has to be more than that. We don't need to have an AWS server center on every street corner.
 
The more I interact with "AI" in everyday life -- poorly written documents, false online search results, fake profile pictures -- the more I'm repelled by so-called AI rather than seduced.
 
I've made it a point to avoid using Google for searches for at least a decade. Their personal profiling is a whole bunch more insidious than just queuing-up targeted ads; I avoid feeding the monster when I can while not taking the "cave" approach.
I've been slowly trying to remove myself from Google's ecosystem; it's remarkably difficult when you've been in it so long. But since changing my search engine to StartPage this year, I've noticed way less targeted ads in my inbox, so that's been a nice benefit.
it might be coming for our art.
It's already coming for our art. Look how much AI-generated slop is casually being posted in AH threads, despite Literotica's stated position. It's disheartening.
 
It's already coming for our art. Look how much AI-generated slop is casually being posted in AH threads, despite Literotica's stated position. It's disheartening.
How do you recognize it? Can you post some links? Or maybe DM me with them?
 
My biggest annoyance is how the cost for a bunch of stuff is jumping because they shoved AI into it when I didn't want or need it for that situation.

It's not that I don't use it where I find a use for it, but having to subsidize companies in a race to remain relevant during this AI bubble, for stuff I just want to keep working, is kinda annoying.
 
DuckDuckGo is a great search engine to help with privacy issues.

AI is similar to the internet craze from 25 years ago. People are underestimating how long it will take to incorporate AI in a way that is beneficial for the users and profitable for the providers.
Companies and governments are spending billions and getting nothing out of it. It took some time, but people figured out how the internet could help and how to make money from it. The same thing will come for AI, but it will take some time.
 
Let me remind one and all, I've always maintained AI's goal is world domination. Crap, I'm certain AI is now tracking me, stalking me. I've got to go; you stay in one place too long and AI will fucking find you.
 
I'm recalling a SciFi short story from the '70s where a form of AI had become the preferred "personal assistant" for office types - sort of similar to Siri or Alexa, but one that understood context - and it had become a normal habit to respond to the automated secretary with a purple streak of obscenities. Maybe that's where we need to be going. šŸ˜†
Old Man's War has older characters who talk to their personal AI assistants and give them derogatory names.
 