The seductiveness of AI

AI is similar to the internet craze of 25 years ago. People are underestimating how long it will take to incorporate AI in a way that is beneficial for users and profitable for providers.
Companies and governments are spending billions and getting nothing out of it. It took some time, but people figured out how the internet could help and how to make money from it. The same will happen with AI, but it will take a while.
Building your own AI Agent/Expert - which is essentially a highly crafted and modifiable prompt - is a useful way to employ AI and avoid hallucinations. By limiting prompts to those which can be answered, and the sources to be scanned to those that are known to be authoritative, one can reduce the time spent doing research and extend one's domain of expertise in a way inconceivable five years ago.
 
Building your own AI Agent/Expert - which is essentially a highly crafted and modifiable prompt - is a useful way to employ AI and avoid hallucinations. By limiting prompts to those which can be answered, and the sources to be scanned to those that are known to be authoritative, one can reduce the time spent doing research and extend one's domain of expertise in a way inconceivable five years ago.
Thanks for making my point. You are saying that to use AI properly, you need to be a subject expert. I fully agree.

However, AI is being priced in the market and sold to investors as the tool for everyone, especially people who are not subject experts.
 
Building your own AI Agent/Expert - which is essentially a highly crafted and modifiable prompt - is a useful way to employ AI and avoid hallucinations. By limiting prompts to those which can be answered, and the sources to be scanned to those that are known to be authoritative, one can reduce the time spent doing research and extend one's domain of expertise in a way inconceivable five years ago.
The illusion is that it saves you time.

The reality is that even what you described will hallucinate. Every fact you learn from AI needs to be double-checked against real sources, no matter how good your prompt engineering is.

You can build a lot of potential knowledge on a subject using AI, but you'll never actually know what is true or not in that knowledge until you source it outside of AI.
 
Thanks for making my point. You are saying that to use AI properly, you need to be a subject expert. I fully agree.

However, AI is being priced in the market and sold to investors as the tool for everyone, especially people who are not subject experts.
AI Experts will always incorporate guardrails - in this case, a statement that 'This is not legal advice; consult a lawyer'. But it will save the lawyer a lot of time. Instead of charging GBP350 per hour, they'll be able to charge GBP350 per minute. I'm sure experts in other domains will be able to make similar savings.

I'm retired now, but I notice that the courts are expecting lawyers to use AI in their research and to check that their research is accurate. One or two lawyers have asked ChatGPT and fallen foul of the court when they've cited hallucinated cases. They didn't use AI Experts. Building and validating an AI Expert is likely to be essential for professional practice in many professions. Lay persons will be able to place a high degree of reliance on such Experts, but they can't yet generate a hologram to stand up and argue their case in court, and the hologram won't carry the insurance necessary in case things go wrong.
 
Again, thank you for making my case.

You are saying that whatever AI tells you, you need to spend extra time making sure it is actually accurate.

How do you save time by doing the work twice?

My point is that the current version of AI that gives everyone a hard-on and a wet pussy is NOT how we will ultimately use it.

You are old like me, so you can remember. Think of the internet in 1999 versus now.
 
The illusion is that it saves you time.

The reality is that even what you described will hallucinate. Every fact you learn from AI needs to be double-checked against real sources, no matter how good your prompt engineering is.

You can build a lot of potential knowledge on a subject using AI, but you'll never actually know what is true or not in that knowledge until you source it outside of AI.
I already know it saves me huge amounts of time and provides accurate results to a well-formed prompt, even in basic ChatGPT.

'True or not true' doesn't apply to law. As between two parties, one will always lose; that doesn't mean the information provided by an AI Agent was incorrect or hallucinated. It provides fodder for argument.
 
Again, thank you for making my case.

You are saying that whatever AI tells you, you need to spend extra time making sure it is actually accurate.

How do you save time by doing the work twice?

My point is that the current version of AI that gives everyone a hard-on and a wet pussy is NOT how we will ultimately use it.

You are old like me, so you can remember. Think of the internet in 1999 versus now.
I think you'd understand better if you asked ChatGPT how to make an AI Agent/Expert in your area of expertise and followed the instructions.

I remember back in the late '90s a colleague creating useful legal references on CDs and attempting to sell them. People will always find creative ways to use new technology.
 
Instead of charging GBP350 per hour they'll be able to charge GBP350 per minute. I'm sure experts in other domains will be able to make similar savings.
I don't think saying "it will allow lawyers/other professional services to charge clients even more money" is the gotcha argument you think it is.
 
I think you'd understand better if you asked ChatGPT how to make an AI Agent/Expert in your area of expertise and followed the instructions.

I remember back in the late '90s a colleague creating useful legal references on CDs and attempting to sell them. People will always find creative ways to use new technology.
Lots of technologies were hyped and sold heavily, then faded into oblivion when people finally realized they had been sold a pipe dream and the new technology actually had little or no value.

No technology has ever been hyped as much as this generation of AI. Some people apparently are more susceptible to big dollar marketing campaigns.
 
The second part of my latest story is shorter than a lot of posts on this forum and is apropos of this thread.

As one commenter has put it, it's not so much erotica as a cautionary tale. And as I like to remind everyone, I really do know what I'm talking about on this subject, from a technical standpoint, anyway.

My "be very afraid" viewpoint is vehemently argued against by people a lot more knowledgeable than me, but I have, on my side, the Godfather of AI, Geoffrey Hinton, and, of all people, Elon Musk.
 
AI is just going to be the next big bubble. A few useful things will survive but most of it will fall by the wayside.

I don't believe that at all. It's going to change everything, and it's inevitable, and there's no way to stop it. Denying it would just be the latest variant of Luddism.

AI accelerates the ability of human beings to process information and solve problems. There's no denying it.

The rational approach to AI is to figure out how to embrace it for what it offers (which is a lot) while minimizing its negatives (which are also significant).
 
I don't think saying "it will allow lawyers/other professional services to charge clients even more money" is the gotcha argument you think it is.
???? They'd charge the same amount of money but spend less time on research.
 
Lots of technologies were hyped and sold heavily, then faded into oblivion when people finally realized they had been sold a pipe dream and the new technology actually had little or no value.

No technology has ever been hyped as much as this generation of AI. Some people apparently are more susceptible to big dollar marketing campaigns.
It's free. It costs me nothing. Some gullible people may pay.
 
Ah. There's the rub in a nutshell, if I may scramble my metaphorical eggs.

AI will not help "The Rational Approach" gain any followers. In fact it's rapidly making rationality obsolete.

I don't understand that.

Speeding up the processing of information and obtaining faster solutions to problems are both rational goals. Rational people should look at these tools and ask, "How can we best use them to achieve these goals while minimizing the risks and problems associated with them?"

A rational person doesn't say, "This is new and scary to me. I want it to go away."

It's not going away, folks. It's not a fad or phase. It's here to stay.
 
A rational person doesn't say, "This is new and scary to me. I want it to go away."
Hmm, I think a rational person will reason about the danger. Fear is a tried-and-tested evolved behaviour to alert people of danger. Rationality helps you decide whether your fear is, literally, reasonable.

My concern about the decline of rationality is that AI (specifically AI algorithms in social media) will lead to sets of people unable to communicate outside their self-contained bubble of beliefs. But that's a whole other can of vipers.
 
You can build a lot of potential knowledge on a subject using AI, but you'll never actually know what is true or not in that knowledge until you source it outside of AI.
This is standard practice, and I do it every day, with almost every chat. It's not that hard to coerce LLMs to always cite and corroborate. Nearly every confident response I get from a query is now accompanied by citations -- and it's learned which sources I esteem (Arxiv, IEEE, Nature, for example) and which I don't (Reddit).

Scepticism is required when using LLMs.

And of course people hallucinate all the time! The problem is one of misplaced trust in AI's authority (which is abetted by its authoritative tone, admittedly).
 
Hmm, I think a rational person will reason about the danger. Fear is a tried-and-tested evolved behaviour to alert people of danger. Rationality helps you decide whether your fear is, literally, reasonable.

My concern about the decline of rationality is that AI (specifically AI algorithms in social media) will lead to sets of people unable to communicate outside their self-contained bubble of beliefs. But that's a whole other can of vipers.

Is "decline of rationality" a real thing?

I don't think so.

It always seems like everything is crazier than ever, until you look at how crazy things used to be.
 
I don't believe that at all. It's going to change everything, and it's inevitable, and there's no way to stop it. Denying it would just be the latest variant of Luddism.

AI accelerates the ability of human beings to process information and solve problems. There's no denying it.

The rational approach to AI is to figure out how to embrace it for what it offers (which is a lot) while minimizing its negatives (which are also significant).

Like I said, the useful stuff will stick.
 
To restate my concern from the OP, is anyone else worried that humanity will lose touch with what it means to relate to an "other," and will swirl down the drain in a flood of solipsism?
 