The seductiveness of AI

This is standard practice, and I do it every day, with almost every chat. It's not that hard to coerce LLMs to always cite and corroborate. Nearly every confident response I get from a query is now accompanied by citations -- and it's learned which sources I esteem (arXiv, IEEE, Nature, for example) and which I don't (Reddit).

Scepticism is required when using LLMs.


And of course people hallucinate all the time! The problem is one of misplaced trust in AI's authority (which is abetted by its authoritative tone, admittedly).
Fair. My experience is that it will fabricate Reddit posts, Nature articles, etc. in some cases, but as long as you are doing the corroborating yourself and not trusting the LLM to do it, you've got the right approach.

It's always confident. It will happily and authoritatively tell you the sky is green.

People have understanding, reason, and logic they can apply to their own 'hallucinations', i.e. imagination and dreaming. Anything beyond that is very likely mental illness, and THAT level of hallucination is closer to what AI does -- except AI lacks the human ability to recognise it or distinguish it from reality.
 
AI still has a practical hurdle: the immense quantity of electricity and water it takes to run the data centers. There may come a time when we have to choose between the 'benefits' of AI versus being able to have water to drink and power to run our air conditioners....

And driving up the demand and therefore price of these resources for everybody.
There's the plot for a James Bond movie right there. Or a post-apocalyptic thriller, depending how pessimistic you are.
 
I'm recalling a SciFi short story from the '70s where a form of AI had become the preferred "personal assistant" for office types - sort of similar to Siri or Alexa, but one that understood context - and it had become a normal habit to respond to the automated secretary with a purple streak of obscenities. Maybe that's where we need to be going. 😆
I wonder what ChatGPT or Grok or any of the others would say if you opened a conversation with obscenities.
 
To restate my concern from the OP, is anyone else worried that humanity will lose touch with what it means to relate to an "other," and will swirl down the drain in a flood of solipsism?

I'd say this is a concern that goes back over 200 years. Modernity. Loss of community. The fall from grace. Exit from Eden. Existential crisis. It's always with us.

My attitude tends to be one of morbid optimism: "Everything's shitty, and it always has been, but we'll get through it somehow."
 
I wonder what ChatGPT or Grok or any of the others would say if you opened a conversation with obscenities.

ChatGPT said:

Sounds like you’re venting some frustration there — what’s going on?
 