Care to share examples of AI slop you've encountered?

AG31

The danger you've now got is that you won't necessarily know when ChatGPT gives you a rubbish answer. Despite what people say about the rarity of hallucinations, I've often seen the Google summary at the top of a page simply wrong.
I think most people agree that AI can give you inaccurate answers. I'd like to see examples of those inaccurate answers. I want to get better educated about AI's uses (such as getting instructions for navigating support websites) and its failures. Here are two failures I bumped into this week.

I'd gone to our vacation place without a certain cookbook, so I asked ChatGPT for the New York Times Menu cookbook's recipe for Laurel Rice. In the first response, it claimed to be giving me a recipe from the New York Times cookbook (not Menu, which matters), but there were no bay leaves, and bay (laurel) leaves are why it's named "laurel."

In the second response it added bay leaves, apologized for not referencing the NYTM cookbook, and gave me a recipe that included cream.

On the final try it gave me what seemed to be the recipe I recognized.

****************
I'm reading a 997-page mystery, and I couldn't remember why a certain suspect had been ruled out. I never did get a satisfactory answer: its replies were weird combinations of specifics (it knew the names of the characters) and generalizations ("physical characteristics didn't match" was one example). I was left unsatisfied.
 
[Screenshot: Gemini misspelling 'narwhal' as 'narwhale']

You can see your thread in a tab at the top. I took this screenshot just now. 'Narwhal' is not spelled 'narwhale.' That's Gemini. Here's ChatGPT:

[Screenshot: ChatGPT's response]

And a story from work: I was using OpenAI's Whisper speech-recognition model to generate subtitles for a piece of content about HEC-HMS, the Army Corps of Engineers' hydrologic modeling software. Every time my instructor said "HEC-HMS" (pronounced "heck-H-M-S"), Whisper rendered it in the transcript as "sex hims." I had to spend quite a bit of time correcting it before publication.
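
For anyone curious, here's roughly what that workflow looks like. This is a minimal sketch assuming the open-source openai-whisper package; the file name and the correction table are made-up examples, not my actual setup.

```python
# Minimal sketch of the subtitle workflow described above, assuming the
# open-source openai-whisper package (pip install openai-whisper).
# The file name and the correction table are hypothetical examples.
import whisper

model = whisper.load_model("base")

# initial_prompt can bias the decoder toward niche vocabulary, though in
# practice it doesn't always take, hence the cleanup pass below.
result = model.transcribe("hec_hms_lecture.mp4",
                          initial_prompt="HEC-HMS hydrologic modeling")

# Patch recurring mistranscriptions of domain terms after the fact.
CORRECTIONS = {
    "sex hims": "HEC-HMS",
}

for segment in result["segments"]:
    text = segment["text"]
    for wrong, right in CORRECTIONS.items():
        text = text.replace(wrong, right)
    # Each segment carries start/end timestamps in seconds, which is
    # what a SubRip/WebVTT subtitle cue needs.
    print(f"{segment['start']:7.2f} --> {segment['end']:7.2f} {text.strip()}")
```

The initial_prompt nudge helps sometimes, but in my experience niche acronyms still need a manual cleanup pass before publication.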

I've posted examples of what Grok does before but it's quite difficult to talk about what's going on there without breaching the no-politics rule.
 
Here's another one, based on a somewhat famous prompt:
[Screenshot: Gemini's response to the prompt]

Not helpful, Gemini! And this isn't helpful either! (ChatGPT is able to get the correct answer to this.)

[Screenshot: another unhelpful response]
 
Ask Grok for the lyrics to Neil Diamond's song "Glory Road" and it'll tell you the song is from his "Touching You, Touching Me" album, which is incorrect, and give you totally bogus lyrics. If you then correct Grok and say the song is on his "Brother Love's Traveling Salvation Show" album, it will apologize and then give you the correct lyrics.
 
Might this have been a misdirected response? Daughter? Teething?
You've mentioned that you use AI to check things, whereas before you talked to your daughter (or daughter-in-law, I can't recall). By your own admission, you're now relying on something that is wrong two out of three times, which is crazy.
 
Since Jules Verne, it's been a well-established fact that the capital of France is Calais.
Yet dumb AI says it's Paris!

🥴
 
I asked Bing for a list of Chinese demonesses, and Copilot jumped in with its own. First was Baigujing, because, like, of course it was. But the second offering was Chiyou, who is the grandson of the Flame Emperor and absolutely not a demoness.

But it's not just the fucking useless Copilot answers. Websites that present themselves as compilations of knowledge have been flooded with machine-generated hallucinations. A simple factual question like "Dracula word count" immediately turns up three web pages that claim to answer it. AnyCount and ReadingLength both correctly identify it as a bit over 160k words, but AnswerFoundry thinks the book is only 95k, which is off by nearly seventy thousand words.
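
This is one you can sanity-check yourself in a few lines rather than trusting a content farm. A rough sketch, assuming the Project Gutenberg plain-text edition of Dracula (ebook #345) and Gutenberg's usual START/END markers:

```python
# Rough sanity check of the "Dracula word count" question. Assumes the
# Project Gutenberg plain-text edition (ebook #345); the URL and the
# START/END markers are Gutenberg conventions, so verify before relying on them.
import urllib.request

URL = "https://www.gutenberg.org/cache/epub/345/pg345.txt"
text = urllib.request.urlopen(URL).read().decode("utf-8")

# Strip Gutenberg's license header and footer so only the novel is counted.
start = text.index("*** START OF")
start = text.index("\n", start) + 1   # skip past the marker line itself
end = text.index("*** END OF")
body = text[start:end]

# Naive whitespace split; any "word count" depends on the counting rule,
# but this lands near the ~160k figure, nowhere near 95k.
print(len(body.split()))
```

A whitespace split and a professional counting tool will disagree a little, but not by 65,000 words.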
 
There is not, and never has been, a recipe for 'Laurel Rice' in the New York Times Cook Book or its successor.
 
I remember when people first started to take note of generative AI a couple of years ago. I was testing out a few of the different chatbots using Star Trek trivia, which I figured would be a relatively well-covered topic on the Internet. After puzzling out why the bot didn't list Spock first amongst my requested list of notable Vulcans - he's only half-Vulcan (duh!) and so not worth mentioning - I asked the bot to 'List each Star Trek Captain along with their favourite drink'.

The bot had no problem identifying Romulan ale for Kirk, Earl Grey tea for Picard, and raktajino for Sisko.

So far, so good, but then it went on and listed a whole bunch of characters below the rank of Captain - e.g. Worf = Prune Juice.

I'll give you a moment to see if you can work out what the most logical or canon answer for Jadzia Dax was...



The really funny thing was that it earnestly provided a source from one of the danker parts of the fandom along with its answer.
 