AI gets confused over simple edits

First it says my piece is human-written:
[screenshot: detector marks the piece as human-written]
Then I make a few small edits, like fixing some punctuation and adding a bit more detail, and now it's claiming it's AI-generated!
[screenshot: detector marks the same piece as AI-generated]
 

Are you feeding this into one of the "AI detector" sites out there, or are you just pasting it into a chat AI and asking how it sounds?
 
The fact that people haven't realised by this point that none of these 'AI' chatbots do what they claim to do is astonishing.
It's not astonishing if you have a sufficiently cynical opinion about non-artificial (human) intelligence. ;)
Garbage in, garbage out, as they say.
 
It's not astonishing if you have a sufficiently cynical opinion about non-artificial (human) intelligence. ;)
Garbage in, garbage out, as they say.
The issue is more pronounced with fiction than non-fiction. On some sites, I can recognize what I think is AI-generated text. It tends to be too long for the subject matter and sounds somewhat generic. However, if I had never heard of AI, I would just think it was dull writing by a human. AI-generated photos are usually easily spotted because they are obviously a mash-up of different images.

Yet AI online has developed from non-existent a few years ago to a point beyond what I thought would be possible. (Sort of like how those self-driving taxis by Waymo are getting better and better.) I wonder where we'll be ten years from now.
 
Yet AI online has developed from non-existent a few years ago to a point beyond what I thought would be possible. (Sort of like how those self-driving taxis by Waymo are getting better and better.) I wonder where we'll be ten years from now.
Given that OpenAI recently said they're spending $2 for every $1 they earn, and that trend appears to be repeated across the industry, I think the bottom will have dropped out of this thing long before then. The VCs will cash out, all of these 'AI' services will collapse, and all we'll have to show for it will be a dead internet and a class of creatives no longer able to financially support themselves because the value of their labour has been devalued.
 
Given that OpenAI recently said they're spending $2 for every $1 they earn, and that trend appears to be repeated across the industry, I think the bottom will have dropped out of this thing long before then. The VCs will cash out, all of these 'AI' services will collapse, and all we'll have to show for it will be a dead internet and a class of creatives no longer able to financially support themselves because the value of their labour has been devalued.
Please please please
 
The issue is more pronounced with fiction than non-fiction. On some sites, I can recognize what I think is AI-generated text. It tends to be too long for the subject matter and sounds somewhat generic. However, if I had never heard of AI, I would just think it was dull writing by a human.
I've noticed it most prominently on estate agent listings for houses for sale. They seem to feed in details and get the AI to write the listing, but they don't check for errors. So it ends up being too long (they think more words are better); it sounds completely generic; and there are minor mistakes that it feels like a human wouldn't make.
 
Given that OpenAI recently said they're spending $2 for every $1 they earn, and that trend appears to be repeated across the industry, I think the bottom will have dropped out of this thing long before then. The VCs will cash out, all of these 'AI' services will collapse, and all we'll have to show for it will be a dead internet and a class of creatives no longer able to financially support themselves because the value of their labour has been devalued.
I would argue that human-made art will increase in value, instead.

The main thought anyone has on seeing that something was generated by AI is that it's cheap and vapid. Because AI is all about mashing up averages, it takes no creative risks. Unless the tech bros figure out how to make AI understand what it learns, it'll only get duller and duller.
This can be useful in certain domains, but not in creative ones, and especially not in fiction.

There have already been quite a few scandals with publishing companies opting for AI cover art and both readers and authors being very unhappy about it, to the point where the offending books had to be pulled from the market.

What is likely to happen is that publishing companies will suffer if they don't steer clear of AI, and indie publishing will get more attention. Also, publishers will have to be brave. AI can write for broad audiences and hit all the genre standards that are pushed right now to make as many readers happy as possible... and people will start associating broad-audience writing with AI, which they increasingly see as cheap and boring, which will lead to fewer sales.

I'd say hold on for maybe a year more, until the novelty becomes mundane and the bubble pops... it's already deflating...

[edit on topic]
These AI identifier engines are quite unreliable. I'm studying IT, and our professors seem to have given up on these engines entirely and reverted to the good old plagiarism checker. When we asked about referencing GPT, they shrugged at us and basically said 'make a PDF of the conversation and annex it, if you must' šŸ˜…

I recall some companies filtering their job applicants based on certain words like 'passionate' in their CV/cover letter, but somebody must've realized how dumb that is, considering they expect a certain decorum in those texts, so 'hey, bro, just give me that job, thanks' won't do either.
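Just to illustrate how crude that kind of word filter is, here's a toy sketch in Python (purely hypothetical, not based on any real company's screening system; the buzzword list and function are made up for the example):

# Toy buzzword screen -- hypothetical illustration only.
BUZZWORDS = {"passionate", "dynamic", "team player"}

def keyword_screen(cover_letter: str) -> bool:
    """Pass the applicant if the letter contains at least one buzzword."""
    text = cover_letter.lower()
    return any(word in text for word in BUZZWORDS)

letters = [
    "I am passionate about this role and a dedicated team player.",
    "I shipped three products last year and mentored two junior colleagues.",
    "hey, bro, just give me that job, thanks",
]

for letter in letters:
    print(keyword_screen(letter), "-", letter)
# Only the boilerplate letter passes; the letter with actual accomplishments
# fails for lacking buzzwords, and so does the overly casual one.

Since every applicant who follows the expected decorum uses the same words, a filter like this mostly measures conformity rather than ability.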
 
What OpenAI says and what it does don't have to be the same. And some of the AI stuff has improved. There is a fiction-writing site that an author friend uses, and she claims she doesn't have to edit its output much to make it read as "human." The thing is, in my mind, she isn't a very good writer herself, so she isn't the best judge. If she uses whatever it is (she didn't tell me the site's name), her writing has improved slightly. It's shocking to me that AI might write as well as a bad writer.
Given that OpenAI recently said they're spending $2 for every $1 they earn, and that trend appears to be repeated across the industry, I think the bottom will have dropped out of this thing long before then. The VCs will cash out, all of these 'AI' services will collapse, and all we'll have to show for it will be a dead internet and a class of creatives no longer able to financially support themselves because the value of their labour has been devalued.
 
I've noticed it most prominently on estate agent listings for houses for sale. They seem to feed in details and get the AI to write the listing, but they don't check for errors. So it ends up being too long (they think more words are better); it sounds completely generic; and there are minor mistakes that it feels like a human wouldn't make.
Real estate listings are one place where it might take hold. The buyers and sellers of real estate probably don't care if it sounds generic. The length is an issue, but I'd guess that is something that can be tweaked in the programming.
 
All things human will eventually increase in value, including face to face interaction.
Do you mean human creativity, or anything that humans have a hand in making? It seems that some things do fall in value and are not coming back soon (unless the government subsidizes something, like passenger trains). Face to face interaction: still seems to be falling in "value," however that is measured. There is a huge quantity of interactions that can now be done with a smartphone. And arguably Amazon and such are hitting normal retail operations hard. (Macy's is closing about 150 stores in the next two years.)

Overall, one thing that isn't worth much is predictions.
 
I predict we shan't see the end of the AI discussion in the near future!
Do you mean human creativity, or anything that humans have a hand in making? It seems that some things do fall in value and are not coming back soon (unless the government subsidizes something, like passenger trains). Face to face interaction: still seems to be falling in "value," however that is measured. There is a huge quantity of interactions that can now be done with a smartphone. And arguably Amazon and such are hitting normal retail operations hard. (Macy's is closing about 150 stores in the next two years.)

Overall, one thing that isn't worth much is predictions.
 
I predict we shan't see the end of the AI discussion in the near future!
Well, this is only the 15th post on this thread. Most of the other threads have been writers complaining about AI rejections and how to avoid them. Yet on a couple of other sites of this type, the issue doesn't seem to come up very often. Why only on Lit? The management here seems to have become - concerned? panicked? - and the fear is now reflected back by us.
 
They're coming to get you @gunhilltrain.
Well, this is only the 15th post on this thread. Most of the other threads have been writers complaining about AI rejections and how to avoid them. Yet on a couple of other sites of this type, the issue doesn't seem to come up very often. Why only on Lit? The management here seems to have become - concerned? panicked? - and the fear is now reflected back by us.
 
Just recycled that joke from another AI thread. I'm starting to be as repetitive as AI. Maybe I'm MillieAI!
 
They're coming to get you @gunhilltrain.
I'm going to be seventy next month, so I don't think I care. I'm trying to understand this:

"The first forty years of life give us the text; the next thirty supply the commentary on it."
Arthur Schopenhauer

Since I've been through all of that time, now what?
 
The cat is out of the bag.

Pandora’s box is open.

The toothpaste is out of the tube.

The genie is out of the bottle.


AI may be expensive for developers, but it is economically viable for many businesses. Also, with the way companies can fold without holding individuals responsible, there are many ways for managers to profit from failed businesses.

Also, since AI provides many military advantages it will continue to be developed throughout the world even if it isn’t profitable.



Only a few years ago I was discussing AI on an energy forum when a guy insisted that self-driving cars were only a fantasy. I'd already seen one and told him so, but he didn't want to hear it any more than authors want to hear that AI is still in its infancy.
 
I would argue that human-made art will increase in value, instead.

The main thought anyone has on seeing that something was generated by AI is that it's cheap and vapid. Because AI is all about mashing up averages, it takes no creative risks. Unless the tech bros figure out how to make AI understand what it learns, it'll only get duller and duller.
This can be useful in certain domains, but not in creative ones, and especially not in fiction.

There have already been quite a few scandals with publishing companies opting for AI cover art and both readers and authors being very unhappy about it, to the point where the offending books had to be pulled from the market.

What is likely to happen is that publishing companies will suffer if they don't steer clear of AI, and indie publishing will get more attention. Also, publishers will have to be brave. AI can write for broad audiences and hit all the genre standards that are pushed right now to make as many readers happy as possible... and people will start associating broad-audience writing with AI, which they increasingly see as cheap and boring, which will lead to fewer sales.

I'd say hold on for maybe a year more, until the novelty becomes mundane and the bubble pops... it's already deflating...

[edit on topic]
These AI identifier engines are quite unreliable. I'm studying IT, and our professors seem to have given up on these engines entirely and reverted to the good old plagiarism checker. When we asked about referencing GPT, they shrugged at us and basically said 'make a PDF of the conversation and annex it, if you must' šŸ˜…

I recall some companies filtering their job applicants based on certain words like 'passionate' in their CV/cover letter, but somebody must've realized how dumb that is, considering they expect a certain decorum in those texts, so 'hey, bro, just give me that job, thanks' won't do either.



What does it actually mean to ā€œunderstandā€ something?
 
Lift with the knees when you carry their water, Alex. Wouldn't want to throw out your back.

Also, just this week I learned that Pandora had a jar, not a box. Pandora's Box is the name of the jail on the Pandora, a British naval vessel sent to hunt down the mutineers from the Bounty.
 
What does it actually mean to ā€œunderstandā€ something?
That is the winning question, isn't it?

I can only point out that the biggest difference between AI and a human mind, currently, is that AI just copies and reorganizes from the data it's been trained on, while a human can simply look at something happening in front of them and make some sense of it, even if they've never really gotten any information about it until that moment in time.
You can train a human to regurgitate quotes and reorganized bits of already existing information, like AI currently does, but you can't train an AI to make any sense of what an anthill is by just observing it. Because we don't really understand what understanding is, how thinking works...
 
Lift with the knees when you carry their water, Alex. Wouldn't want to throw out your back.

Also, just this week I learned that Pandora had a jar, not a box. Pandora's Box is the name of the jail on the Pandora, a British naval vessel sent to hunt down the mutineers from the Bounty.

If you’re referring to AI shouldn’t you say ā€œit’s waterā€? šŸ˜‰


But no. I’m not carrying its water. I’m watching it carry water for a variety of users, for a variety of purposes.

I have many concerns over the impact it has had and the ones to come. My number one fear is that people will underestimate it and society will suffer unforeseen consequences.

What it technically is and how it works is of less importance to society than what it does and what it is used for.


Something I’m looking for is a practical sort of Turing Test that I can use when dealing with one while on chat, tech support, or customer service. Have you been initially fooled by an AI while on the phone?

I was on a long tech support call recently, thinking I was dealing with two humans. I thought one was the regular tech and the other was a higher level engineer. Nope. It was the regular guy and his AI assistant. I’d been directly asking it questions for a while before I realized the human tech was restating some of my questions and was redirecting some of the responses. The company is actively training the AI to autonomously answer their tech service calls.

It was me recognizing how the human tech was crafting prompts that clued me in.
 
If you’re referring to AI shouldn’t you say ā€œit’s waterā€? šŸ˜‰


But no. I’m not carrying its water. I’m watching it carry water for a variety of users, for a variety of purposes.

I have many concerns over the impact it has had and the ones to come. My number one fear is that people will underestimate it and society will suffer unforeseen consequences.

What it technically is and how it works is of less importance to society than what it does and what it is used for.


Something I’m looking for is a practical sort of Turing Test that I can use when dealing with one while on chat, tech support, or customer service. Have you been initially fooled by an AI while on the phone?

I was on a long tech support call recently, thinking I was dealing with two humans. I thought one was the regular tech and the other was a higher level engineer. Nope. It was the regular guy and his AI assistant. I’d been directly asking it questions for a while before I realized the human tech was restating some of my questions and was redirecting some of the responses. The company is actively training the AI to autonomously answer their tech service calls.

It was me recognizing how the human tech was crafting prompts that clued me in.
It would be its, not it's, but I meant the people who benefit from AI.

AI is an existential threat that is outside the scope of this thread. Last week I watched Bill Gates talking enthusiastically about how many jobs could be replaced by AI, without once mentioning UBI. As someone who lives only slightly better than paycheck to paycheck, that's me fucked, and I don't have patience for your pragmatism.

This is not the place for that conversation.
 