Can you sue ChatGPT?

It's been out nearly 5 months, what took them so long?

/s
I guess this is the first identified dodgy collation of data in Australia. That's the problem with this version of AI. There is zero intelligence, zero understanding of what the words actually mean in a human sense. It's not AI at all; it's just a photocopier with a pair of scissors, cutting and pasting sentences together.

The owners/promoters of ChatGPT have a very real responsibility (moral and legal) with regard to their "product". I hope this suit (if it can ever proceed, which might be doubtful) sends a very strong message to the AI owners. Take responsibility for your toy.
 
Take responsibility for your toy.
I completely agree, and AI is a controversial subject.

However, ChatGPT is basically, in essence, an interactive Google search engine - by which I mean, its sources are the internet, and by derivation, human behaviours.

What does it say about us as a species when AI bots start stalking us, reflecting our violent culture, and sexually harassing us?

The problem with AI is it learns from all the wrong inputs - us.

Skynet had a point.
 
However, ChatGPT is basically, in essence, an interactive Google search engine - by which I mean, its sources are the internet, and by derivation, human behaviours.

That's the misunderstanding in a nutshell: ChatGPT is very much not a search engine. Its training data is capped at some point in 2021, and there has been no new input since. It doesn't search for anything; it just predicts which word is statistically likely to be appended to the current text, with no deeper understanding of context.
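To make the "statistically likely next word" point concrete, here's a deliberately crude sketch (my own toy illustration, nothing like the real model, which is a neural network trained over tokens): a bigram table built from a tiny frozen corpus, which continues text by sampling whichever word most often followed the previous one. Nothing is searched or checked; the frozen corpus stands in for the pre-2021 training data.

Code:
# Toy sketch only: continue text purely by word-following frequencies.
import random
from collections import Counter, defaultdict

# A tiny frozen "corpus" standing in for the pre-2021 training data.
corpus = ("the mayor was a whistleblower in the bribery case "
          "the mayor was praised for reporting the bribery").split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(start, length=6):
    # Repeatedly append a word sampled in proportion to how often it
    # followed the previous word in the corpus -- no lookup, no notion of truth.
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("mayor"))

Scale that idea up enormously and you get fluent, authoritative-sounding text, which is exactly how confidently wrong answers happen.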
 
by which I mean, its sources are the internet, and by derivation, human behaviours.

That's the misunderstanding in a nutshell: ChatGPT is very much not a search engine. Its training data is capped at some point in 2021, and there has been no new input since. It doesn't search for anything; it just predicts which word is statistically likely to be appended to the current text, with no deeper understanding of context.
I was simplifying. The key point, and its relevance to this discussion, is its sources.

ChatGPT's material came from 'a massive corpus of data written by humans. That includes books, articles, and other documents across all different topics, styles, and genres—and an unbelievable amount of content scraped from the open internet.'

So while it may not technically be a 'search engine', it has all the data that a search engine has. It's just already searched it.
 
Seems like a misguided claim to me, although I wish the article were more precise about exactly what the defamation was and the circumstances under which ChatGPT generated the false information about the mayor. I don't see how there is any human agency behind the information about the mayor. Plus, while ChatGPT may use publicly available sources of information to generate content, it's not principally an information source itself (that's my understanding -- do people actually use it to gather information, and is that considered reasonable? My understanding is its capacity for error is well known). It's not Wikipedia or Alexa. It's plainly unreasonable for a person to rely upon it for this purpose, so a suit makes no sense to me. But I don't know Australian defamation law.

I didn't see a specific disclaimer of accuracy in OpenAI's Terms of Service, but I'm not that familiar with this service and perhaps it's elsewhere. Such a disclaimer would be a very good idea if it's not in the terms yet.

In the USA, this lawsuit would never fly, because the plaintiff is a sitting mayor, and under NY Times v. Sullivan, if the plaintiff is a public figure like a politician, he must prove that the defendant acted with actual malice, and that would be impossible to prove in this case. The UK does not, to my recollection, have an "actual malice" requirement, and I assume Australia doesn't either.
 
Under Bing Chat’s FAQs:

“Are Bing’s AI-generated responses always factual?


Bing aims to base all its responses on reliable sources, but AI can make mistakes, and third-party content on the internet may not always be accurate or reliable. Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate or inappropriate. Use your own judgement and double-check the facts before making decisions or taking action based on Bing’s responses.
To share site feedback or report a concern, open the menu at the top right corner of a response, and then click the flag icon. You can also use the feedback button at the bottom right of every Bing page.”

ChatGPT provides links to the sites it’s scraped. What did they say? What legal effect would it have if they told a different story? After all, it is a search engine that provides you with links, and it links to sites which it acknowledges it may summarise incorrectly.

Very unlikely to be a ‘get out of jail free’ card, but it’s the only additional matter to be considered in suing internet site owners who may be anonymous or outside the jurisdiction. If the defendant and/or his assets are in the jurisdiction, there’s something to go against. Otherwise, can the litigant sue in the Defendant's home jurisdiction? The only leverage always available to a state is to block the website within its jurisdiction; an order to do so may be available to a litigant, but that’s a matter of domestic jurisdiction.

I’d advise the aggrieved person to publicise the wrong done to him as widely as possible on the internet, start a ‘GoFundMe’ so the many enemies of AI can fund his litigation, and give me the money to sue for him; I’ve nothing else to do for the next 10 years.
 
From their FAQ page

https://help.openai.com/en/articles/6783457-what-is-chatgpt
Can I trust that the AI is telling me the truth?
  • ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.

    We'd recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the "Thumbs Down" button.
 
I have to amend my response somewhat. I had used ChatGPT previously just to generate fiction, but I tried it out for data-gathering a few minutes ago.

I asked this question: Did O.J. Simpson commit murder?

It gave me this answer:

O.J. Simpson was acquitted of the murders of his ex-wife Nicole Brown Simpson and her friend Ron Goldman in a criminal trial in 1995. However, he was found liable for their deaths in a subsequent civil trial in 1997 and was ordered to pay millions of dollars in damages to the victims' families.

The criminal trial was highly controversial and attracted significant media attention due to Simpson's status as a famous athlete and celebrity. The prosecution argued that Simpson had a motive to kill his ex-wife and had evidence linking him to the crime scene. However, the defense team raised doubts about the prosecution's evidence and the reliability of key witnesses.

While the criminal trial resulted in an acquittal, opinions about Simpson's guilt or innocence remain divided among the public and legal experts. It is worth noting that Simpson cannot be retried for the same crimes due to the double jeopardy principle, which prohibits a person from being tried twice for the same offense.


So I suppose there are a number of questions at issue here:

1. Is it reasonable under all the circumstances for a person to rely upon ChatGPT as an information source, given what seems like a very high risk of error?
2. Does OpenAI have some minimal duty to keep it accurate?
3. If there is a duty, is it taking commercially reasonable steps to maintain its accuracy?

This will be very interesting to follow.
 
Prompt:

Is ChatGPT being sued in Australia?

Response:

Yes, an Australian mayor named Brian Hood has threatened to sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery [1][2][3].

Learn more:

1. bing.com  2. moneycontrol.com  3. msn.com  4. msn.com  5. silicon.co.uk  6. silicon.co.uk  (+2 more)
 
Prompt:

Is ChatGPT being sued in Australia?

Response:

Yes, an Australian mayor named Brian Hood has threatened to sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery [1][2][3].

Learn more:

1. bing.com  2. moneycontrol.com  3. msn.com  4. msn.com  5. silicon.co.uk  6. silicon.co.uk  (+2 more)
That's interesting, because it answers "yes" but the true answer is no: it's not being sued yet, but there is a threat of a suit. So even this small sample size shows it's prone to error.
 
1. Is it reasonable under all the circumstances for a person to rely upon ChatGPT as an information source, given what seems like a very high risk of error?
No, but they will anyway.
2. Does OpenAI have some minimal duty to keep it accurate?
Doubtful. All they need to do is say 'We took some sources from the internet. So, like, it's about as accurate as the internet (not very).'
3. If there is a duty, is it taking commercially reasonable steps to maintain its accuracy?
Highly doubtful, especially as the data is historic.

Armchair expertise at your service :)
 
ChatGPT provides links to the sites it’s scraped. What did they say? What legal effect would it have if they told a different story? After all, it is a search engine that provides you with links, and it links to sites which it acknowledges it may summarise incorrectly.
In which case it's not worth a pinch of shit, and its owners are morally bankrupt pretending it's got any worth at all, telling the public it's the next best thing in search engines. It's being spruiked as a foundation of truth, whereas in fact it's worthless. Clearly, you can't trust anything it regurgitates; there's no way to tell what is true and what is not.

The problem as I see it, is that the burden of proof falls on the aggrieved, and the AI owners just say, "Well, I told you it might be garbage." Not good enough, stinking rich AI snake oil salesmen. Put your money where your machine's mouth is, and take responsibility for the malfeasance that comes from your product.
 
That's interesting, because it answers "yes" but the true answer is no: it's not being sued yet, but there is a threat of a suit. So even this small sample size shows it's prone to error.
'Yes, an Australian mayor named Brian Hood has threatened to sue -'
It'd be an interesting topic to discuss on another occasion, but whose understanding of 'natural language' in this particular instance is better, yours or ChatGPT's?

There's a whole new subject of 'prompt engineering' developing that's devoted to crafting prompts that chatbots can't easily misread. Their 'natural language' is not the same thing as ours.
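As a rough illustration of what that craft looks like (a sketch only, assuming the 2023-era openai Python package, whose interface has since changed; the prompts, the "treat a threatened suit as no" rule, and the model name are just my own examples), the same question can be asked loosely or with the answer pinned down:

Code:
import openai  # assumes the 2023-era openai package (0.27-style interface)

openai.api_key = "sk-..."  # placeholder key

question = "Is ChatGPT being sued in Australia?"

# Loose prompt: the bot is free to blur "sued" with "threatened to sue".
loose = [{"role": "user", "content": question}]

# Engineered prompt: pin down the answer format and the edge case explicitly.
engineered = [
    {"role": "system", "content": (
        "Answer only 'yes', 'no', or 'unknown'. Treat a threatened lawsuit "
        "as 'no' unless proceedings have actually been filed. Then add one "
        "sentence of justification.")},
    {"role": "user", "content": question},
]

for label, messages in [("loose", loose), ("engineered", engineered)]:
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(label, "->", reply["choices"][0]["message"]["content"])

A tighter prompt constrains the form of the answer, not its truth; you still have to check it.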
 
As some of you have pointed out, these AI systems are effectively doing an Internet search for the opinions and facts they use when "chatting" with users.

These increasing examples of blatantly false "facts" being relayed are the proof (intelligent) people have needed.

For YEARS, some people have been trying to point out how the popular media and the Internet are not spreading "news" but rather spreading lies. And yet, even when confronted with independent evidence, many people continue to maintain their first impressions based on the false input.

On a scarier note, if the AI is showing us that the preponderance of the Internet is wrong on more and more points, it's an objective gauge showing us the decline in human intelligence!

Question to ChatGPT: "How intelligent are you?"
Answer is probably: "I'm only as smart as the average Internet using human."
 
There's a whole new subject of 'prompt engineering' developing that's devoted to crafting prompts that chatbots can't easily misread. Their 'natural language' is not the same thing as ours.
Worth big bucks too, apparently, the art of asking the “right” question. All seems a bit pointless if you still don't know whether the answer is correct or not.
 
Q: How accurate are your answers?
CGPT: ...it is important to note that my responses are generated based on patterns and associations that I have learned from large amounts of text data, and there may be instances where my answers are not completely accurate or may not fully capture the intended meaning of a question...it is important to approach my answers with critical thinking and to verify information through additional sources when necessary.

And the lawsuit dies there. Next?
 
"Defamation is a false and unprivileged statement of fact that is made intentionally or negatively about a person to a third party in public.

It would depend on how someone phrases their question. If you ask the AI "Was that politician guilty of a crime?" and the AI replies "Yes" with no further context when the truth is that the politician was not, then the AI has defamed that politician. The AI is not another human with their opinions. It's a tool. And a broken tool needs to be replaced.

If you cut a few inches off a tape measure and insist your workers use that tape when measuring boards to cut, your boards will usually be cut to the wrong length. The new homebuyer who hired you to build their home would rightfully sue you for using defective tools.
 
It would depend on how someone phrases their question. If you ask the AI "Was that politician guilty of a crime?" and the AI replies "Yes" with no further context when the truth is that the politician was not, then the AI has defamed that politician. The AI is not another human with their opinions. It's a tool. And a broken tool needs to be replaced.

[Attached screenshot: Screenshot 2023-04-06 152635.png]

Pretty bland.
 
Another reason this probably wouldn't be an issue in the USA is section 230 of the Communications Decency Act, which provides that the provider of an interactive computer service shall not be treated as a publisher of information provided by a third party content provider. It's likely that OpenAI would fall within the scope of section 230 and be immune to liability.
 
Another reason this probably wouldn't be an issue in the USA is section 230 of the Communications Decency Act, which provides that the provider of an interactive computer service shall not be treated as a publisher of information provided by a third party content provider. It's likely that OpenAI would fall within the scope of section 230 and be immune to liability.

I expect this is going to be the case across most of the English-speaking world, and probably beyond; it's why nobody has successfully sued Facebook and the like over content. But that, of course, doesn't mean that if you or I get ChatGPT (or any of the other bots) to write us something, and we then publish it, we will also be exempt from defamation or libel actions. I expect that this is going to be a hard lesson for quite a number of people to learn.
 
I expect this is going to be the case across most of the English-speaking world, and probably beyond; it's why nobody has successfully sued Facebook and the like over content. But that, of course, doesn't mean that if you or I get ChatGPT (or any of the other bots) to write us something, and we then publish it, we will also be exempt from defamation or libel actions. I expect that this is going to be a hard lesson for quite a number of people to learn.

This is a good point. If you obtain information from a source like ChatGPT, which may be protected by law, and you then repost that information under your own name, you are a content provider, and not merely an interactive computer service, and you are therefore potentially liable as a publisher/speaker.
 