ChatGPT your way past writer's block

A lot of people play with ChatGPT the way they play with the funny face-morphers on their phone. It's fun.

But there's an aspect of it that reminds me of the increasing "why walk when you can drive" attitude. Why cook when you can get it delivered? Why get healthy when you can medicate?

The idea of making an effort is subversive these days. How dare you opt out of all those wonderful labor-saving machines, devices and services we're selling? What are you, some kind of pinko?
 
In a previous post, I quoted ChatGPT in answer to the question, "how should ChatGPT be credited?" It turns out ChatGPT's answer contradicts OpenAI's own terms and conditions.
First case: If you copy and paste ChatGPT text, you must unambiguously credit ChatGPT with its creation.
Second case: If you copy and paste, and then heavily modify, ChatGPT text, you must present a clear acknowledgement that ChatGPT was used in the creation of your work, much as you would acknowledge the contribution of an editor or beta reader. If you use ChatGPT in any other way, there is no legal requirement to credit it or OpenAI.
I would never do either of those. That's just not how I would ever use an AI, because I want the content to be mine.
 
“Of course, the Marxian definition of value is ridiculous. All the work one cares to add will not turn a mud pie into an apple tart; it remains a mud pie, value zero. By corollary, unskillful work can easily subtract value; an untalented cook can turn wholesome dough and fresh green apples, valuable already, into an inedible mess, value zero. Conversely, a great chef can fashion of those same materials a confection of greater value than a commonplace apple tart, with no more effort than an ordinary cook uses to prepare an ordinary sweet.”
― Robert A. Heinlein, Starship Troopers
 
“Of course, the Marxian definition of value is ridiculous. All the work one cares to add will not turn a mud pie into an apple tart; it remains a mud pie, value zero. By corollary, unskillful work can easily subtract value; an untalented cook can turn wholesome dough and fresh green apples, valuable already, into an inedible mess, value zero. Conversely, a great chef can fashion of those same materials a confection of greater value than a commonplace apple tart, with no more effort than an ordinary cook uses to prepare an ordinary sweet.”
― Robert A. Heinlein, Starship Troopers

On the other hand, from an author every bit as famous as Heinlein:

"There is an old song which asserts 'the best things in life are free.' Not true! Utterly false! ... Nothing of value is free. Even the breath of life is purchased at birth only through gasping effort and pain."
― Robert A. Heinlein, Starship Troopers

(There's a hidden-cost fallacy in the first quote, for those who care to look for it.)
 
The idea of making an effort is subversive these days. How dare you opt out of all those wonderful labor-saving machines, devices and services we're selling? What are you, some kind of pinko?
Sounds like another SciFi movie plot...

 
One, you chopped off the other part of the quote:
“…‘Value’ has two factors for a human being: first, what he can do with a thing, its use to him . . . and second, what he must do to get it, its cost to him. There is an old song which asserts ‘the best things in life are free.’ Not true! Utterly false! This was the tragic fallacy which brought on the decadence and collapse of the democracies of the twentieth century;”
― Robert A. Heinlein, Starship Troopers

And two:
"Allegory is always suspect" - Paraphrased from memory from same.


Though may I say how delighted I was to speak with another cultured person.
 
There may come a day when humans write worse than computers, but if that happens, we're in worse trouble than worrying about mere plagiarism.

If you take transhumanism seriously, before it gets to that, humans and computers will merge. If you can't beat them, join them.
 
Meh. Microsoft Bing is rumored to be planning to add optional ChatGPT functionality to its public search engine, but I don't think that will make people ditch Chrome. Google is aware of the danger and is investing its best resources to develop its own AI.

It's an amazing tool - to ignore it is to ignore the future.

It has the potential to be for the 2020s what blockchain was for the 2010s.
 
It has the potential to be for the 2020s what blockchain was for the 2010s.
But actually potentially useful.

Blockchain is an extremely niche tech that can be better implemented using existing mature tech. It only became a buzzword because of crypto. Without that, it's just a public ledger system.
 
Blockchain is an extremely niche tech that can be better implemented using existing mature tech. It only became a buzzword because of crypto. Without that, it's just a public ledger system.

I was deliberately ambiguous there, but that's exactly the kind of thing I was getting at.

Blockchain and GPT are both impressive feats of engineering, and I admire the cleverness that went into each of them. But that cleverness has a way of dazzling people so they forget to ask "so what are the use cases for this?"

(with the follow-up question "okay, but what are the legitimate use cases?")

A lot of businesses got into blockchain not because they had a good answer to that question, but because it was hyped very effectively and they were terrified of being left behind. That "to ignore it is to ignore the future" line is exactly the sort of thing the blockchain salesmen were saying a few years back.

Re-quoting a GPT session I recently shared in a different thread:

can GPT reliably multiply 3-digit numbers?

Yes, GPT can reliably multiply 3-digit numbers, as well as larger numbers. It has been trained on a wide range of mathematical calculations, including multiplication, and can perform these calculations accurately.

what is 123 times 567?

70041

what is 81 times 74?

6004

Both those answers are wrong. Outside of crime, things that aren't crimes but ought to be, and the fiction market - which I'm sorry to say is small potatoes in the grand scheme of things - it's not clear to me what the business case is for a computer that can't reliably multiply two-digit numbers, but can lie about being able to do so.
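Those failures are easy to verify. A trivial sanity check, using only the numbers from the transcript above:

```python
# GPT's claimed answers from the transcript, checked against
# actual integer multiplication.
claims = {
    (123, 567): 70041,  # GPT's answer
    (81, 74): 6004,     # GPT's answer
}

for (a, b), gpt_answer in claims.items():
    correct = a * b
    verdict = "right" if gpt_answer == correct else "wrong"
    print(f"{a} x {b} = {correct}; GPT said {gpt_answer} ({verdict})")
```

The correct products are 69,741 and 5,994 respectively, so GPT was off by 300 and by 10.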

(I expect the multiplication issue will get patched sooner or later, because it's a particularly obvious and embarrassing one, and there's a hacky work-around that could address this particular case. But it's symptomatic of a much deeper deficit and I'm not sure if there is a systematic way to fix that deeper problem.)

I can see legitimate niche applications here and there. Take a search query, use GPT to paraphrase it and see if the paraphrased version finds hits the original version didn't - stuff like that could work. But I've been around long enough to be wary of technology hype cycles. Re. search engines specifically, I suspect any improvement it makes to the search algorithm is going to be more than offset by how easy it will be for people to use GPT-generated websites to capture search hits :-/
 
It's got onto the BBC, and there's GPTZero, which can supposedly spot plagiarised text... like medical exam questions
Not very well though: https://futurism.com/gptzero-accuracy

"We fed it a total of sixteen pieces of text, each at least 300 words in length, eight pulled from our own archives and eight generated by ChatGPT.... GPTZero correctly identified the ChatGPT text in seven out of eight attempts and the human writing six out of eight times."

Based on those error rates, if you've got a class of 40 students and 20% of them are using GPT to cheat (that's 8 cheats and 32 honest students), you'd end up with:

- 7 cheats caught
- 1 cheat not caught
- 8 honest students falsely tagged as cheats

The first two of those numbers are pretty good but the third is a nightmare.
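Working through the arithmetic behind those numbers (a rough sketch; it treats the article's 7-out-of-8 and 6-out-of-8 hit rates as exact probabilities, which a sample of eight can't really support):

```python
class_size = 40
cheat_fraction = 0.20

cheats = int(class_size * cheat_fraction)  # 8 students using GPT
honest = class_size - cheats               # 32 students writing honestly

detect_rate = 7 / 8    # GPTZero caught ChatGPT text 7 times out of 8
human_id_rate = 6 / 8  # and correctly identified human text 6 times out of 8

caught = cheats * detect_rate                   # cheats caught
missed = cheats * (1 - detect_rate)             # cheats not caught
false_positives = honest * (1 - human_id_rate)  # honest students flagged

print(f"caught: {caught:.0f}, missed: {missed:.0f}, "
      f"falsely flagged: {false_positives:.0f}")
```

Because the honest students outnumber the cheats four to one, even a modest false-positive rate produces as many wrongly accused students as correctly caught ones.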
 
Prompt: How would a student, who uses GPT to write essays, be detected?

A student who uses GPT to write essays could be detected by several methods, including:
  1. Plagiarism detection software: These tools can compare the text in an essay to a large database of existing content to determine if it's original.
  2. Linguistic analysis: Human readers or specialized software can analyze the text for patterns and characteristics that are inconsistent with typical human writing, such as lack of coherence, repeating phrases, or unnatural phrasing.
  3. Human review: Teachers and professors can also identify essays generated by GPT by looking for signs such as generic or impersonal writing, missing context or background information, or a lack of critical thinking and analysis.
It's important to note that using GPT or any other AI tool to write essays is considered academic dishonesty and can result in serious consequences.


Prompt: What are the consequences for the institution if they wrongly discipline a student for plagiarism?

The consequences for an institution if they wrongly discipline a student for plagiarism can be significant and may include:
  1. Legal action: If a student feels that their rights have been violated, they may take legal action against the institution.
  2. Damaged reputation: If a student is wrongly accused of plagiarism, it could damage the institution's reputation and potentially lead to a loss of trust from students, parents, and the wider community.
  3. Loss of funding: Depending on the severity of the situation, the institution could face a loss of funding from government or other sources if the error is not corrected and addressed in a satisfactory manner.
  4. Negative publicity: The situation could also attract negative publicity and media attention, further damaging the institution's reputation.
Therefore, it is important for institutions to have clear policies and procedures in place for dealing with allegations of plagiarism, and to take all necessary steps to ensure that they are applied fairly and accurately.

My advice: Deny, deny, deny, deny until you die. See who blinks first.
 
One of the points made about exam cheats is asking the students for their sources. ChatGPT is drawing information from web content it calculates is relevant, but as yet it doesn't reveal what those sources were.
 
One of the points made about exam cheats is asking the students for their sources. ChatGPT is drawing information from web content it calculates is relevant, but as yet it doesn't reveal what those sources were.

I'm sure someone is working on that.

Tight rubrics should reduce the cheating students' advantages. Provided those rubrics are self-produced and never get posted anywhere.
 
One of the points made about exam cheats is asking the students for their sources. ChatGPT is drawing information from web content it calculates is relevant, but as yet it doesn't reveal what those sources were.

I suspect getting it to write a well-sourced essay is going to be difficult. It has plenty of sources in its training corpus, but it doesn't memorise all of them verbatim, and it'd be hard to adapt it to track the provenance of facts because it's not "thinking" in terms of facts at all.

OTOH, it can fake the style of a sourced piece:

"write a three-paragraph literature review on the scientific evidence for the effects of smoking, following Harvard citation style, and including all cited sources"

[intro and closing paragraphs snipped]

A meta-analysis of prospective cohort studies published in the New England Journal of Medicine found that smokers had a significantly increased risk of developing coronary heart disease, stroke, and lung cancer compared to never smokers (Peto et al., 1994). Similarly, a systematic review published in the Journal of the American Medical Association found that smokers had a two- to four-fold increased risk of developing cardiovascular disease compared to never smokers (Graveling & Lancaster, 2009). In addition, smoking has been shown to significantly increase the risk of developing several types of cancer, including lung, pancreatic, and bladder cancer (U.S. Department of Health and Human Services, 2014).

...

References:

Graveling, A., & Lancaster, T. (2009). Smoking and cardiovascular disease: a systematic review of the epidemiological evidence. Journal of the American Medical Association, 301(19), 2035-2042.

Peto, R., Lopez, A. D., Boreham, J., Thun, M., & Heath, C. (1994). Mortality from tobacco in developed countries: indirect estimation from national vital statistics. New England Journal of Medicine, 330(9), 1253-1259.

U.S. Department of Health and Human Services. (2014). The health consequences of smoking—50 years of progress: a report of the Surgeon General. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.

That all seems pretty reasonable.

But the Graveling & Lancaster paper doesn't exist. JAMA 301(19) does, and the date and page numbers are plausible, but no such paper is listed in the contents. Similarly for the NEJM paper. The CDC report is real; I guess that one's prominent enough that GPT has learned the cite verbatim.

On a quick search, several of the authors cited are real people who have published in smoking research. Peto, Lopez, and Lancaster all exist. I couldn't find an "A. Graveling" but there's both an "R.A. Graveling" and a "P.A. Graveling" in the field (maybe related?) - perhaps GPT got confused by them both having the same middle initial and figured that "A. Graveling" was a plausible name for this kind of content?

I'm sure someone is working on that.

Tight rubrics should reduce the cheating students' advantages. Provided those rubrics are self-produced and never get posted anywhere.

The glib answer I've seen from various people is that if GPT can pass your tests, you're testing the wrong things. I think there's some truth in that but it's not a very complete answer.

Humans don't necessarily trust other humans, but we have a lot of experience in gauging when somebody's likely to lie and what it looks like when they lie. GPT lies in ways that we're not used to, for reasons most people would have difficulty even comprehending. Eventually I expect we'll learn, but that's going to be painful and the increased distrust is going to impede how we communicate with one another.

There are plenty of GPT-proof ways to assess a student's capability but the ones I know of require a lot more time and work, often one-on-one attention from the teacher, and that means paying for more teachers, which isn't popular these days.
 
Sudowrite was created for this very purpose. For writing stories, it's far more sophisticated than ChatGPT.
 
I see some are saying this topic is being beaten into the ground already, but here are my two cents worth, anyway. ChatGPT is only going to get better and more sophisticated. Despite all the chest-pounding about “real” writers being the only worthy ones (and I agree whole-heartedly), without a doubt more and more stories are going to appear here using it. Maybe whole stories in just a short while as the technology “improves.”

Here’s the scary thought: What will it mean when ChatGPT stories get more readers, higher ratings, and more positive comments than real authors get for theirs? Maybe it won’t mean a thing. I hope it will mean at least a little something.

For an interesting point of reference, I suggest checking out The Twilight Zone episode called “Steel,” about machines replacing boxers, especially Rod Serling’s comments at the end.
 
Dude, that link is going to get blown way out of the water as soon as a mod gets here. And that'll happen three times, or however many you post it.
 