AI writing is really good

astuffedshirt_perv

Literotica Guru | Joined Jun 22, 2002 | Posts: 1,405
The Atlantic just published an article "AI Cheating Is Getting Worse--Colleges still don’t have a plan."
"Teachers...wonder if they are grading students or computers"
"Generative AI has "Pretty much ruined the integrity of online classes..."

TL;DR: Not only is GPT really good, too good to detect reliably, but it's also so easy to use that nearly all college students are using it. It is driving English professors to despair.
 
There's a very simple way to deal with generative AI in Universities.

It's called exams, and it's worked well for centuries.
 
I'd be more concerned about what this means for us writers getting our stories falsely rejected for being AI.

I suspect that ship will sail, eventually.

Unless something stops AI development, it will eventually be good enough to mimic even good human writers. It's a genie that will not want to be bottled, so it'll find a way to keep improving.

I doubt this is stoppable. We are, it seems, as stupid as I always assumed we were.
 
I think it's not so much that AI writing is good, as that the median student essay in college is... not good, shall we say
That's being rather generous.

But as Wanda pointed out, the answer is actually quite simple. Unfortunately, it requires more work on the part of the Professors, so it is likely a non-starter.

Some of the old geezers I work with have told tales of blue exam books they took to class, then wrote out their essays under the watchful eye of a Professor or, more often, a Grad Assistant. So there is some precedent for this.
 

I'm no geezer, and yet every one of my exams was taken in a blue book. We had to buy our own, too.

The issue is that it's about 200% easier to manage classes by having students do things digitally, instead of on paper. In this case, the path of least resistance for the professors is also the one that encourages AI use by students.

In addition, most college instructors who don't have tenure rely on people signing up for their classes for their continued employment. If word spreads among students that Dr Smith makes it near-impossible for kids to cheat using AI, then many of those kids simply won't take Dr Smith's class. And that means Dr Smith has no job.
 
I remember Dr. Smith from Lost in Space. Him not having a job would be a good thing. :)
 
I don't know what to tell you. It's here and it's not going away and we need to learn how to live with it. Businesses are going to use and abuse this technology like crazy; cutting the creator out of the expense column has been a goal of every industry ever.

Look for ways to use the tool. Maybe you create an ethically-sourced LLM that doesn't infringe upon the copyrights of creators. Maybe you create an ethically-sourced generative AI that doesn't use copyrighted images. Maybe you use AI to flesh out your plotlines before you write your final draft. Maybe you use it to edit your work. Maybe you use it to take an existing story and code it into a choose-your-own-adventure format. And we'll have to adapt to the capabilities it provides to do deepfakes, cheat on academics, and so on.

Hiding our heads in the sand has *never* worked and is a waste of effort. Embrace the change and look for ways to ethically employ the new capabilities that have been handed over to us.
 
I'm no geezer, and yet every one of my exams was taken in a blue book. We had to buy our own, too.

The issue is that it's about 200% easier to manage classes by having students do things digitally, instead of on paper. In this case, the path of least resistance for the professors is also the one that encourages AI use by students.

In addition, most college instructors who don't have tenure rely on people signing up for their classes for their continued employment. If word spreads among students that Dr Smith makes it near-impossible for kids to cheat using AI, then many of those kids simply won't take Dr Smith's class. And that means Dr Smith has no job.
Well, I already addressed the "more work for the Prof" argument.

As for people avoiding a Prof's class, that is an easy one: make it a Department or University policy. If everyone is doing it, then students are complaining about the whole department, and it all balances out.
It doesn't make sense for a single Prof to try and stop it. It's a systemic problem and needs to be addressed as one.
 

That simply creates the same problem at a different level. How long before students merely avoid that department, or enroll at some other university?

Remember, higher education in the USA is a money game, not an education game. And it's also a competitive environment with a free market. This puts universities in an awkward position on issues like this: they have to make money to stay viable and attract professors. They have to keep bringing in students to maintain their ranking in US News And World Report, or whoever is grouping colleges this week. They have to stay competitive, in a word.

Consciously rejecting new ideas is not something that's likely to sit well, either with students or with younger and less hidebound professors. Even the younger people I'm working with don't understand why AI is bad. A whole generation is well on its way to thinking AI is bitchin'.

I don't have a counter that I think will work. I think we're doomed, and this is just example #34,162.
 

Because there will be significant reputational damage to the ones that don't. One of the reasons the Ivy League can charge a premium for their diploma is because they purport to have a better product.
Perhaps it is just the Marketing Major in me, but there seems to be an advantage to marketing yourself as a school for students who don't want to cheat.
Employers respond to that kind of thing.
Remember, one reason college took off in the first place is that employers had to use it as a signaling device after Griggs v. Duke Power. They couldn't test your intelligence, but a college could.
 

I hope you're right.

I am not optimistic.
 
I teach, and I can promise you that AI-generated text is (very often, though not always, granted) all too easy to spot. Does that essay look too good? It used to be that we would run random sentences through Google to catch the plagiarists (there are always a few who think we won't notice - my favourite was a colleague who had a student plagiarise her own work and then swear blind that they hadn't), and now that has largely switched to ChatGPT. The solution is, as Onehitwanda points out, very simple and I use it all the time - written work undertaken during classes. I never used to do that, as I regarded it as a waste of contact time. Now, however, it is essential.

And every semester a student complains and asks aloud why we are bothering, as LLMs are going to be doing so much of the heavy lifting in the workplace in future, and my answer is simple - your future employer needs to know that you know enough to catch out the LLMs when they hallucinate. And that is at least in part what we are doing in higher education: ensuring that the students know enough to do the job. The ones who rely too heavily on AI tools will get caught out down the road, and I love pointing out those workers who have already lost out through bad AI use.
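(For what it's worth, the "run a random sentence through Google" trick is really just exact-phrase matching. Here's a toy Python version that checks a submission against a local folder of known sources; the file names and the eight-word window are my own arbitrary choices for illustration, not how any real detector works.)

```python
# Toy plagiarism check: flag any 8-word phrase from a submission that
# appears verbatim in a folder of known sources. A stand-in for the
# "run a sentence through Google" trick, not a real detector.
from pathlib import Path

def shingles(text, n=8):
    """Yield every n-word window ("shingle") in the text."""
    words = text.split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def verbatim_overlaps(essay, sources_dir, n=8):
    """Return (phrase, source filename) pairs found word-for-word."""
    hits = []
    for path in Path(sources_dir).glob("*.txt"):
        source = path.read_text(encoding="utf-8")
        for phrase in shingles(essay, n):
            if phrase in source:
                hits.append((phrase, path.name))
    return hits

if __name__ == "__main__":
    # "submission.txt" and "known_sources" are hypothetical paths.
    essay = Path("submission.txt").read_text(encoding="utf-8")
    for phrase, name in verbatim_overlaps(essay, "known_sources"):
        print(f'{name}: "...{phrase}..."')
```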
 
That simply creates the same problem at a different level. How long before students merely avoid that department, or enroll at some other university?
Sounds like what's left will be the cream of the crop. In the end, the best universities will be defined by people who use their brains rather than their computers.
 
Interesting head-to-head that went about as I expected:

https://www.nytimes.com/2024/08/20/opinion/beach-read-ai.html
To summarize the article: the GPT wrote a very boring 1,000-word story in about 17 seconds. BUT (and it's a big BUT), the author spent ~2 weeks on hers, wrote a ~2,000-word story, and then edited it significantly. Also, this apparently just used the off-the-shelf standard ChatGPT.
Now, the question at hand, I would think, is: could she run GPT, then edit the story to create something of quality? If you view GPT as a first-draft maker...
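If anyone wants to see what "GPT as a first-draft maker" looks like mechanically, here's a minimal sketch using OpenAI's Python client. The model name and the prompt are my own placeholders, and the point is the division of labour: the machine produces raw material in seconds, and everything that makes it worth reading has to come from the human editing pass afterward.

```python
# Minimal "first draft maker" sketch. Assumes the openai package (v1+)
# and an OPENAI_API_KEY set in the environment; the model and prompt
# below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Write a 1,000-word first draft of a short story about two strangers "
    "who meet on a beach. Plain prose, no headings."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Save the raw draft; the slow part (cutting, rewriting, finding a
# voice) is still done by the human afterward.
with open("first_draft.txt", "w", encoding="utf-8") as f:
    f.write(response.choices[0].message.content)
```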
 
Maybe I'm the lone bird here that doesn't agree that AI is good. It often produces illogical sentences and happenings. It repeats phrases, overuses the physical attributes of characters, and has unimportant characters appear, get names and meaningful moments, and then vanish, never to be mentioned again.
 
To summarize the article: the GPT wrote a very boring 1,000-word story in about 17 seconds. BUT (and it's a big BUT), the author spent ~2 weeks on hers, wrote a ~2,000-word story, and then edited it significantly. Also, this apparently just used the off-the-shelf standard ChatGPT.

...two weeks on a 2k word story? Ummmmm...

Now, the question at hand, I would think, is: could she run GPT, then edit the story to create something of quality? If you view GPT as a first-draft maker...

I'm not a professional journalist, but I'd imagine that field tends to attract people for whom writing is generally pretty effortless (maybe not, actually, if it takes them 2 weeks to generate 2k words...). I'd guess the first draft would be something they could crank out more or less automatically. I write a lot at my work, and to be honest it's probably faster (and obviously much better) for me to simply write something myself and know it's right, rather than having to devise the right AI prompt and then parse out the resulting composition to make sure it's any good.
 
......Hiding our heads in the sand has *never* worked and is a waste of effort. Embrace the change and look for ways to ethically employ the new capabilities that have been handed over to us.

This is the right attitude. Back when I was in high school, electronic calculators were just becoming affordable to the average person. Teachers were UP IN ARMS about students using calculators in class. They aren't really learning math! They are just pushing buttons!! Nowadays we realize that calculators are just tools that allow humans to use their own internal abilities more efficiently than without them. Maybe we aren't as good at multiplication tables as we used to be, but we get the work done a lot quicker.

AI will end up being the same: just a tool to let a competent person do a task more efficiently. Universities will need to accept that students are going to use it, and make that part of the "new normal." Just teach them how to use them in the best way possible.
 
At this point, whether or not universities accept the use of AI for writing, this site doesn't.
 
There's a very simple way to deal with generative AI in Universities.

It's called exams, and it's worked well for centuries.
It used to be that frat houses (maybe sororities too?) kept old term papers on file in case some member wanted to re-use one. My college didn't have fraternities, but I heard that it was fairly common. Most college term papers are kind of bland anyway. I wrote many a day or two before they were due, so they were not examples of sparkling writing.
 
This is the right attitude. Back when I was in high school, electronic calculators were just becoming affordable to the average person. Teachers were UP IN ARMS about students using calculators in class. They aren't really learning math! They are just pushing buttons!! Nowadays we realize that calculators are just tools that allow humans to use their own internal abilities more efficiently than without them. Maybe we aren't as good at multiplication tables as we used to be, but we get the work done a lot quicker.

AI will end up being the same: just a tool to let a competent person do a task more efficiently. Universities will need to accept that students are going to use it, and make that part of the "new normal." Just teach them how to use them in the best way possible.
How much data do you have to feed in to get a result? (That applies to non-fiction.) Is there any actual research involved? Or do you just put in "Battle of the Somme" and something comes out?
 