NY Times Article Talks About AI Writing Style

Humor. Directness. Subtlety. This article has them in spades. It's not just great journalism, it's a thriving tapestry of words. Maybe the author takes time making their point -- maybe the point was to delve in and enjoy the journey we took along the way.

Now the question you've got to ask yourself is: did Areala-chan write the above herself, or did she ask an LLM to do it for her? ;)
 
Ha ha. Supposedly only the first paragraph of the article was AI. Your use of "tapestry" quoted the article. So no, not AI. Unless you told it to use "tapestry" in your directive.

Hopefully. :p
 
Good article overall, although I'd quibble with this part:

Within the A.I.’s training data, the em dash is more likely to appear in texts that have been marked as well-formed, high-quality prose. A.I. works by statistics. If this punctuation mark appears with increased frequency in high-quality writing, then one way to produce your own high-quality writing is to absolutely drench it with the punctuation mark in question. So now, no matter where it’s coming from or why, millions of people recognize the em dash as a sign of zero-effort, low-quality algorithmic slop.

The technical term for this is “overfitting,” and it’s something A.I. does a lot.

This isn't overfitting; it's a correlation-vs-causation error. I guess one could argue that overfitting is a subtype of correlation/causation error, but not all correlation/causation errors are caused by overfitting, and this one doesn't seem to be.

Example of overfitting: Mr. Smith, Mr. Jones, Mr. Brown and Mr. Pink go on holiday together. Mr. Smith and Mr. Jones both get food poisoning, Mr. Brown and Mr. Pink do not. From this we conclude that people with an "S" in their surname are more susceptible to food poisoning.

This rule accurately describes the data we've observed: all the people with an "S" in their surname got food poisoning, all the people without one didn't. But it would be a mistake to extrapolate from it, because four people isn't much data. It's very likely that if we looked at more holidaymakers, that correlation between the "S" and food poisoning would vanish. The "pattern" we've observed exists only in this particular data set.
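To make that concrete, here's a toy sketch in Python. The names and outcomes are the made-up ones from the example above, and the larger sample is equally made-up: the point is just that a rule can score 100% on four observations and drop to coin-flip accuracy the moment you show it anything new.

```python
# Toy illustration of overfitting: a rule that perfectly "explains"
# four observations but collapses on new data. Names and outcomes
# are the hypothetical ones from the example above.

def has_s(surname: str) -> bool:
    return "s" in surname.lower()

# The four holidaymakers: (surname, got_food_poisoning)
training = [("Smith", True), ("Jones", True), ("Brown", False), ("Pink", False)]

train_acc = sum(has_s(name) == sick for name, sick in training) / len(training)
print(f"Accuracy on the four holidaymakers: {train_acc:.0%}")  # 100%

# A larger (equally made-up) sample: the S/no-S split no longer lines
# up with who got sick, so the rule does no better than a coin flip.
new_data = [("Stevens", True), ("Marsh", False), ("Clark", True),
            ("Reed", False), ("Ross", False), ("Lee", True),
            ("Simmons", True), ("Grant", False)]

test_acc = sum(has_s(name) == sick for name, sick in new_data) / len(new_data)
print(f"Accuracy on new holidaymakers: {test_acc:.0%}")  # 50% -- chance
```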

Example of correlation vs. causation error (w/o overfitting): we observe that in a large city, over a whole year, the death rate for people in hospital is 20x higher than that for people outside hospital. We conclude that hospitals cause deaths and we should demolish them to save lives.

In this case, we have plenty of data, and the "pattern" we've observed is a real one. If we looked at the same city next year, or at a similar city elsewhere, we'd probably see something close to that 20:1 ratio. Here the error lies in our interpretation of the pattern: assuming that hospitals cause the deaths, rather than that sick people often go to hospital to die.
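You can watch this fallacy appear in a quick simulation. All the rates below are invented for illustration; the only thing that matters is that being seriously ill (the confounder) drives both hospital admission and death, while the hospital itself has zero causal effect in the model. The in-hospital death rate still comes out vastly higher:

```python
import random

random.seed(0)

# A made-up city: being seriously ill (the confounder) drives both hospital
# admission and death. The hospital itself has no causal effect here.
N = 1_000_000
hosp_deaths = hosp_total = out_deaths = out_total = 0

for _ in range(N):
    ill = random.random() < 0.02                       # 2% seriously ill
    in_hospital = random.random() < (0.8 if ill else 0.01)
    died = random.random() < (0.05 if ill else 0.001)  # illness kills,
                                                       # not the hospital
    if in_hospital:
        hosp_total += 1
        hosp_deaths += died
    else:
        out_total += 1
        out_deaths += died

rate_in = hosp_deaths / hosp_total
rate_out = out_deaths / out_total
print(f"death rate in hospital: {rate_in:.4f}")
print(f"death rate outside:     {rate_out:.4f}")
print(f"ratio: {rate_in / rate_out:.0f}x")  # ~20-30x, with zero causation
```

No amount of extra data fixes this, because the pattern is real; only a different interpretation (or a controlled experiment) does.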

Overfitting is a common error in ML, but there are well-known techniques for detecting and avoiding it (e.g. TTV splitting, i.e. dividing the data into train/test/validation sets) and I'd be very surprised if OpenAI/etc. weren't using them. The "hospitals cause deaths"/"em-dashes make good prose" error is a much harder one to mitigate.
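For anyone unfamiliar with the idea, here's a minimal sketch of how a held-out split catches overfitting (synthetic data; assumes numpy and scikit-learn are installed). An unconstrained decision tree memorizes pure noise, so it aces the data it was fit on while doing no better than a coin flip on the data it never saw, and that gap is the tell:

```python
# Minimal sketch of the train/test idea: hold some data back, and if the
# model aces the data it was fit on but does no better than chance on the
# held-out data, it has overfit.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))     # 200 samples, 10 meaningless features
y = rng.integers(0, 2, size=200)   # labels are pure coin flips

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = DecisionTreeClassifier()   # unconstrained tree: memorizes anything
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # 1.0 -- memorized
print("test accuracy: ", model.score(X_test, y_test))    # ~0.5 -- chance
```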
 
Well, this is sobering!!! Just as I finished the first paragraph, I highlighted it to copy here and remark that it was the best description of "AI writing" I had seen yet. Then, in the next sentence (first of next para), I'm told that I should have felt uneasy. That "tapestry" should have put me on alert. Oh, dear. What shall I do? I clearly have no ability to detect non-humans.
 
I'm told that I should have felt uneasy. That "tapestry" should have put me on alert.

Too many metaphors is the underlying issue. Most people just don't write like that. There are exceptions, of course: the unreadable authors we bail on by page 10 out of exhaustion.
 
It's not hospitals, it's beds.

Where does nearly everyone die? In a damned bed. They're deathtraps.
 
Last year, a survey by Britain’s Society of Authors found that 20 percent of fiction and 25 percent of nonfiction writers were allowing generative A.I. to do some of their work.
😟🙁😞
 
Ha ha. Supposedly only the first paragraph of the article was AI. Your use of "tapestry" quoted the article. So no, not AI. Unless you told it to use "tapestry" in your directive.

Hopefully. :p
"Tapestry" is in the first paragraph, and is talked about by the author as a give-away for AI.
 
I was going to complain that many people are conceived in cars as well, but unfortunately cars are also one of the places where people all too often die, so the symmetry holds.
 
I noticed one of the books for sale on Amazon touted that it was written by a human. I guess that's something people would like to know now, since everything is being taken over by AI.
 