Well-written vs grammatically correct

Well written = a story or essay written in a manner that is enjoyable to read and is functionally correct. It leaves the reader with no unintended questions, raises the emotions the writer intended, and entertains or informs the reader of exactly what the writer intended.
 
I ghostwrite romantic fiction for other authors. In doing so, I've researched how best-selling authors write and developed a series of metrics to help me measure whether the prose does the work. Bear with me. I write whole novels, not stories. These are the areas I find carry the story:
1.) Planning. Yep, I plot, perhaps to a ridiculous degree. In my story plan, I work with the client's proposed word count. I choose the number of chapters needed to render the story and divide the word count by that number to get the word count per chapter, so I can render the:
2.) Structure—Each chapter has a specific job to do. Some people call this aspect story beats. This keeps the story on track with:
3.) Pacing—Readers want to see the story progress along its natural course. They also want to see the chapter progress, and to this end I divide the chapter into three parts and choose an emotion to govern each section, then escalate it to the next, more intense emotion for the following section. Etc. If you have structure and pacing down, you have conquered half the wrangling of the story.
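The planning arithmetic in step 1 can be sketched in a few lines. The figures here are hypothetical, chosen purely for illustration, not the author's actual numbers:

```python
# Hypothetical figures: an 80,000-word novel rendered in 40 chapters.
TARGET_WORDS = 80_000
CHAPTERS = 40

# Word budget for each chapter.
words_per_chapter = TARGET_WORDS // CHAPTERS

# Each chapter is divided into three sections, one emotion per section.
words_per_section = words_per_chapter // 3

print(words_per_chapter, words_per_section)
```

With these sample numbers, each chapter gets 2,000 words, and each emotional beat within it roughly 666.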
I work in Google Docs, which allows me to use a couple of grammar programs at once, so I make the grammar corrections as I write. I know people wait until afterward, and if that works for you, that's fine. I despise editing for grammar, so I reduce the need by working as I write.
After I finish the chapter, I look at these metrics:
a.) Lexical density—the percentage of words that carry meaning, as opposed to filler words. Best-selling authors regularly hit 52% lexical words.
b.) Pronoun percentage. This is my biggest problem, so I have to watch it. Less than 15% as calculated by Pro-Writing Aid.
c.) Pro-Writing Aid score, which measures spelling, grammar and writing clarity. 100% if possible.
d.) Auto-Crit score. Anything above 80 in the summary evaluation is good. It also points out problem areas to address, such as repeated words.
e.) Weak verb percentage as noted by the free online program Expresso-App.org. This should be less than 40%. This ensures your prose sparkles instead of reading flat.
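Metrics a.) and b.) are simple word-class ratios, and can be approximated in a few lines. This is only a rough sketch: the word lists below are tiny stand-ins, and the commercial tools (ProWritingAid, AutoCrit) use their own proprietary lists and parsers, so their percentages will differ:

```python
import re

# Tiny illustrative word lists; real tools use far larger, proprietary ones.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "at",
    "is", "was", "were", "be", "been", "it", "he", "she", "they", "that",
}
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they", "him", "her", "them"}

def text_metrics(text: str) -> dict:
    """Return rough lexical-density and pronoun percentages for a passage."""
    words = re.findall(r"[a-z']+", text.lower())
    lexical = [w for w in words if w not in FUNCTION_WORDS]
    return {
        "lexical_density": 100 * len(lexical) / len(words),
        "pronoun_pct": 100 * sum(w in PRONOUNS for w in words) / len(words),
    }

m = text_metrics("She walked to the old harbor and watched the grey ships depart.")
```

On this sample sentence the sketch reports roughly 58% lexical density and 8% pronouns; the point is the mechanism (content words over total words), not the exact figures.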

Readers, I've found, can overlook some things, but keeping the words and pacing engaging is what keeps them coming back for more.
I love Pro-Writer!
 
"Well written" to the readers: I was titillated and/or you didn't poke my no-go zones (gay, incest, shame, specific fetishes, whatever they can't handle).

Myself: Characters. Living, breathing, existing characters full stop.

Their thought patterns are congruent with who they are and drive their resultant actions.

I understand the technicals of a well-structured story that follows well-tested paths, but while I work on those, I will always prioritize characters who seem to "exist," except for the small annoyance that they don't have corporeal form.

If I "remember" a character I never met and they spring to mind seemingly without provocation, that was an exceptionally well-written story.
 
Nabokov insisted that good writing required three aspects. A writer needed to be:

A good teacher

An engaging storyteller

An enchanter
Quite so, though the original order (as put forth in Nabokov's essay "Good Readers and Good Writers") was:
There are three points of view from which a writer can be considered: he may be considered as a storyteller, as a teacher, and as an enchanter. A major writer combines these three—storyteller, teacher, enchanter—but it is the enchanter in him that predominates and makes him a major writer.

As to the role of the "teacher," contrasting it to that of the "storyteller," Nabokov says at the end of the essay:
A slightly different though not necessarily higher mind looks for the teacher in the writer. Propagandist, moralist, prophet—this is the rising sequence. We may go to the teacher not only for moral education but also for direct knowledge, for simple facts.
The teacher is the facet of the writer which imparts the "lesson" of a story to the reader.
 
Of the three, Nabokov thought enchanting the most important quality. Word magic, seducing the reader, luring you in, keeping you enthralled, wanting more.

I had never read that essay before. It's a good distillation of Nabokov's approach to fiction. The essay itself is a great example of the fun Nabokov has with words, maybe his best trait as a writer.

I agree with him about the importance of being an enchanter as a writer.
 
e.) Weak verb percentage as noted by the free online program Expresso-App.org. This should be less than 40%. This ensures your prose sparkles instead of reading flat.

Readers, I've found, can overlook some things, but keeping the words and pacing engaging is what keeps them coming back for more.
I looked at Expresso-App, and dropped 3,000 words into it. One of its indicators is "rare words" - defined as those outside the "most used 5,000 words in the English language."

When a program thinks the days of the week are "rare words" and calculates a rating score which counts them in, I'm going to have real problems taking any of its other scoring criteria on board - trying to parse what's meant to be a good writing indicator versus a bad one. That's too much work for me. It also included my characters' names in the same parameter count, at which point I gave up.
 
I looked at Expresso-App, and dropped 3,000 words into it. One of its indicators is "rare words" - defined as those outside the "most used 5,000 words in the English language."

When a program thinks the days of the week are "rare words" and calculates a rating score which counts them in, I'm going to have real problems taking any of its other scoring criteria on board - trying to parse what's meant to be a good writing indicator versus a bad one. That's too much work for me. It also included my characters' names in the same parameter count, at which point I gave up.
And an 'Entity substitution' means a pronoun. Lots of play value there. I'm not sure how much return I'll get from studying what its metrics tell me. I can see where it's trying to go, but not sure how useful it is yet.
 
And an 'Entity substitution' means a pronoun. Lots of play value there. I'm not sure how much return I'll get from studying what its metrics tell me. I can see where it's trying to go, but not sure how useful it is yet.
Yes, that was my thought. I'm sure if you puzzled at it, it would become clearer, but as it stands, its own clutter gets in the way of zeroing in on what's useful when I'm trying to analyse my clutter.

I'm lazy. I want clever shit software like this to draw me some pictures or something, to focus my attention. I don't have the patience to analyse the analysis. And when you drill into the detail, when it's saying you've done something evil, it says, "Well you did it four times". And I say, what the fuck, in three-thousand words, that's not important; that's not even noise.

At least 99.8% of my sentences are active voice, so that's a plus.
 
And when you drill into the detail, when it's saying you've done something evil, it says, "Well you did it four times". And I say, what the fuck, in three-thousand words, that's not important; that's not even noise.
To put it into a metaphor... when the whistle blows and you are up the ladder out of the trenches, the very last thing you should be heeding is any of this stuff. The flow of the prose is the thing. As the Field Marshal noted, no battle plan survives contact with the enemy.
 
To put it into a metaphor... when the whistle blows and you are up the ladder out of the trenches, the very last thing you should be heeding is any of this stuff. The flow of the prose is the thing. As the Field Marshal noted, no battle plan survives contact with the enemy.

Or, to quote Mike Tyson, "Everybody has a plan, until they get punched in the face."
 
I ain’t the bestest at grammar. And my, punctuation; could (be better?). Also my vocabulary and spelling are both a little out there.

But…

I still get compliments about stories being well-written.

I know bad grammar and, misplaced, commas are very, frustrating for some. But what does well-written mean to you?

To me it means evocative, drawing you in, making you feel or care (or both), painting a vivid picture, making you think, reconsider maybe. Basically leading to you wanting to read more.

I’m not advocating for anarchy with writing rules, but what does well-written mean to you?

Em
Because I'm one of those who is a stickler for grammar, especially spelling, I love it when both are true of a story. For me, it means that not only did the author think about more than just the words they used, but they were also intentional with them. Every word, bit of context, description, detail, etc., has a part to play. With a story that has me at the edge of my seat and my eyes glued to the page/screen, the LAST thing I want to see is a missing or misspelled word. That's when I lose my focus on the story and start paying more attention to grammar than content.
 
A superior story is like a well-finished piece of cabinetry.

Not just pleasingly and functionally designed, but well sanded, with no rough edges and an impeccable mortised structure: all evidence that the builder took great care with every detail of the creation.
 
I looked at Expresso-App, and dropped 3,000 words into it. One of its indicators is "rare words" - defined as those outside the "most used 5,000 words in the English language."

When a program thinks the days of the week are "rare words" and calculates a rating score which counts them in, I'm going to have real problems taking any of its other scoring criteria on board - trying to parse what's meant to be a good writing indicator versus a bad one. That's too much work for me. It also included my characters' names in the same parameter count, at which point I gave up.
Okay, since this thread picked up on Expresso, please let me express (hah!) a few words about it.

1.) The guy who created this program, Mikhail Panko, is a computer programmer, not an English major. He thinks like a programmer. This is what he says about Expresso:

Expresso is an interactive tool to analyze, edit and compare text styles in English created by Mikhail Panko. The first version was built in 2014 on top of the Natural Language Toolkit library. In 2019 Expresso was upgraded to use spaCy — a powerful and fast library for advanced Natural Language Processing, which relies on neural network models. The new NLP library allowed to speed up text analysis, improve metrics accuracy, and add new metrics based on syntactic structure of sentences.

As a professional writer, I find this highly interesting, because my job is to provide a marketable novel in a short amount of time (usually one a month, and please hold the snickers back; the vast majority of my clients rate my work as five-star work), and I'd like to know what works before I metaphorically put pen to page. I don't have time to thrash through what makes a particular story work. I'm not saying that everyone needs to adopt this process. Writing is a journey, not a destination. You do you.

2.) And this is most important, the creator of the app did not provide any metrics for using it because he is not an English major. The app provides raw data, and you can take it the way you want to. I get my metrics from my personal study of best-selling authors and how they use words. All I can speak from is results. I have a wait list for clients who want to get on my writing schedule, and other clients who refuse to let me go.

3.) The terms "rare" and "common" are specific, and probably indicators of programming code that reads: "this word has been identified as one of X number of common words in a common-words list, and everything not on this list is rare." It doesn't mean that the word is rarely used.

4.) I find the "weak verb" percentage the most useful part of this program. I was not an English major, and I don't have an MFA. Everything I learned about writing came from the school of hard knocks.
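Point 3 describes a plain set-membership test. A minimal sketch of how such a "rare word" check likely works (the word list here is a tiny hypothetical stand-in for Expresso's 5,000-word list, which is not public):

```python
# Hypothetical frequency list; the real tool ships its own 5,000-word list.
COMMON_WORDS = {"the", "of", "and", "she", "said", "walked", "that", "day"}

def is_rare(word: str) -> bool:
    # Anything absent from the list gets flagged "rare" -- which is how
    # days of the week and character names end up counted against a score
    # if the list happens to omit them.
    return word.lower() not in COMMON_WORDS

flags = [is_rare("Tuesday"), is_rare("Evangeline"), is_rare("the")]
```

Under this sketch both "Tuesday" and the (made-up) character name "Evangeline" come back rare, while "the" does not, which matches the complaint earlier in the thread.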

ChatGPT can do that, yet not everyone admires its product.
Observations about ChatGPT. I've been working with ChatGPT, and have recently begun to use QuickWrite more. I like Chat for the more technical aspects of writing, such as summarizing a text, writing location descriptions, and bouncing ideas off it for my science fiction writing. Anthropomorphizing Chat, it seems to get cranky sometimes. I know, I know, it's not possible. But it has its "Sorry, I can't do that, Dave" moments. For instance, when I entered a specific prompt, it told me it couldn't process it because it was too specific. When I used the exact same prompt in QuickWrite, it produced results that met my specifications. Then I popped the text QuickWrite produced into Chat and asked it to analyze its prose effectiveness. Chat said (paraphrasing here) it hit all the elements for effective prose.

When I informed it that another AI wrote it, Chat's response was disbelief:

"It seems that the previous text was not generated by the prompt you provided."

When I asked why Chat couldn't follow the prompt and another AI could, Chat replied:

"I apologize for any confusion caused. As an AI language model, ChatGPT should be capable of following the provided prompt and generating a response accordingly. (How many lies, Chat? How many lies?) However, due to the complexity of language models and the variations in training and algorithms used by different AI systems, there can be differences in performance and output."

So Chat told me, probably without meaning to, that it should have followed the prompt as written, but didn't, and didn't say why. However, and this is most interesting, when I returned to Chat later in the day, it overcompensated with a description containing many more details than usual, without being prompted for them. Yes, something very weird is going on with ChatGPT.
 
Expresso is an interactive tool to analyze, edit and compare text styles in English created by Mikhail Panko. The first version was built in 2014 on top of the Natural Language Toolkit library. In 2019 Expresso was upgraded to use spaCy — a powerful and fast library for advanced Natural Language Processing, which relies on neural network models. The new NLP library allowed to speed up text analysis, improve metrics accuracy, and add new metrics based on syntactic structure of sentences.
There you go. An analytic tool that you need to analyse to benefit from - this stuff is meant to help me, not make my life harder.

I'll stick to what I know best, knowing how I write and damn the statistics!
 
I was criticised for characters whose grammar was poor. People don't speak grammatically correctly. If you write characters who speak with perfect grammar, they'd better be stodgy old literature lecturers.
Or non-native speakers who've learned "proper" English but haven't yet figured out exactly how much informality they can get away with.
 
When I informed it that another AI wrote it, Chat's response was disbelief:

"It seems that the previous text was not generated by the prompt you provided."

More precisely, GPT's response was to mimic the way in which humans might express disbelief.

It doesn't have the capacity to believe or disbelieve such things itself. It's not doing a fancy analysis of the text you got to figure out whether it really could have been generated by that prompt. It just knows that sometimes when somebody says something like your prompt, the answer is some variation on "I believe that", and sometimes the answer is some variation on "I don't believe that", and it randomly chooses one of those (with probability based on how common each of those replies is). Its job is to tell you what an answer might look like, not to tell you what the right answer is.

It gets the answer right quite often, because it's been trained on a huge amount of data and for any given question there's a good chance that it's seen something like it, close enough that it can copy that answer. But it's not guaranteed.

When I asked why Chat couldn't follow the prompt and another AI could, Chat replied:

"I apologize for any confusion caused. As an AI language model, ChatGPT should be capable of following the provided prompt and generating a response accordingly. (How many lies, Chat? How many lies?) However, due to the complexity of language models and the variations in training and algorithms used by different AI systems, there can be differences in performance and output."

So Chat told me, probably without meaning to,

Absolutely without meaning to. GPT does not mean anything by its answers. It has no aims beyond mimicking the kind of texts/conversations it's seen in its training (plus a filtering layer that attempts to prevent it from producing naughty outputs, not very reliably).

However, and this is most interesting, when I returned to Chat later in the day, it overcompensated with a description with many more details than usual without being prompted for them. Yes, something very weird is going on with ChatGPT.

It has a random component to the way it generates text, and it doesn't just randomise one word at a time; it does it at levels of style and structure too. Sometimes it's going to mimic the McGonagalls of the world* and sometimes it's going to mimic the Hemingways, in the same kind of way that when you throw two dice sometimes you're going to get boxcars and sometimes you're going to get snake eyes. That it happened to pick a verbose style on that occasion doesn't mean it was feeling bad about its previous failure and trying to compensate for it; it's just the way the dice came up.
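The "dice" in this analogy can be made concrete: at each step the model produces a probability distribution over possible next tokens and samples from it. A toy sketch of that sampling step (the vocabulary and probabilities here are made up for illustration; a real model weighs tens of thousands of tokens at every step):

```python
import random

# Made-up next-token distribution standing in for a model's output.
next_token_probs = {"terse": 0.2, "ornate": 0.3, "plain": 0.5}

def sample_token(probs: dict, rng: random.Random) -> str:
    """Draw one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed only so the sketch is repeatable
styles = [sample_token(next_token_probs, rng) for _ in range(5)]
```

Run with a different seed (or none), and the sequence of choices changes, which is the whole point: a verbose response on one occasion and a terse one on another is the dice coming up differently, not the model compensating for anything.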

People love to personify anything that appears remotely human, and lots of things that aren't even that. We assign names and personalities to dolls and cars, we get angry with our dishwashers when it feels like they're trying to fuck with us, even though we know none of those things have a mind. Even for something that we built with our own hands, all it takes is a couple of googly eyes on it and suddenly we feel this urge to smile at it and say hi as if it could reciprocate.

With GPT, because most people have no idea what's going on under the hood and because it talks in ways a human might talk, it's very easy to start attributing human motives to it, assume that it's saying these things for the same reasons that a human would have said such things. But that's not what's going on.

*William Topaz, that is.
 
Thank you for mansplaining to me what I knew. And if that sounds harsh, your response came off as "that silly woman is anthropomorphizing a machine."

I certainly understand that it produces predictive text in response to input text, and that the creators have trained it to return responses that humans prefer.

I also know from experience that ChatGPT returns text that is too mechanical. It doesn't seem to have a choice in that. The sentence lengths are all the same, and it rejects specific prompts to correct this. Without a specific prompt it will "tell" rather than "show." Even with a prompt not to "tell," it still does a fair amount of "telling." So no, it doesn't sometimes "mimic the McGonagalls of the world" and sometimes "mimic the Hemingways."

I was sharing my experiences with it, and if it doesn't work for you, that's fine.
Here is a good interview with the creators of ChatGPT. https://www.technologyreview.com/20...-story-oral-history-how-chatgpt-built-openai/
 
Thank you for mansplaining to me what I knew.

All I know of you, all I can react to, is what you write here.

When you write "probably without meaning to", the obvious interpretation is that you aren't sure whether "meaning to" is impossible. People don't generally hedge with "probably not" when they mean "certainly not", especially not professional writers who communicate for a living.

Likewise, when you talk about GPT "overcompensating" after previously failing to respond to a prompt, the obvious interpretation is that you do believe it is trying to compensate for that earlier failing (which would be a plausible interpretation of a human behaving that way, but not for GPT). If that is not what you intended by that word, I'm not at all sure what you did mean by it.

I responded on the basis of what you've written. If I underestimated your knowledge of the technology, consider the possibility that your words didn't clearly express that understanding.

And if that sounds harsh, your response came off as "that silly woman is anthropomorphizing a machine."

I did think you were anthropomorphising it, yes. Both because anthropomorphising things is a near-universal human tendency - you'll notice I talked about how "we" do these things, a pronoun that includes myself - and because you, specifically, used anthropomorphic terminology like "disbelief" and the examples noted above.

But I'm not at all clear on where you're getting the "silly woman" bit from. Could you elaborate on why it reads that way to you?

Without a specific prompt it will "tell" rather than "show." Even with a prompt not to "tell" it still does a fair amount of "telling." So no, it doesn't write like " McGonagalls of the world* and sometimes it's going to mimic the Hemingways."

You referred to GPT giving "a description with many more details than usual", apparently suggesting that it was doing this to compensate for an earlier failure. I invoked McGonagall and Hemingway as examples of two writers who are at opposite ends of the descriptivity spectrum; feel free to substitute "sometimes it picks a more descriptive style and sometimes less descriptive" if those specific examples are objectionable.

I was sharing my experiences with it, and if it doesn't work for you, that's fine.

Really not sure what part of my post you're responding to here. I don't think I'd ventured much of an opinion on the merits of GPT as a writing tool in my response here. (In other threads about GPT, I certainly have done, but not here in response to you.)
 