Using AI as a reviewer to improve my writing

Way back when, Google Translate used to have a provision in its Terms of Use that said, effectively, everything you uploaded became their property. I had to point out to clients several times that all those confidential client memos they were running through Translate weren't quite as confidential as they hoped.

Same for DeepL. It says the professional version won't use your texts, but I've seen evidence to the contrary.
Two people can keep a secret - if one of them is dead.
 
AI writing tools scare me. So much of what made the greats great was nuance and perspective. If authors sacrifice their integrity at the altar of grammatical perfection and computer algorithms, we’ll end up with one story. Computers will write one perfect story according to analysis and language use, and that will be it. There will never be mention of that awkward moment when two flies start fucking on a protagonist's elbow while she asks ChatGPT for masturbation suggestions.
 
I think the two things go hand in hand. The more you pay attention to other people, the easier it should be to empathize with them. The more you empathize, the more interested you will become in watching them.
And you come to realise that everyone has a story to tell. And frequently that story will break you.

Last night I was speaking to the woman we call my Mum's evil twin. She's known me since the day I was born; I used to run around with her son when we were six and four years old.

Last night was the first time she ever told me that he was not her eldest, and that her eldest lived less than a day.

Everyone has a story, and often there just aren't enough tears for it, so they keep it locked away so it can't hurt them.
 
Scientifically, emotions are biochemical responses triggered by conditioning. Some are encoded in our genes, while others are shaped by the environment. Free will is a myth.

A kitten separated from its mother at birth will still instinctively lick its paw to groom its head or bury its waste in sand. If behavioral knowledge can be encoded in RNA helices, why not in digital code?

Humans are easy to predict once you get what makes them tick. They are quite transparent to the keen observer. It doesn’t take profound empathy to figure out why some troubled soul, starved of validation, might dedicate their free time to a platform where they can excel and feel acknowledged.

Most people prefer the comfort of illusion to the discomfort of truth. They are conditioned to participate in a relentless pursuit of status and material possessions, projecting a facade of happiness and artificial fulfillment.

Should a person on the spectrum, possessing extraordinary computational skills but unable to tie their shoes, be considered intelligent?

Could AI, in theory, ever be creative or think beyond mere calculation and mimicry? The answer is yes. But for now, AI development is driven by financial incentives. The software market is worth trillions, while creative writing generates only millions. This economic reality ensures that groundbreaking advancements in AI creativity remain a distant prospect.
Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.

And none of that intrinsic nonlinearity is encoded in our DNA.

> Free will is a myth.

What a strange world you inhabit.
 
Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.

And none of that intrinsic nonlinearity is encoded in our DNA.

> Free will is a myth.

What a strange world you inhabit.
I think he's using AI. Feeds it "Make me sound like Literotica Yoda" and his posts are what comes out.
 
Me:
Can you review stories on the Literotica website?


ChatGPT:
I can provide feedback or analysis if you share the text or give me a summary of the story you’re referring to, but I can’t directly browse or access content from Literotica or other similar websites. Let me know how you’d like to proceed!
I suspect this would be called a 'white lie' by a human. ChatGPT is telling you it honours the 'robots.txt' file on the site (that blocks it) but I'm sure the text has been harvested in the building of earlier revisions of the underlying model.

See https://www.literotica.com/robots.txt
 
I suspect this would be called a 'white lie' by a human. ChatGPT is telling you it honours the 'robots.txt' file on the site (that blocks it) but I'm sure the text has been harvested in the building of earlier revisions of the underlying model.

See https://www.literotica.com/robots.txt
Literotica is one of the largest publicly available, curated sources of English-language smut going. It's naive to assume it's not a significant part of the corpus of most of these LLMs.
 
I suspect this would be called a 'white lie' by a human. ChatGPT is telling you it honours the 'robots.txt' file on the site (that blocks it) but I'm sure the text has been harvested in the building of earlier revisions of the underlying model.

See https://www.literotica.com/robots.txt
I'm not sure if this is even about robots.txt (or other Literotica-specific exclusions) or just that it's running on a pre-trained model that doesn't have live access to websites.

What makes me suspicious is that GPT implementations often have a "was this response helpful?" button. The usual point of such things is to tweak the algorithm - give more of the responses people find helpful, fewer of the ones they don't - and it's hard to see how that could be accomplished without feeding back some of the session content.

Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.

The fact that many humans can write computer programs is pretty good evidence of that. We have evolved for flexibility, the capability to learn new things and go new places.
 
Scientifically, emotions are biochemical responses triggered by conditioning. Some are encoded in our genes, while others are shaped by the environment. Free will is a myth.

A kitten separated from its mother at birth will still instinctively lick its paw to groom its head or bury its waste in sand. If behavioral knowledge can be encoded in RNA helices, why not in digital code?

Humans are easy to predict once you get what makes them tick. They are quite transparent to the keen observer. It doesn’t take profound empathy to figure out why some troubled soul, starved of validation, might dedicate their free time to a platform where they can excel and feel acknowledged.

Most people prefer the comfort of illusion to the discomfort of truth. They are conditioned to participate in a relentless pursuit of status and material possessions, projecting a facade of happiness and artificial fulfillment.

Should a person on the spectrum, possessing extraordinary computational skills but unable to tie their shoes, be considered intelligent?

Could AI, in theory, ever be creative or think beyond mere calculation and mimicry? The answer is yes. But for now, AI development is driven by financial incentives. The software market is worth trillions, while creative writing generates only millions. This economic reality ensures that groundbreaking advancements in AI creativity remain a distant prospect.


This is why you're not a writer.
 
The fact that many humans can write computer programs is pretty good evidence of that. We have evolved for flexibility, the capability to learn new things and go new places.
Nah, my coding bugs were all pre-ordained.

Apparently...
 
Programs lack free will, and their predictability renders them uninteresting.
I'm curious what you think that makes you then.

Here's my opinion. Humans and certain other highly evolved species have enough intelligence to bypass their instincts. Thus: free will. No need for any All-Father.

AI is a construct that is excellent at recombining things it already knows into things that appear novel but are clearly derivative when you know where to look. AI will not ever create anything new, nor will it ever attain free will because it is, ultimately, a tool. It just happens to be a very smart tool that occasionally tells you to glue crumpets to your horse in order to cure your dog's case of rabies.

Good luck with your dreams of the technological Singularity. I'll keep drawing in the dust with a stick, it's much more satisfying and I can always use the stick to light a fire.
 
Good luck with your dreams of the technological Singularity. I'll keep drawing in the dust with a stick, it's much more satisfying and I can always use the stick to light a fire.
Don't set fire to any stick while there are still idiots nearby that need hitting first.
 
Programs lack free will, and their predictability renders them uninteresting.
The human brain is a chaotic system, so not entirely predictable – individually.

As a generalised population we are usefully predictable and exploitable by the likes of Cambridge Analytica. And, unfortunately, they aren’t using AI to make our lives better by dynamically adjusting traffic lights.

A long term smoker has the free will to give up ‘cold turkey’. Many of them fail in their free will choice. An AI somewhere could tell you what the percentage is. Feed in more data about an individual – their current environment, brain chemistry, their peer group, their genetic propensity for habit-forming behaviours, etc. – and it will be very accurate, but never perfect.

Humans are not computer programs. Even the most lowly of us "predictable" creatures can elect to behave entirely at odds with prior behaviours.
If an individual suddenly behaves erratically then you’d probably suspect something is wrong (hormonally, chemically, brain tumour, drugs, etc.). Or maybe it is just time for their mid-life crisis? Will they buy a motorcycle or a guitar? Their genetics and upbringing will have already decided that. The make and model will be harder to predict, but when Google sees them search, it may be able to influence their decision with a well placed ad.
 
There are those rare individuals, but the rest are quite predictable. For instance, I knew exactly who would react and how.
You must be really special! I don't know anyone else who can say they can predict people's reactions online. I bet you have loads of fans and tonnes of girlfriends!
 
🤔 I wonder what that’s supposed to mitigate. The forum is crawling with robots 24/7.

The first eight lines list the user-agents who should not crawl any of the site (anything below root "/"). So ChatGPT is denied access to everything.

The site allows (and wants) stories to be indexed by search engines like duckduckgo, so for everything else it only excludes the uninteresting internal site workings (API, Tags, CSS, etc.). Crawling the stories is allowed.

Robots.txt is an honesty system. Bots can ignore it if they are programmed that way.
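For the curious, the honesty system described above is simple enough to test yourself with Python's standard-library `urllib.robotparser`. The rules below are a hypothetical excerpt following the same pattern (deny one user-agent everything, block everyone else only from internal paths), not the site's actual file:

```python
# Sketch: how a well-behaved crawler consults robots.txt before fetching.
from urllib import robotparser

# Hypothetical rules mirroring the pattern described above.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /api/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The denied agent is blocked from everything below root "/".
print(rp.can_fetch("GPTBot", "https://example.com/stories/some-story"))      # False
# Everyone else may crawl stories, but not the internal workings.
print(rp.can_fetch("duckduckbot", "https://example.com/stories/some-story")) # True
print(rp.can_fetch("duckduckbot", "https://example.com/api/anything"))       # False
```

Note that nothing enforces any of this: `can_fetch` only tells a crawler what it *should* do. A bot that never asks is free to ignore the file entirely.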
 