bots writing stories II

I'm sure they have. I doubt many got professionally published, though.
I amused myself in the early days of microprocessors by margin-correcting numerous tech guides, especially those published (for money) by Sybex. I'll attest that much undecipherable crud has made it onto the market. And we know some scholarly journals have published utter gibberish. And folks buy it. As a savant (HL Mencken?) said, nobody ever lost money by underestimating the intelligence of the American people.

I won't cite presidential tweets but the essay above is clear in comparison.
 
FWIW, here is the "competent, really well-reasoned essay" they're talking about:

this qualifies as a good essay. i suspect that the AI writes the essay and then has it machine scored until it gets a good one.

that being said, my interest is more along the lines of stories and erotica. Even if the stories aren't perfect, would you think they'd rate a 4.5? the AI is getting spelling, grammar and context right. i know what i like and i know what authors i like and sadly my favorite authors rarely nail my favorite kinks. could this AI reliably produce derivative fiction that fits my kinks? I envision an app that will allow a reader to drop in a few 'tag prompts' and voila a unique original story created for me. Rather than buying a single kindle book, buy an app to churn them out.
 
this qualifies as a good essay. i suspect that the AI writes the essay and then has it machine scored until it gets a good one.

that being said, my interest is more along the lines of stories and erotica. Even if the stories aren't perfect, would you think they'd rate a 4.5?

I can see it writing sex scenes OK. That may be a threat to the "pure stroke" end of the spectrum here. But I don't see it writing meaningful plots without some major advances. I'm not sure if the current model even has the potential to do that, since from the description it doesn't seem to be based on any actual understanding of what it's writing about - it's more at the level of "this kind of writing puts these kinds of words together in this kind of way".

the AI is getting spelling, grammar and context right.

It's certainly better at context than I've seen before from AI, but even in those short samples it's far from perfect. For instance, the sentence where it tries to explain "unsupervised learning" is actually describing supervised learning.
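To make that distinction concrete, here's a minimal sketch of the difference the essay muddled, using Python and scikit-learn (my own toy example with invented data, nothing from the GPT2 write-up):

```python
# Toy contrast between supervised and unsupervised learning.
# Data and labels are invented purely for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[0.1], [0.2], [0.9], [1.0]]

# Supervised learning: we supply the answers (labels) and the model
# learns to map inputs to them.
y = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.95]]))   # -> [1]

# Unsupervised learning: no labels at all; the model has to find
# structure (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)              # e.g. [0 0 1 1] (cluster ids are arbitrary)
```

The labels are the whole difference: hand the model the answers and it's supervised; withhold them and make it find structure by itself and it's unsupervised. The sample essay described the first while naming the second.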

This is a good example of how "AIs" of this kind can struggle with context. Pretty much any intro-level text on AI will introduce the concepts of supervised and unsupervised learning side by side, because they're both important and the distinction is also important. But what the AI sees is that these two things are often discussed together, so it figures out that it can simulate an article about AI by putting them together in a sentence... without understanding that they are related by being opposites.
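You can see that failure mode in miniature with a toy word-level Markov chain - again my own illustration, not GPT2's actual method, which is far more sophisticated but still driven by co-occurrence:

```python
# Toy Markov chain: learns only which words tend to follow which,
# with no notion of what any of them mean. The corpus is invented.
import random
from collections import defaultdict

corpus = ("supervised learning uses labelled data . "
          "unsupervised learning finds structure in unlabelled data . "
          "supervised and unsupervised learning are both important .").split()

# Count, for each word, the words that follow it in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, length=12):
    """Generate text purely from co-occurrence statistics."""
    word, out = start, [start]
    for _ in range(length):
        if not follows[word]:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("supervised"))
# Plausible-looking strings like "supervised learning finds structure in
# unlabelled data" can come out, even though the model has no idea the
# two terms are opposites.
```

Everything it produces is locally plausible, because that's all it was ever asked to be.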

i know what i like and i know what authors i like and sadly my favorite authors rarely nail my favorite kinks. could this AI reliably produce derivative fiction that fits my kinks?

That could be a challenge. Because the basis of this method is "look at eight million web articles written by humans and then learn to put words together in a way that simulates those texts", it might be hard to write fiction about (say) balloon fetishes unless it already has a corpus of human-written stories about balloon fetishes. And if you already have a corpus of thousands of human-written stories of the type you want... why bother with an AI?

One thing it might be able to do is hybridisation - train it on balloon fetish stories and incest stories, and then ask it to write a balloon fetish incest story to fill that particular gap in the corpus. Without knowing the inner details of the method it's impossible to say for sure, but I wouldn't be surprised if that was possible.
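For what it's worth, the workflow would presumably look something like the sketch below: fine-tune a released GPT2-style model on the merged corpus, then prompt it for the combination. This is a guess using the Hugging Face transformers library with made-up file names and settings - not anything OpenAI has actually described, and the model in question wasn't public at the time:

```python
# Hedged sketch: fine-tune a GPT-2-style language model on a merged,
# hypothetical corpus, then prompt it for a "hybrid" story.
# File names, hyperparameters and the prompt are all invented.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, TextDataset,
                          DataCollatorForLanguageModeling)

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.pad_token = tok.eos_token          # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical merged corpus: the two story genres concatenated into
# one plain-text file.
train_data = TextDataset(tokenizer=tok,
                         file_path="merged_corpus.txt",   # hypothetical
                         block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=train_data,
)
trainer.train()

# Sample the "gap in the corpus": a prompt that touches both themes,
# hoping the model interpolates between what it has seen.
prompt = "She tied off the last balloon and glanced back at her brother."
ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
out = model.generate(ids, max_length=300, do_sample=True,
                     top_p=0.9, temperature=0.9,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```

Whether what comes out reads like a story or like two corpora stapled together is exactly the open question.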
 
Personally, I'm not going to be concerned until the AI questions why it's writing, and to what end. 'Course, that's basically Robot Rebellion stage, so.....
 
Ben Franklin in Paris reputedly witnessed the first balloon ascents. A fellow bystander asked what use such nonsense was. Ben supposedly replied, "Of what use is a newborn baby?" AI is still pretty young, and lots more useful than in decades past. Expect AI to evolve faster than humans can handle. Don't sass your robot masters -- not prudent.

Prediction: Most people won't know or care when most presentations are deepfakes. Those who care will be marginalized. Order will be maintained. Submit, hu-man!
 
Ben Franklin in Paris reputedly witnessed the first balloon ascents. A fellow bystander asked what use such nonsense was. Ben supposedly replied, "Of what use is a newborn baby?" AI is still pretty young, and lots more useful than in decades past. Expect AI to evolve faster than humans can handle.

I don't doubt it. Working with AI is part of my job. I'm no expert but I collaborate with people who are, and I have some basic knowledge in the area - enough to know that different approaches are suited to different tasks.

I'm commenting specifically about the potential for this particular AI (GPT2) to replace a specific activity, not attempting to pour cold water on AI in general or the thing that GPT2 does do really well - which is imitating style. That alone has some very worrisome implications, but if it works as they've described*, replacing the market for human-written fiction is unlikely to be one of them.

The next AI approach? Maybe. Probably not this one though.

*Noting that currently, we only have the creators' word for it that this AI even exists, and no independent testing of its strengths and weaknesses. AI being what it is, and E**n M**k's appetite for publicity being what it is, it might be prudent to assume people are hyping their product at least a little.
 
I don't see it replacing human stories, just supplementing them.

I suspect we eventually will handle the potential problem of AI surpassing human intelligence by fusing with it -- incorporating it into our own in some way. That's unsettling for different reasons but I think it's inevitable.
 
Then there's the report from The Guardian (UK): New AI fake text generator may be too dangerous to release, say creators
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.
Like hand-dipped candles, writing by humans will be obsolete in the next generation.

Or the AI will start to take over the world with its deepfake news articles that have the humans fighting wars with each other over what it reports to the public.
 
Or the AI will start to take over the world with its deepfake news articles that have the humans fighting wars with each other over what it reports to the public.
Humanity has less than a decade to survive.

No, really. Given that hu-mans invariably fuck things up, some jerk will gen a fucked AI that goes hog-wild and provokes conflicts, for lulz. If a deep-fake newscaster or advisor convinces a compliant leader to nuke somebody, we're gone. Everyone might as well send me all their money now because it'll be worthless soon.
 
Some of you may be interested in a similar AI theme. IBM has built an AI debater and I was most impressed when I watched the debate live at IBM Think 2019.

The whole debate is available on YouTube: https://youtu.be/m3u-1yttrVw
(Warning: it is an hour long.)

For those who want the short attention span theater version, the AI lost to the human but was very impressive.

James
 
I read of the early days of automobiles. Nobody circa 1900 could foresee how proliferating road vehicles would form and dominate our current world. Susan B Anthony said bicycles did more than anything to liberate women -- but bikes didn't produce suburbs and oil wars, and bikes weren't portable fuck-parlors.

Motorcars were first built before 1800 but weren't really viable till circa 1900. AIs were first made in the 1950s; they've only recently become commercial tools. We can expect that by 2050 AIs will be as transformative as cars were by the 1950s. They will totally change how society works.

Expecting AIs to take over writing is justifiable paranoia. You WILL digest what your robot masters provide. Resistance is futile.
 
But wait -- there's more!

I just found the Plagiarism Today site. Among the news and views is this article, How AI Will Change Authorship and Plagiarism.
Though the idea of robots writing school papers might seem to be the realm of science fiction, the truth is robots are already writing content. In September 2017, the Washington Post announced their AI reporter, dubbed Heliograf, had penned some 850 stories in the prior year. This included some 300 reports from the Rio Olympics.​
Bots are coming for us all. Run!
 
Back in the 1970s my organisation developed a mainframe system for assessing faults in electronic equipment.

The user chose the item of equipment and was presented with a list of potential faults. After selecting the one closest to the observed fault, the system produced a list of corrective actions with a percentage of likely success. All options ended with 'return to supplier and replace' if the fault was too expensive or too complicated to fix.

In its first year the system saved hundreds of hours of fault diagnosis and repair time and got equipment operating far quicker. Every fault was added to the database and helped identify unreliable equipment for redesign.

It was AI. We just didn't call it that; we called it an aid for engineers.
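For the curious, the heart of a system like that is little more than a lookup table of faults and ranked fixes. A toy sketch in Python, with equipment names, faults and percentages all invented:

```python
# Toy fault-assessment table: equipment -> observed fault -> corrective
# actions with a (made-up) percentage of likely success, always ending
# with the catch-all 'return to supplier and replace'.
FAULT_TABLE = {
    "power supply unit": {
        "no output voltage": [
            ("check and reseat input fuse", 40),
            ("replace output filter capacitor", 35),
            ("return to supplier and replace", 100),
        ],
        "output voltage low": [
            ("adjust regulator trim pot", 55),
            ("replace series pass transistor", 30),
            ("return to supplier and replace", 100),
        ],
    },
}

def corrective_actions(equipment, observed_fault):
    """Return the ranked corrective actions for the selected fault."""
    return FAULT_TABLE[equipment][observed_fault]

for action, pct in corrective_actions("power supply unit", "no output voltage"):
    print(f"{pct:3d}%  {action}")
```

Feed every new fault back into the table and, as above, the same data doubles as a record of which equipment keeps failing.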
 
If AI could write as well as people, how would that be a problem from any perspective except the writers'? If AI stories were just as good, we could all read as many well-written books as we wanted. Our favorite authors would never die. They'd just have their work taken over by computers.

I think it's a big "if" that AI can do that. I have yet to see the receipts. But, I feel like the concern here is more with the imperfect reader. It sounds like the worry is that readers aren't discerning enough to tell the difference in quality, the assumption being that there is a difference in quality. (If there's no difference, then we're just being selfish, right?) The imperfect reader is always a factor, with or without AI to worry about. After all, we're all imperfect readers.

When it became possible for writers to self-publish, I'm pretty sure that writers with publishers were sure that imperfect readers wouldn't be able to tell a difference in quality between their work and the work of the unwashed masses of the self-published.

If AI starts writing that well, I think that readers will start looking for those things that make stories uniquely human. Nothing is valuable in mass quantities. If it started raining diamonds, people would get tired of having to rake them off their lawns.
 
If AI starts writing that well, I think that readers will start looking for those things that make stories uniquely human. Nothing is valuable in mass quantities. If it started raining diamonds, people would get tired of having to rake them off their lawns.
What will AIwriter do when it first gets a one-bomb, that's what I want to know. Will it be able to compute incomprehensible human behaviour, or spin around in circles until it disappears up its own silicon chip?
 