astuffedshirt_perv
Literotica Guru
Joined: Jun 22, 2002
Posts: 1,325
Rather than resurrect the old thread, here we are 7 months later with a newer iteration of writing bots.
OpenAI just released GPT-3. You can read samples at this link. Full paper available here. Or search GitHub or Reddit for GPT-3.
Some lines from the executive summary (emphasis added by me):
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. ... Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. ...Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans.
Good news, I guess, is that the horsepower needed to make this go is beyond most home hobbyists. Bad news, I guess, is that handing Amazon this kind of tool could end the self-publishing game. Amazon already tracks Kindle romance novels down to which paragraph the characters should first kiss.
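For anyone wondering what "few-shot" means in that abstract: instead of retraining the model on your task, you just show it a couple of worked examples in the prompt itself and let it continue the pattern. A rough sketch of how such a prompt gets built (the sentiment-labeling examples are hypothetical, and no model is actually called here):

```python
# Sketch of "few-shot" prompting: a handful of labeled examples are
# concatenated into the prompt, followed by an unlabeled query for the
# model to complete. No fine-tuning, no weight updates.

def build_few_shot_prompt(examples, query):
    """Join (text, label) pairs and a final unlabeled query into one prompt."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The plot dragged and the prose was flat.", "negative"),
    ("Couldn't put it down; loved every chapter.", "positive"),
]
prompt = build_few_shot_prompt(examples, "A fun, breezy read.")
print(prompt)
```

The model's only job is to continue the text after the final "Sentiment:", which is why the paper can test it "in the few-shot setting" without any task-specific training.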