Seeking Advice - German Literotica Community Facing Mass AI Rejections

The problem isn't really the AI itself, but the breakdown in communication between the site operator and its users. If a dialogue could be established here, a quick solution could be found. But silence is the order of the day. So the damage runs its course, authors drift away, and the site will suffer for it. On the other hand, this is how new platforms emerge; perhaps we simply have to look at it that way.
 
... But if it were only marketed for those purposes, the manufacturers would go broke, so they mislead people about what it is and isn't fit for.
Good news: they are broke. It just doesn't matter during a bubble. OpenAI, the leading company, has revenue of $12 billion annually. Revenue, not profit; there's no profit. They're valued at $500 billion. They only continue to operate because Nvidia props them up with huge investments, which some analysts would argue is done to prop up Nvidia chip prices.

--Annie "Just learned those numbers this morning"
 
As if misleading marketing were done exclusively by tech companies...

Where did I ever say it was?

And you said it right there:

makes it really easy for people to generate untrustworthy-but-convincing content at high volume.

It's not the tech. It's the people using it.
It is, in fact, both. If I make cluster bombs optimised for blowing off children's hands, I don't get to absolve myself by telling myself that "it's not the tech, it's the people using it".

When the design of the tech makes it particularly easy for shitty people to use it for shitty purposes - relative to any good it might do - then the designers and the promoters of the tech absolutely do bear responsibility. (The shitty people using it also get to bear responsibility. There is no law of ethics that says only one person can be blamed.)

When the design of the tech also makes it easy for well-meaning people to use it in good faith and get dangerous but plausible-seeming garbage, then the designers and the promoters bear responsibility for that too.

You don't need to be enthusiastic about AI. You simply need to acknowledge that it is not going away anytime soon, instead of playing the blame-game.
Once again, you are overlooking the power of "both".
 
Good news: they are broke. It just doesn't matter during a bubble. OpenAI, the leading company, has revenue of $12 billion annually. Revenue, not profit; there's no profit. They're valued at $500 billion. They only continue to operate because Nvidia props them up with huge investments, which some analysts would argue is done to prop up Nvidia chip prices.

--Annie "Just learned those numbers this morning"
Yep. Unfortunately the splash radius when this bubble bursts is going to be huge.
 
Where did I ever say it was?
If it's not "a special feature" of the "techbros", I can't see why they should be held to a different standard than any other company on this planet - concerning marketing, of course.

It is, in fact, both. If I make cluster bombs optimised for blowing off children's hands, I don't get to absolve myself by telling myself that "it's not the tech, it's the people using it".
The comparison is highly fallacious. Or - can you tell me any good use for cluster bombs? Fishing?

When the design of the tech makes it particularly easy for shitty people to use it for shitty purposes - relative to any good it might do - then the designers and the promoters of the tech absolutely do bear responsibility. (The shitty people using it also get to bear responsibility. There is no law of ethics that says only one person can be blamed.)

When the design of the tech also makes it easy for well-meaning people to use it in good faith and get dangerous but plausible-seeming garbage, then the designers and the promoters bear responsibility for that too.
You can use any tech for bad purposes, be it computers, cars, YouTube, you name it...

Once again, you are overlooking the power of "both".
Once again, the blame-game seems to be more important to you than facing reality. AI is here to stay and is not going away anytime soon. We all have to learn to deal with it.

Sure, in the worst-case scenario they might have opened Pandora's box. So? If "they take responsibility for it", will it change anything?

But in the end, you and I will probably have to agree to disagree about our views on this topic. I'll leave the last word on the matter to you.
 

To be honest, AI has made my life easier, both at work and in writing my stories. Just to clarify: I write every single word of my stories myself, but I do use AI for research, analysis, feedback...
To come full circle: perhaps you really did use AI, and typed in or otherwise altered your writing based on the AI feedback, unknowingly or unconsciously perhaps. But this comment changes, at the very least, my perception of your innocence here.

But clearly there is still an issue, since there are so many others in the German author community having trouble whose claims of no AI really are no AI.
 
To come full circle: perhaps you really did use AI, and typed in or otherwise altered your writing based on the AI feedback, unknowingly or unconsciously perhaps. But this comment changes, at the very least, my perception of your innocence here.

But clearly there is still an issue, since there are so many others in the German author community having trouble whose claims of no AI really are no AI.
Of my ... innocence? They are still my words, my thoughts - my story. I conceived the sentences, the plot. I didn't enter some prompt and copy the output of the machine into my story. Was I influenced by it? Sure. But then, where is the line drawn? Do we need to become some sort of ... human purists?

"Fun" fact: Lit states that it wants "stories written by humans, for humans". But it is a machine that decides, if a story is "human enough". Not just that: Some users (in the german part of the forum) have found a "solution" to the AI-rejection-problem - by using so-called "humanizer-programs" to go over their stories, before submittig them (or after having them rejected). Meaning - they let a machine write "human" sentences for them, to pass the test of another machine. And this, takes Lits claim to absurdity...
 
"Fun" fact: Lit states that it wants "stories written by humans, for humans". But it is a machine that decides, if a story is "human enough". Not just that: Some users (in the german part of the forum) have found a "solution" to the AI-rejection-problem - by using so-called "humanizer-programs" to go over their stories, before submittig them (or after having them rejected). Meaning - they let a machine write "human" sentences for them, to pass the test of another machine. And this, takes Lits claim to absurdity.
This is beyond sad. The irony is lost on the Luddites. They won’t change their minds, not even when the ship sinks and they find themselves swallowed by the beast.
 
If it's not "a special feature" of the "techbros", I can't see why they should be held to a different standard than any other company on this planet - concerning marketing, of course.

Absolutely. Anybody who misrepresents their product as badly as the current wave of LLM marketers misrepresent theirs should be shamed, and probably jailed.

The comparison is highly fallacious. Or - can you tell me any good use for cluster bombs? Fishing?
See, if you really believed AI was good at research you wouldn't be asking me that question.

You can use any tech for bad purposes, be it computers, cars, YouTube, you name it...
 
My new story section has now been rejected for the second time. So I will now stop my work in German erotica. It was a wonderful year and I gained a lot of experience, so it wasn't all in vain.

I just feel sorry for my readers. But all I can do is point out the problem and hope for change.

I wish you all the best, and thank you once again for trying to help.

Regards, Jan
 
My new story section has now been rejected for the second time. So I will now stop my work in German erotica. It was a wonderful year and I gained a lot of experience, so it wasn't all in vain.

I just feel sorry for my readers. But all I can do is point out the problem and hope for change.

I wish you all the best, and thank you once again for trying to help.

Regards, Jan

I really hope you will reconsider, let the current mess settle for a month or so, and then try again :( But either way, take care!
 
"Fun" fact: Lit states that it wants "stories written by humans, for humans". But it is a machine that decides, if a story is "human enough". Not just that: Some users (in the german part of the forum) have found a "solution" to the AI-rejection-problem - by using so-called "humanizer-programs" to go over their stories, before submittig them (or after having them rejected). Meaning - they let a machine write "human" sentences for them, to pass the test of another machine. And this, takes Lits claim to absurdity...
Do you know this for certain? Because if it's true, it's beyond sad. It means that, as much as I support the no-AI policy, trying to detect it is a losing strategy, and it kinda throws a wrench into the claim that Lit's AI-detector does a good job, which I also disputed before.

But what's the alternative?
 
The alternative would be a warning notice stating that the text might have been written with AI. This would give readers the choice and allow them to form their own opinion. The current situation, however, degenerates into book burning like in the Third Reich.
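To make that "label instead of reject" idea concrete, here is a minimal sketch. Everything in it is hypothetical: the AI-likelihood score is assumed to come from whatever detector the site already runs, and the threshold and notice wording are placeholders, not anything Lit actually uses.

```python
# Hypothetical sketch only: detector score, threshold, and notice text are
# assumptions for illustration, not Literotica's actual moderation pipeline.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModerationResult:
    publish: bool
    notice: Optional[str]  # reader-facing warning, if any


def moderate(detector_score: float, warn_at: float = 0.5) -> ModerationResult:
    """Publish every submission; attach a notice instead of rejecting when the
    (assumed) AI-likelihood score crosses the warning threshold."""
    if detector_score >= warn_at:
        return ModerationResult(
            publish=True,
            notice=("Note: an automated check suggests parts of this text may be "
                    "AI-assisted. Read it and judge for yourself."),
        )
    return ModerationResult(publish=True, notice=None)


# Example: a score of 0.72 is published with a warning rather than rejected.
print(moderate(0.72))
```

The point of the sketch is simply that the detector's verdict becomes information shown to the reader rather than a gatekeeping decision.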
 