Why Your Story Rating Idea Sucks

E.g. why do I have three times as many readers for Chapter 5 as for chapters 2, 3, or 4, and more for Chapter 9 than for any of them? I suspect the description helped the former (Ade gets fucked) and the tags helped the latter (they included 'fisting' and 'vaginal fisting'), but clearly liking one chapter didn't persuade many readers to go back and see what they'd missed. A fascinating insight, I feel.

Something as trivial as the position of the story on the New Stories view can make a big difference. New stories seem to go up in batches; if your story is last in its batch then it will sit there right at the top of the page for several hours before the next batch displaces it. But if it's first in its batch, it will go way down the page, sometimes even starting out on page 2. In my experience that kind of thing makes a BIG difference to the view counts.

Also, category. Your Chapter 9 is in Erotic Couplings, whereas most of the others are in Gay Male, and different categories get vastly different numbers.
 
I heartily agree with this one. Even without malicious bombing, new stories don't usually have enough votes to give a decent estimate of the long-term score.

It's set now for not listing a rating for the first nine votes [Later: this is off. The rating doesn't appear on the general "New" list at all, but I see that it shows on the hub New list. The author's page is where it has to get 10 votes to show. So the system isn't consistent]. It would be a tricky programming fix, I think, to have it show only once the story goes off the New List, no matter how many votes it has garnered by then. The change would come at varying rates for each story.
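
Conceptually, though, the rule itself is simple; here's a minimal sketch, assuming made-up field names rather than anything from the site's actual code:

    # Hypothetical sketch: reveal a story's rating only after it has dropped
    # off the New List, regardless of how many votes it has by then.
    def visible_rating(story):
        # 'on_new_list' and 'votes' are assumed fields, not the site's real schema
        if story["on_new_list"] or not story["votes"]:
            return None  # keep the score hidden
        return round(sum(story["votes"]) / len(story["votes"]), 2)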

Each time this comes up (every three weeks?) I float the two seemingly simple and most effective changes to make: either drop the red H system now that the actual rating is showing on the lists, or add a color marking--4.50 and above "sizzling" and maybe 4.25 to 4.49 "hot."
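
For what it's worth, that color marking is a trivial rule to code; a sketch only, using the cutoffs proposed above (the function name is just illustrative):

    # Hypothetical sketch of the proposed color marking.
    def color_mark(rating):
        if rating >= 4.50:
            return "sizzling"
        if rating >= 4.25:
            return "hot"
        return None  # no marking below 4.25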

Beyond that, the utility of discussing this every few weeks is close to zilch. The Web site authorities don't appear to be listening to suggestions on much of anything, or to care what the users think.
 
For me, it would be sufficient if the scores simply didn't show up on the New Story lists; that shouldn't be too difficult.

Do you mean on the hub lists? Because there are no scores given on the general New list now--at least in the view I get.
 
An advanced review?

As a help to authors, it would be nice to have an 'advanced review' option. It would not need to be part of the forward facing rating system, but it could provide useful feedback with a list of questions and room for comments and suggestions.

I've taught technical workshops to tradesmen for many years. At the end of the classes we always ask participants to fill out a review page. It has been instrumental in improving the programs.

It could be ignored or used by any reader and any items could be left blank. Feedback would only be given by those interested enough to take the time -- possibly mostly by other authors. Having so many categories of feedback allows the author to make an assessment of the quality of a review. The story index could have a link to the advanced ratings.


Sample:


Rate the following 1-5:

Overall rating:
Grammar/spelling:
Style:
Character development:
Scene description:
Flow:
Erotic content:
General storytelling:
Dialogue:

Rate the following yes/no:

Would you recommend this story to others?
Would you be interested in sequels?
Are you interested in other stories by this author?

Comments or recommendations:
 
That would be great for a critique site. This is not a critique site. This site is presented as a place where writers of all abilities can post their dirty stories simply to share them, if they like, and they are given some mechanisms to counter readers forcing writing critique on them (as hard as it is for some to understand, "developing their writing skills" is not a motivation for everyone posting a story here, nor do they have to be impressed by the unsolicited opinions of strangers on that score). Don't count on the Web site implementing critique-site functions until/unless they decide to completely change their mission.
 
I heartily agree with this one. Even without malicious bombing, new stories don't usually have enough votes to give a decent estimate of the long-term score.

The scores have nothing to do with the bombing. A lot of the right-out-of-the-gate new-list trolls are category trolls who hit anything in categories they deem 'sick', or someone looking for an author they don't like.

In either case they'll drop a bomb. Whether someone in Incest has a 3.5 or a 4.6, that troll is bombing them.

I also think the new story lists are visited by people not judging score but looking for something new to read based on category or maybe looking for a catchy title or a certain author.

The only thing immediate bombs do is keep someone from the cursed Red H everyone is obsessed with, and even then, once the story is no longer new, the scores do come up when the author's fans find it--fans, and people reading for the right reasons.

This is a simple scoring system, and, again, a mostly soft-voting crowd.

I don't think the system is a big issue; it's people complaining that their score is nowhere near what their always-amazing stories should warrant.

I feel like I'm channeling JBJ, but maybe it's time for the participation awards to come out. Everyone who posts a story gets a 5 and an H. Yay, you did it!
 
Each time this comes up (every three weeks?) I float the two seemingly simple and most effective changes to make: either drop the red H system now that the actual rating is showing on the lists, or add a color marking--4.50 and above "sizzling" and maybe 4.25 to 4.49 "hot."

Another option would be to replace the red H with a red G (for "Good read!") and give it to any story that scores above 4.0. This would bring the award into alignment with the site's stated scoring criteria (4* = "Really like it—good read!"), restore the prestige of a 4* vote, and make it possible to save one's 5* votes for really exceptional stories. Over the long run, this approach might eventually redistribute scores to more fully and transparently utilize the entire scoring range.

Would this be merely a "participation award?" Not really. A good read is a good read. It's something an author can be proud of.

Would this cheapen the award since so many stories would get one? Not necessarily. A good read is a good read. Lots of good reads is not a bad thing.

Would it be helpful to readers? Somewhat. But it would still leave it mostly up to the reader to decide what level of "hotness" they are looking for.
 
Another site I post on tried that. All that ended up happening was that 90% of the ratings were the same for all criteria, and therefore no different from a single vote. The system was abandoned because all it did was waste space in the database, and they went back to a single score only.

As a help to authors, it would be nice to have an 'advanced review' option. It would not need to be part of the forward facing rating system, but it could provide useful feedback with a list of questions and room for comments and suggestions.

I've taught technical workshops to tradesmen for many years. At the end of the classes we always ask participants to fill out a review page. It has been instrumental in improving the programs.

It could be ignored or used by any reader and any items could be left blank. Feedback would only be given by those interested enough to take the time -- possibly mostly by other authors. Having so many categories of feedback allows the author to make an assessment of the quality of a review. The story index could have a link to the advanced ratings.


Sample:


Rate the following 1-5:

Overall rating:
Grammar/spelling:
Style:
Character development:
Scene description:
Flow:
Erotic content:
General storytelling:
Dialogue:

Rate the following yes/no:

Would you recommend this story to others?
Would you be interested in sequels?
Are you interested in other stories by this author?

Comments or recommendations:
 
Until the toplists are deleted, all of the scoring is going to be corrupted anyway.
 
Many good points made in this thread, especially by the OP. One tweak of the scoring system that hasn't been mentioned, wouldn't fall under any of Bramblethorn's points, and would have the opposite effect of Simon's #11, is to give all chapters of a series the same score (e.g. the sum of all votes divided by the number of voters across all chapters). In terms of readers finding good stories, it is not very informative that those who liked the story enough to keep reading really liked chapter 32.
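
Concretely, that would mean pooling the votes before averaging; a minimal sketch, assuming each chapter carries its vote total and voter count:

    # Hypothetical sketch: one pooled score shared by every chapter of a series.
    def series_score(chapters):
        # each chapter is assumed to be a (sum_of_votes, number_of_voters) pair
        total_points = sum(points for points, voters in chapters)
        total_voters = sum(voters for points, voters in chapters)
        return round(total_points / total_voters, 2) if total_voters else None

    # e.g. series_score([(450, 100), (95, 20), (49, 10)]) returns 4.57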
 
8. Your system is computationally expensive and doesn't scale to a site which has half a million stories, probably tens of millions of individual votes, and updates several times a day.

This is more a limitation of the system and hardware, not of the idea. What seems "computationally expensive" today is only one generation of new hardware away from being a breeze. Amounts of unstructured data no one would have thought to munge ten years ago are now handled routinely and embarrassingly easily.

My personal view is to go for an IMDB/Rotten Tomatoes style of rating. Not all votes are equal: anonymous votes, and votes by users created in the last day or week, carry less weight than votes by long-term readers. Maybe even give members who have read/voted/commented on many stories "critic" badges, with their votes carrying even more weight. Maybe even have authors fit in somewhere. This way everyone still gets to vote and have their vote counted, while "prime votes" count for more.
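
A minimal sketch of that kind of weighting, purely illustrative; the weights and voter fields here are assumptions, not anything Literotica actually tracks:

    # Hypothetical weighted-vote average; weights and fields are made up.
    def vote_weight(voter):
        if voter["anonymous"]:
            return 0.25
        if voter["account_age_days"] < 7:
            return 0.5
        if voter.get("critic_badge"):
            return 2.0
        return 1.0  # ordinary long-term member

    def weighted_score(votes):
        # votes is a list of (voter, stars) pairs
        total = sum(vote_weight(v) * stars for v, stars in votes)
        weight = sum(vote_weight(v) for v, _ in votes)
        return round(total / weight, 2) if weight else None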

While there are ways to "game" this system as well (bots, straw voting) and there are ways to detect and resolve them, I doubt it would be worth the effort of any would-be Lit hacker to go THAT far.

Doing this for a few million votes every day is not computationally expensive any more. As for how to classify the existing stories that already have scores... I'd be okay if all the votes given so far remain egalitarian, as long as the new votes are weighted as above. Again, this system has precedent on all the major review sites out there.

10. Your system is too complicated. Most voters and authors don't understand it and will get angry about it, even if it's otherwise perfect. If this is you, don't be sad, you have a promising future in cricket.

Every South African curses you at this moment :D
 
This is more a limitation of the system and hardware, not of the idea. What seems "computationally expensive" today is only one generation of new hardware away from being a breeze. Amounts of unstructured data no one would have thought to munge ten years ago are now handled routinely and embarrassingly easily.

My personal view is to go for an IMDB/Rotten Tomatoes style of rating. Not all votes are equal: anonymous votes, and votes by users created in the last day or week, carry less weight than votes by long-term readers. Maybe even give members who have read/voted/commented on many stories "critic" badges, with their votes carrying even more weight. Maybe even have authors fit in somewhere. This way everyone still gets to vote and have their vote counted, while "prime votes" count for more.

While there are ways to "game" this system as well (bots, straw voting) and there are ways to detect and resolve them, I doubt it would be worth the effort of any would-be Lit hacker to go THAT far.

Doing this for a few million votes every day is not computationally expensive any more. As for how to classify the existing stories that already have scores... I'd be okay if all the votes given so far remain egalitarian, as long as the new votes are weighted as above. Again, this system has precedent on all the major review sites out there.



Every South African curses you at this moment :D

LOL Yes, there's precedent for it being completely wonky and prone to manipulation by both the elite reviewers and the unwashed masses to push their opinions — often in completely opposite directions.

The elites always end up beholden to somebody to maintain/increase their elite status, and the masses form mobs with torches and pitchforks to oppose them. Happens every time a system assigns "value" to one person's opinion over another.
 
LOL Yes, there's precedent for it being completely wonky and prone to manipulation by both the elite reviewers and the unwashed masses to push their opinions — often in completely opposite directions.

The elites always end up beholden to somebody to maintain/increase their elite status, and the masses form mobs with torches and pitchforks to oppose them. Happens every time a system assigns "value" to one person's opinion over another.

IMDB did have a simple average rating system until The Dark Knight topped their list in 2008, overtaking both The Shawshank Redemption and Citizen Kane, which were review-bombed. That is when they realized a completely "democratic" voting system would mean their Top List was dominated by the trend of the day rather than being a meaningful best-movies-of-all-time list.

I understand your concerns, but there should be a bit of a meritocracy to this. For instance, if there were a worldwide poll for the greatest sportsperson of all time, I'd love to vote and would hope my pick does well. However, I would also understand that the opinions of sports journalists and former/current sportspersons should count for more than mine.

How it would be implemented, what factors determine a voter's weight, how much a single vote affects a story--these are all discussions to be had. But we should at least agree on the what before delving into the how.
 
I understand your concerns, but there should be a bit of a meritocracy to this. For instance, if there were a worldwide poll for the greatest sportsperson of all time, I'd love to vote and would hope my pick does well. However, I would also understand that the opinions of sports journalists and former/current sportspersons should count for more than mine.

Why?

The journalists are bought and paid for, and the players' whole job is competition, which doesn't stop on the field/pitch/etc.

Their opinions aren't worth one iota more than anybody else's. The moment you decide they're more valuable, human nature demands that the "other" will rise up against them. Fine on a micro scale. Doesn't work on a macro scale. The mob will always arise. The elites will look down their noses at them, and even purposely antagonize them.
 
If someone has studied an issue extensively and discussed it a lot and perhaps even professionally dealt with it, of course their opinion on that issue is more valid--because it's better informed--than that of someone who hasn't given it much thought before giving an opinion. Let's not go dumb on this. This is the great fallacy of Internet discussion boards. All who weigh in are equal in access, not in valid, informed opinion.
 
Just let me see the score breakdown

Keep the rating as it is. I would only ask to see the breakdown of the scores on my author page: ## 1*, ## 2*, ## 3*, ## 4*, ## 5*, ## total. This would not be hard on the database.
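
If the raw star values are kept per story, that breakdown is a few lines to compute; a sketch only, since the real schema may look nothing like this:

    from collections import Counter

    # Hypothetical sketch: tally a story's votes into the ## 1* ... ## 5* breakdown.
    def breakdown(votes):  # votes is a list of star values, e.g. [5, 4, 5, 1]
        counts = Counter(votes)
        result = {f"{stars}*": counts.get(stars, 0) for stars in range(1, 6)}
        result["total"] = len(votes)
        return result

    # breakdown([5, 5, 4, 1]) returns {'1*': 1, '2*': 0, '3*': 0, '4*': 1, '5*': 2, 'total': 4}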

I do like @AlexBailey's idea for a feedback form, but don't store it in the DB; just make it a type of PM to the author. We all want feedback: did you not like my story because of something I can fix or do better in the future, or is it that you only like topic XXX and you read a story of mine that was topic YYY?

I know that if I get 1 or 2 of these types of feedback out of 10,000 I am doing well.
 
IMDB did have a simple average rating system until The Dark Knight topped their list in 2008, overtaking both The Shawshank Redemption and Citizen Kane, which were review-bombed. That is when they realized a completely "democratic" voting system would mean their Top List was dominated by the trend of the day rather than being a meaningful best-movies-of-all-time list.

Contrarian position: "best movie of all time" lists are competition for the sake of competition, more about bragging rights than providing useful guidance to audiences. The IMDB top lists are dominated by the tastes of geeky dudes who like movies about dudes in conflict with other dudes, because those guys are disproportionately likely to vote on IMDB. But even if they were truly representative of the average movie viewer's tastes... who here is average?

If one were going to put that kind of effort into Literotica, it'd be more useful to build a recommender system that gives tailored recs based on individual tastes. Which, yes, is quite doable with modern tech, but it's a far cry from the sort of suggestions I usually see for "simple" fixes to the voting system.
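
The simplest version of that is item-based collaborative filtering over the votes the site already has. A toy sketch, and nothing here reflects Literotica's actual data model; a production version would at least mean-center ratings and require a minimum number of co-raters:

    from collections import defaultdict
    from math import sqrt

    # Toy item-based recommender: "readers who rated this story also rated
    # these stories similarly." Assumes votes[reader][story] = stars.
    def similar_stories(votes, target, top_n=5):
        raters = [r for r, seen in votes.items() if target in seen]
        pairs_by_story = defaultdict(list)
        for r in raters:
            for story, stars in votes[r].items():
                if story != target:
                    pairs_by_story[story].append((votes[r][target], stars))

        def cosine(pairs):
            num = sum(a * b for a, b in pairs)
            da = sqrt(sum(a * a for a, _ in pairs))
            db = sqrt(sum(b * b for _, b in pairs))
            return num / (da * db) if da and db else 0.0

        ranked = sorted(pairs_by_story, key=lambda s: cosine(pairs_by_story[s]), reverse=True)
        return ranked[:top_n]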
 
If someone has studied an issue extensively and discussed it a lot and perhaps even professionally dealt with it, of course their opinion on that issue is more valid--because it's better informed--than that of someone who hasn't given it much thought before giving an opinion. Let's not go dumb on this. This is the great fallacy of Internet discussion boards. All who weigh in are equal in access, not in valid, informed opinion.

You can decide when someone's knowledge makes their opinion more valid than someone else's.

When someone makes that decision for others, the others become an opposition mob.
 
You can decide when someone's knowledge makes their opinion more valid than someone else's.

When someone makes that decision for others, the others become an opposition mob.

Well, not to any great extent can a third party reliably decide this. You (the third party) can't just decide for yourself when someone has, in fact, more knowledge of something than another does. It's innate that one of two people will have more knowledge on an issue than the other; it's just a fact. The Internet tries to break this down--as you are doing--into everyone being equal on an issue. They just aren't. The only equality they have is in access to the discussion/argument. Yes, a third party can decide when one has a more valid argument, but that's just guessing on the basis of the information made available. That's entirely separate from which of the two, in fact, has the better knowledge of the issue.

I have no idea how your second sentence has anything to do with the issue.
 
Why?

The journalists are bought and paid for, and the players' whole job is competition, which doesn't stop on the field/pitch/etc.

Their opinions aren't worth one iota more than anybody else's. The moment you decide they're more valuable, human nature demands that the "other" will rise up against them. Fine on a micro scale. Doesn't work on a macro scale. The mob will always arise. The elites will look down their noses at them, and even purposely antagonize them.

The Baseball Writers' Association of America decides who goes into the Hall of Fame.

Now, are they all ethical, voting 100% on numbers and merit? Of course not; some have grudges against certain players or teams and favor others.

However, if you look at the disgrace of All-Star voting in baseball, where people vote based on name recognition, leading to big names getting voted in even during horrible seasons while well-deserving players are snubbed... that's what you'd get if the Hall of Fame were decided by fan vote.

Professional beat writers and sports journalists know more about the game and the players than laypeople do, even with Google's help. These guys meet the players, talk to them, interview them, and see the best and worst of them.

You can use AGT or American Idol or any talent show decided by fan vote as an example.

Simon Cowell is a master at recognizing talent, so his vote is worth a lot and will help the best people get through. At the point in the show where the fans vote, it turns into a popularity contest: who looks better, who has the sob story... anything but talent.

The masses will never choose better than someone with intimate knowledge of whatever is being judged.
 