One bombed

I just wish they would give us all the data rather than a single number.

A score of 3.4 with 100 votes can mean two entirely different things:
Great story: 60 5s and 40 1s.
Crap story: 10 5s, 30 4s, 50 3s, 10 2s.
This is what Amazon and several other online retail sites do, but it still won't give an author much usable information about intentional low voting.
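The two distributions above really do land on the same average; a quick check (numbers taken straight from the example):

```python
# Two very different vote distributions that produce the identical 3.4
# average described above.
polarizing = [5] * 60 + [1] * 40                      # great story: 60 fives, 40 ones
middling = [5] * 10 + [4] * 30 + [3] * 50 + [2] * 10  # crap story: mostly middle votes

for name, votes in [("polarizing", polarizing), ("middling", middling)]:
    print(name, sum(votes) / len(votes))  # both print 3.4
```

Same single number, completely different readerships, which is exactly why the average alone tells you so little.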

The difficulty here as I see it is that Laurel has always been strongly supportive of the right to publish what an author wants to write and readers want to read, and it's likely she feels the same way about voting. I think the really low votes on an otherwise higher ranking story do get tossed, usually prior to the announcement of the winners of a contest. I have experienced this several times because I track the number of votes for each of my stories over time. What I've seen is a story with an OK rating suddenly jump up by 0.01 or 0.02 and I'll also see a corresponding decrease in the total number of votes.

As for any other method, how would the site determine if a "1" or "2" vote is the legitimate opinion of the voter or an attempt to knock a story down a few hundredths of a point? Maybe the answer is to only allow votes between "2" and "5"? No, wait, there are votes of "5" obviously cast by followers who would vote a "5" for anything their favorite author wrote. Maybe the site should only allow votes between "2" and "4". Then some authors would complain that the "2" votes were malicious. So, the site changes to only allow votes between "3" and "4". That's really no different from voting "like" and "dislike". In the end, the votes would probably split about 50-50. The fix would be as bad as or worse than the perceived problem when it comes to giving authors some idea of how well their readers like what they write.
 
As for any other method, how would the site determine if a "1" or "2" vote is the legitimate opinion of the voter or an attempt to knock a story down a few hundredths of a point?
Yeah, I didn't mean to imply that my method would help with detecting malicious voting. I guess implicit in my suggestion is that I don't care about malicious voting. If we have a five-number scale, I want to know how many 5s and 4s I got. That's the number of people whose lives were made infinitesimally better by my work.

If you really want a solution to THAT problem (not that it would be implemented), it already exists: the Elo system. In Elo, every "player" has a difficulty rating. Getting a 2 from a rater who hands out lots of 1s raises your rating more than getting a 4 from a rater who hands out lots of 5s. There are a million variations, but here you could also readily weight by genre. In other words, if your BDSM story gets high ratings from Romance readers, that counts for even more, etc.
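A minimal sketch of the calibration idea (centering each vote on the rater's own average, not the full Elo update rule); the vote log and rater names are hypothetical:

```python
from statistics import mean

# Hypothetical vote history for two raters. A vote is re-weighted against the
# rater's own average, so praise from a stingy rater counts for more than
# praise from a rater who hands out 5s freely.
votes = {"stingy": [1, 1, 2, 2], "generous": [5, 5, 4, 5]}

def adjusted(rater, score):
    """Return the vote relative to the rater's own norm (positive = above it)."""
    return score - mean(votes[rater])

print(adjusted("stingy", 2))    # 0.5: a 2 from a harsh rater is praise
print(adjusted("generous", 4))  # -0.75: a 4 from a fan is mild disappointment
```

A genre weighting would just be another multiplier on the adjusted score, keyed by the rater's home category.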

However, I doubt such a thing would ever be implemented here, since the site owners don't seem that concerned about any of this anyway. :)
 
However, I doubt such a thing would ever be implemented here, since the site owners don't seem that concerned about any of this anyway. :)
I don't think it's a lack of concern. I think it's more a lack of resources. Since the vote totals can change quickly, I suspect that's an automatic feature of the software and that votes and user names aren't correlated.

To implement what you describe would require a database of every single vote ever cast by every voter in every genre. It would have to be tracked by something other than a user name, since it's relatively easy to have multiple user names, including the infamous "Anonymous". At one time, there was a theory that a large group of college students were setting up different user names on every computer they could find and then using those computers and user names to "down vote" or "up vote" a particular story or a particular author's works. Then, for every vote cast, some algorithm would have to "correct" the actual vote based on the database data.

Currently, there are 500,000+ stories published on Literotica. If the average number of votes per story is only 100, that's 50 million data entries (user name or other identifier, vote, genre), and a lot of older stories have over 1,000 votes. I don't know how anything but a huge, dedicated server with some pretty powerful computing massaging the votes could ever keep up.
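The back-of-the-envelope arithmetic, using the numbers from the post (500,000 stories, 100 votes each on average):

```python
# Rough count of vote records needed for per-voter calibration, where each
# record is a (voter identifier, vote, genre) triple.
stories = 500_000
avg_votes_per_story = 100
entries = stories * avg_votes_per_story
print(f"{entries:,} vote records")  # 50,000,000 vote records
```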

If you check out some other erotica sites, what you'll find is that no matter what rating system they use, there are still the same complaints of not enough information and malicious voting on both ends of whatever scale the site uses. Literotica's rating system is definitely not perfect, but there are a lot of more important rating-type activities in the world that humans can't seem to get right no matter what the cost. We shouldn't expect a free erotica site to be perfect.
 
@ronde I totally agree. As I said, I think the 1s are valid votes. All I would like is two things. First, the capacity to see the full distribution, not just the average. Heck, even being able to see the median alongside the mean would be helpful. Second, the capacity to see that distribution by reader category: follower, followed, author, anon.
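A sketch of what that per-category breakdown could look like, using a made-up vote log where each vote is tagged with the reader's category:

```python
from collections import Counter
from statistics import median

# Hypothetical vote log: (reader category, score) pairs.
votes = [("anon", 1), ("follower", 5), ("follower", 5), ("author", 4),
         ("anon", 1), ("followed", 5), ("anon", 3)]

# Group scores by reader category.
by_category = {}
for category, score in votes:
    by_category.setdefault(category, []).append(score)

# Full distribution plus median for each category.
for category, scores in sorted(by_category.items()):
    print(category, dict(Counter(scores)), "median:", median(scores))
```

In a breakdown like this, a cluster of anonymous 1s sitting under a wall of follower 5s would be visible at a glance, which is the whole point of asking for more than the average.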

For me, it would make the 1-bombing I get totally irrelevant.
 
There are primarily two ways that ratings can be useful. One, they can provide feedback to the writer to let them know how well the audience they chose liked their story. If only those who liked it give feedback, the writer gets a falsely inflated impression of their success. Perhaps you've heard the phrase "Don't fix what ain't broke." Well, inaccurate feedback stops you from fixing what is broken, because you don't know that it's broken.

The second use is to inform other readers. Used properly, ratings reflect how well the combination of title, description, and category matches up with what the people clicking on that particular combination were looking for. Thus, the next person who is tempted by the combination can look at the rating and anticipate how likely it is they'll actually like the story. A falsely high rating can trick more people into clicking and being disappointed. I know some writers think views are the ultimate goal, but I disagree.

Now, of course, different people have different opinions. But, the average of honest ratings over time accounts for that. The average of dishonest ratings does not. Ratings will never be an objective measure, but we can strive to make them a useful subjective measure.

Finally, as you don't seem to understand how ratings are inflated, let me give you a simple example. Let's say that you and one other person read a story. They loved it. You hated it. In an honest world, they'd give it a 5, you'd give it a 1, so it would end up with a 3* average. Now, in this world where you abstain from voting below 4*, they give it 5, you give it nothing, and it ends up with a 5* average.
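The two-reader example above in a few lines:

```python
def average(scores):
    return sum(scores) / len(scores)

# One reader loves the story (5), one hates it (1).
honest = [5, 1]                          # both readers vote
abstain = [s for s in [5, 1] if s >= 4]  # votes below 4 are withheld

print(average(honest))   # 3.0
print(average(abstain))  # 5.0 -- the abstention inflates the average
```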
That's one set of interpretations/opinions. Perfectly valid.
Not the only ones.
I agree with some, not all.
 