Bramblethorn
Sleep-deprived
- Joined: Feb 16, 2012
- Posts: 16,929
That's fair. This approach would honestly work best with a panel of judges instead of having readers perform the rating. I think your points, and @Bramblethorn's above, are valid.
Where I was coming from is that I don't find the current rating system very informative. Much of the time there isn't a strong correlation between, say, writing craft and scores (though there's some correlation, to be sure). And there's just no way to tell from a single number how arousing readers found a story, or how good the plot is. Sometimes I want to read stories for different purposes, and the all-in-one model just doesn't help much in my searches.
That's an entirely valid frustration. It's just an inherent difficulty in human-subjects research (which is effectively what we're doing here): we want All The Information, but we have to get it from people who will just walk away if we exhaust their very limited patience.
Sometimes, rather than trying to squeeze more information out of those people and ending up with less, the best solution is to try to get more value out of whatever info they did give us.
If you have a bunch of readers who value "heat" highly, and a different bunch who value "plot" instead, that will create a pattern in their ratings that can be used: you can identify that there are two different groups of readers, and Group A likes these stories but Group B prefers those stories. When a new reader comes along, you can use their own ratings to figure out which of those groups they belong to and make recommendations accordingly.
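To make that concrete, here's a toy sketch of the idea in Python. Everything in it is made up for illustration (the story names, the ratings, and the two reader groups); real recommender systems work at a much larger scale with fancier math, but the logic is the same: find existing readers whose ratings resemble the newcomer's, then surface the stories those readers liked.

```python
from math import sqrt

# Hypothetical ratings, 1-5. "Group A" readers favour heat-heavy stories,
# "Group B" readers favour plot-heavy ones; the grouping emerges from the
# numbers, we never label it explicitly.
ratings = {
    "reader1": {"steamy1": 5, "steamy2": 4, "plotty1": 2},
    "reader2": {"steamy1": 4, "steamy2": 5, "plotty2": 1},
    "reader3": {"plotty1": 5, "plotty2": 4, "steamy1": 2},
    "reader4": {"plotty1": 4, "plotty2": 5, "steamy2": 1},
}

def similarity(a, b):
    """Cosine similarity over the stories both readers have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[s] * b[s] for s in shared)
    norm_a = sqrt(sum(a[s] ** 2 for s in shared))
    norm_b = sqrt(sum(b[s] ** 2 for s in shared))
    return dot / (norm_a * norm_b)

def recommend(new_ratings, all_ratings, top_n=2):
    """Weight each existing reader by similarity to the newcomer, then
    score unseen stories by their similarity-weighted average rating."""
    scores, weights = {}, {}
    for other in all_ratings.values():
        sim = similarity(new_ratings, other)
        if sim <= 0:
            continue
        for story, r in other.items():
            if story in new_ratings:
                continue  # don't recommend what they've already read
            scores[story] = scores.get(story, 0.0) + sim * r
            weights[story] = weights.get(story, 0.0) + sim
    ranked = sorted(scores, key=lambda s: scores[s] / weights[s], reverse=True)
    return ranked[:top_n]

# A newcomer who loved one plot-heavy story gets steered toward the
# other stories that the plot-loving readers rated highly.
print(recommend({"plotty1": 5}, ratings))  # "plotty2" ranks first
```

With only a handful of toy readers the similarity measure is crude (one shared story is enough to look "similar"), but it shows how the group structure in the ratings does the work: nobody ever told the system what "heat" or "plot" means.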
This is basically how businesses like Amazon work: they know there's a limit to how much rating data they can get out of their customers, so they put a lot of work into analysing the paltry info they do get.