Even well-intentioned developers will try to see if they can game or exploit a system, so we MUST assume this will happen.
-
Perfectly accurate counting is impossible, so we might as well keep everything as simple as possible.
-
No ranking overviews! This is useless data anyway, and leaving it out makes people less inclined to game the system. Only show a download counter for individual libs and a graph over the versions/days. This is useful to the author and others: you can see the overall increase/decrease without the numbers having to be super-accurate.
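The per-version/per-day graph only needs coarse counts. A minimal sketch of that rollup, assuming a hypothetical list of raw download events (the data shape and method name are illustrative, not an existing API):

```ruby
require "date"

# Hypothetical raw download events: [version, timestamp] pairs.
events = [
  ["1.0.0", Time.utc(2014, 5, 1, 9)],
  ["1.0.0", Time.utc(2014, 5, 1, 17)],
  ["1.1.0", Time.utc(2014, 5, 2, 8)],
]

# Roll raw events up into per-version, per-day counts -- accurate
# enough for a trend graph, with no ranking between libs implied.
def daily_counts(events)
  events.each_with_object(Hash.new(0)) do |(version, time), counts|
    counts[[version, time.to_date]] += 1
  end
end
```

The graph would then just plot `daily_counts` per version over time.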
-
Add a ‘rating’ with a required comment explaining why the rating is X. This rating can be shown in the search result list to sort by, but again, NEVER in an overall ranking list. I would have loved to go for a 1-2-3 rating, but that would mean most libs get a 2. We need to work on the labels so that reviews get the most meaningful rating that others can relate to. E.g. a user rates a lib with a ‘1’ and comments: “crashes while uploading files”. Another user looks at the review and thinks: “well, I can ignore this review, because I won’t upload files and the other reviews look positive”.
- 1 = crashes (in production apps)
- 2 = works, but unstable/design issues
- 3 = usable
- 4 = good implementation
- 5 = perfect implementation, used in production many times
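The two rules above (rating must be on the 1-5 scale, comment is mandatory) can be sketched as a tiny validation; the constant and method names are assumptions for illustration, not an existing API:

```ruby
# Labels from the proposed 1-5 scale.
RATING_LABELS = {
  1 => "crashes (in production apps)",
  2 => "works, but unstable/design issues",
  3 => "usable",
  4 => "good implementation",
  5 => "perfect implementation, used in production many times",
}

# A review is only valid with an in-range rating AND a non-blank
# comment, so other users can judge whether the rating applies to them.
def valid_review?(rating, comment)
  RATING_LABELS.key?(rating) && !comment.to_s.strip.empty?
end

valid_review?(1, "crashes while uploading files") # => true
valid_review?(3, "")                              # => false
```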
-
API throttling by IP address. The approach depends on what Heroku offers and has experience with, but NGINX, for instance, has a module specifically for this.
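With NGINX, the module in question is `ngx_http_limit_req_module`. A sketch of what that could look like; the zone name, rate, and path are placeholders:

```nginx
http {
    # One bucket per client IP, allowing roughly 10 requests/second.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        location /api/ {
            # Permit short bursts; excess requests are rejected.
            limit_req zone=api_limit burst=20 nodelay;
        }
    }
}
```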
-
Keep a record in the DB of each ping, plus IP address and timestamp. This way we can always go back and check whether some gaming was still going on, e.g. by showing graphs per IP address. But this is all NON-public.
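One way that internal check could work: scan the ping log for IPs that ping far more than normal in a single day. A sketch, assuming a hypothetical `[ip, timestamp]` row shape and an arbitrary threshold:

```ruby
require "date"

# Flag IPs whose daily ping count exceeds a threshold -- a crude,
# after-the-fact gaming check over the non-public ping log.
def suspicious_ips(pings, threshold: 100)
  per_day = pings.each_with_object(Hash.new(0)) do |(ip, time), counts|
    counts[[ip, time.to_date]] += 1
  end
  per_day.select { |_, count| count > threshold }
         .keys.map(&:first).uniq
end
```

This never needs to be exposed publicly; it only supports the per-IP graphs mentioned above.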
This is indeed a much simpler design than the original spec. Glad to see that pared down.
I'm in full agreement about being vigilant against metrics becoming a vehicle for gamification. Ranking overviews on systems like Rubygems only serve to belittle the utility of niche libraries, which I believe has a negative impact on library authors' willingness to contribute (besides, GitHub already provides a sufficient level of implicit social and code-quality metrics).