Heat Detector is well known network-wide for its ability to detect rude and abusive comments. However, the problem we are facing at hand is that not many people are providing accurate feedback (the bot uses ML, and feedback is very important for its improvement). This issue could easily be solved by having a web dashboard.
Unlike CopyPastor, which had to be written from scratch, the dashboard for HD can easily be forked from Sentinel, Natty's dashboard. The similarities between Natty and HD are:
- Both are related to flagging (comment vs answer)
- Both have various reasons for detections
- Both need feedback
Hence we can remodel Sentinel and create a similar web dashboard for HD. As none of us knows Ruby, we'll need to trouble Art again.
Changes needed on the HeatDetector side:
- A proper/strict rule to differentiate between what should be a true positive and what should be a false positive.
- A way to provide feedback through a command, so that a userscript can make use of it.
- And, of course, linking to the dashboard in every report.
- Framing strict rules to differentiate a heated argument from an ordinary one is often subjective. Quantifying this in some way in order to improve the algorithm would be the first challenge. (This is not related to the dashboard development work itself, but to the way we provide feedback.)
- A few of the comments might be borderline, and we wouldn't want to provide feedback on these. Hence a way to explicitly "not provide feedback" would be needed.
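The feedback-command idea above could work roughly like this. A minimal sketch, assuming a `fb <verdict>` chat-reply syntax and a dashboard endpoint; the command name, the `tp`/`fp`/`skip` labels, and the URL are all hypothetical placeholders, not an existing API:

```python
import json
from urllib import request

# Hypothetical endpoint; the real URL/API would be decided once the
# Sentinel fork exists.
DASHBOARD_URL = "https://example.invalid/api/feedback"

# Feedback values mirroring the list above: true positive, false
# positive, and an explicit "skip" for borderline comments.
VALID_FEEDBACK = {"tp", "fp", "skip"}

def parse_feedback_command(message):
    """Parse a chat reply like 'fb tp' or 'fb skip'.

    Returns the verdict string, or None if the message is not a
    feedback command. A userscript could emit exactly these replies
    when a feedback button is clicked.
    """
    parts = message.strip().lower().split()
    if len(parts) == 2 and parts[0] == "fb" and parts[1] in VALID_FEEDBACK:
        return parts[1]
    return None

def send_feedback(comment_id, verdict):
    """POST the verdict to the (hypothetical) dashboard endpoint."""
    body = json.dumps({"comment_id": comment_id, "feedback": verdict}).encode()
    req = request.Request(DASHBOARD_URL, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)
```

The point of the `skip` verdict is exactly the last bullet: borderline comments get recorded as "seen but not judged" rather than silently ignored.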
We can host it on the sobotics webserver. That would not be an issue.
I've thought of a few names, which are closely related to heat detection:
- Inferno
- Sprinkler
I love Inferno :D since we will fill the dashboard with bad stuff. A comment like "Oh gosh, I missed that. Stupid me." is a false positive (a problem mostly caused by the combination of Perspective and regex); feeding such comments back as good comments can probably avoid the problem.
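To illustrate why such self-deprecating comments trip the detector: a word-list regex has no notion of who the insult is aimed at. A minimal sketch with an assumed (made-up) pattern, not HD's actual rules:

```python
import re

# Assumed naive rude-word pattern for illustration only; HD's real
# regex and Perspective scoring are more involved.
RUDE_RE = re.compile(r"\b(stupid|idiot|shut up)\b", re.IGNORECASE)

def naive_is_rude(comment):
    """True if the comment contains any listed rude word, regardless
    of whether it is aimed at someone else or at the author."""
    return bool(RUDE_RE.search(comment))

naive_is_rude("Oh gosh, I missed that. Stupid me.")  # flagged, yet harmless
naive_is_rude("You are stupid.")                     # flagged, genuinely rude
```

Feeding the first kind of comment back as a good comment is what should teach the ML side to override the raw pattern match.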
Giving correct feedback to Heat Detector is fundamental, since it mainly depends on machine learning. The problem with comments is that they are subjective; we have posted some meta questions to find a threshold, but with no direct result.
Currently, before adding the feedback to the model, I need to review it manually. I think we will need a third feedback option, something like "so so bad"; that way, the crystal-clear rude comments can be added to the feed directly.
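That manual-review step could then be narrowed to just the borderline items. A sketch of the triage described above, assuming `tp`/`fp` labels for clear-cut verdicts and a hypothetical `so-so-bad` label for the in-between cases:

```python
def triage(feedback_items):
    """Split collected feedback before (re)training.

    feedback_items: list of (comment_text, label) pairs, where label
    is 'tp', 'fp', or 'so-so-bad' (label names are placeholders).

    Clear-cut verdicts go straight into the training feed; borderline
    items are queued for manual review instead of polluting the model.
    """
    training_feed, review_queue = [], []
    for text, label in feedback_items:
        if label in ("tp", "fp"):
            training_feed.append((text, label))
        elif label == "so-so-bad":
            review_queue.append(text)
    return training_feed, review_queue
```

The design choice here is simply that nothing labeled borderline ever reaches the model without a human look, which is the whole point of the third option.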