@aredridel
Last active March 15, 2017 21:22
twitter abuse prevention?

Know how the network clusters. Then, when someone's reported, see how their cluster relates to the reporter's. Finding the source isn't too many hops away. That'll help find the inciting players -- the Milos, for example. It won't find people who organize in another medium but are unrelated on Twitter; second-order analysis of who piles on connects them, though. That's another mode of clustering.

In either case, be more suspicious based on (network) distance.
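A minimal sketch of that in Python, with networkx's modularity communities standing in for whatever clustering Twitter would actually run; `follow_edges` and the scoring weights are hypothetical:

```python
# Sketch: cluster the follow graph, then score a report by how far the
# reporting account sits from the reported one. networkx's modularity
# communities stand in for a real clustering pipeline; follow_edges and
# the weights are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_clusters(follow_edges):
    """follow_edges: iterable of (follower, followee) pairs."""
    g = nx.Graph()
    g.add_edges_from(follow_edges)
    cluster_of = {}
    for i, members in enumerate(greedy_modularity_communities(g)):
        for user in members:
            cluster_of[user] = i
    return g, cluster_of

def network_distance(g, a, b):
    """Hop count between two accounts; no path at all reads as maximally far."""
    try:
        return nx.shortest_path_length(g, a, b)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return float("inf")

def suspicion(g, cluster_of, reporter, reported):
    """Higher score for mentions arriving from a distant, different cluster."""
    hops = network_distance(g, reporter, reported)
    same_cluster = cluster_of.get(reporter, -1) == cluster_of.get(reported, -2)
    return (0.0 if same_cluster else 1.0) + min(hops, 6) / 6.0
```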

Then on the product design side: make a way to separate users and their first-order follows. You report someone, the computation checks it out as coming from a far cluster -- and especially if it can find an inciting event? Just block those mentions. Like, don't even let the tweet be posted. Gonna mention someone's username? Then you gotta not be a jackass.
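The gate itself could be as dumb as a threshold on that score; the threshold and the inciting-event bump here are invented illustrations:

```python
# Sketch of the mention gate: once a report checks out as coming from a
# far cluster, hold mentions from that cluster before the tweet is ever
# delivered. The threshold and inciting-event bump are invented numbers.
FAR_CLUSTER_THRESHOLD = 1.5  # hypothetical tuning knob

def should_deliver_mention(network_suspicion, found_inciting_event):
    """network_suspicion: a score like suspicion() in the sketch above."""
    score = network_suspicion + (0.5 if found_inciting_event else 0.0)
    return score < FAR_CLUSTER_THRESHOLD
```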

It's reactive, mostly automated, but it takes reports seriously. It can eliminate the pile-on effect, especially if you run the algorithm proactively when someone's rate of mentions goes way up.
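Proactive triggering could be a sliding-window rate monitor per account; the window size, spike factor, and per-account baseline below are all guesses:

```python
# Sketch: watch each account's mention rate and trigger the cluster
# analysis proactively when it spikes far above baseline.
import time
from collections import deque

class MentionRateMonitor:
    def __init__(self, window_seconds=3600, spike_factor=10.0, baseline_per_window=1.0):
        self.window = window_seconds
        self.spike_factor = spike_factor
        self.baseline = baseline_per_window  # would be learned per account
        self.timestamps = deque()

    def record_and_check(self, now=None):
        """Record one incoming mention; return True if the rate looks like a pile-on."""
        if now is None:
            now = time.time()
        self.timestamps.append(now)
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.baseline * self.spike_factor
```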

Also rate-limit non-conversational mentions by mentionee, and conversational ones by mentioner. Try to even out what clusters of people they're coming from. Suddenly the Milo effect is greatly reduced, and other, kinder mentions can get in. It optimizes for diversity of network, which I think is a great proxy for diversity of ideas.
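One way to sketch that: sliding-window limits keyed by mentionee or mentioner depending on mention type, plus a cap on any single cluster's share of a mentionee's inbox. All limits here are made up:

```python
# Sketch: limit non-conversational mentions per mentionee and
# conversational ones per mentioner, and cap any one cluster's share of
# a mentionee's inbox so mentions stay network-diverse.
import time
from collections import defaultdict, deque

class MentionLimiter:
    def __init__(self, per_key_limit=30, window=3600, max_cluster_share=0.34):
        self.per_key_limit = per_key_limit
        self.window = window
        self.max_cluster_share = max_cluster_share
        self.events = defaultdict(deque)   # rate-limit key -> timestamps
        self.cluster_counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)     # mentionee -> delivered mention count

    def allow(self, author, mentionee, author_cluster, is_conversational):
        # Conversational mentions limit the mentioner; non-conversational
        # ones limit by mentionee, protecting the target.
        key = author if is_conversational else mentionee
        q = self.events[key]
        now = time.time()
        while q and q[0] < now - self.window:
            q.popleft()
        if len(q) >= self.per_key_limit:
            return False
        total = self.totals[mentionee]
        if total > 0:
            share = self.cluster_counts[mentionee][author_cluster] / total
            if share >= self.max_cluster_share:
                return False  # this cluster is already dominating the inbox
        q.append(now)
        self.cluster_counts[mentionee][author_cluster] += 1
        self.totals[mentionee] += 1
        return True
```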

Then we gotta work on Twitter as a reading platform; that's the solution to the onboarding problem. Give people great conversations to read. Give people something to discover. Tune what to suggest to them as they join networks. Take off the training wheels as they start participating in global conversation. De-emphasize posting in this mode, so that reading is the easiest thing rather than responding, especially if conversations are distant and/or old -- like network distance, distance in time to the thing being responded to is suspect.
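The time-distance signal could fold into the same suspicion score; the thirty-day saturation below is an arbitrary assumption:

```python
# Sketch: fold reply age into the suspicion score, so replies to old,
# distant conversations read as suspect.
def reply_suspicion(network_suspicion, reply_age_seconds):
    age_days = reply_age_seconds / 86400
    age_score = min(age_days / 30, 1.0)  # maxes out at a month old
    return network_suspicion + age_score
```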

Then on top of this, the icing on the cake would be to actually analyze social power and apply a feminist filter to the whole thing, so that instead of treating each cluster equally, clusters get a bit of weight based on just how much shit they get. And the shit they get is in fact a measurable thing in this system: we have a heap of reports, and validation of those reports by correlating them to inciting events.
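As a sketch, cluster weight could just track each cluster's share of validated reports ("validated" meaning correlated with an inciting event, as above); the baseline and formula are assumptions:

```python
# Sketch: weight each cluster by its share of validated abuse reports,
# so filtering is stricter on behalf of heavily-targeted groups.
def cluster_weights(validated_reports_by_cluster):
    """validated_reports_by_cluster: {cluster_id: validated report count}."""
    total = sum(validated_reports_by_cluster.values()) or 1
    return {cluster: 1.0 + count / total  # baseline 1.0 plus share of abuse taken
            for cluster, count in validated_reports_by_cluster.items()}
```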

So how do you deal with egg accounts being abusive? Tie them back to other users on the system. Correlate loosely -- by IP, or by cookies present on the computer -- and keep this private, like absolutely private to Twitter. But feed that in to help detect hidden members of a cluster.
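A union-find over those private signals is one way to sketch the correlation; accounts sharing an IP or cookie end up in the same component. The signal tuples are illustrative:

```python
# Sketch: union-find over private correlation signals (shared IPs,
# shared cookies) ties throwaway egg accounts back to known users.
# This mapping stays strictly internal to the platform.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def correlate(account_signal_pairs):
    """account_signal_pairs: (account, signal) pairs, e.g.
    ("egg123", ("ip", "198.51.100.7")). Accounts sharing any signal
    land in the same component, exposing hidden cluster members."""
    uf = UnionFind()
    for account, signal in account_signal_pairs:
        uf.union(account, signal)
    return uf
```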

@emceeaich

Another way to do it besides network distance is how the poster arrived at the subject tweet. Nora Reed's Twitter honeypots are discovered by trolls through search. If the tweet being responded to was found through search, there's a non-zero chance the reply is from a troll.
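That arrival-path signal slots into the same scoring as a sketch; the weight is illustrative:

```python
# Sketch: treat "arrived via search" as one more suspicion feature, per
# the honeypot observation above.
def arrival_suspicion(base_score, arrived_via_search, follows_target):
    if arrived_via_search and not follows_target:
        return base_score + 0.75  # searched their way in with no prior relationship
    return base_score
```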
