I love flagging spam on Twitter. And reporting ToS violations on Etsy. And spammy comments on WordPress blogs. Basically I’m the guy willing to help pull weeds and fix broken windows to maintain order in my community. Having worked on abuse tools at YouTube, I know how vital user flagging is to enforcing community guidelines on these platforms. But the vast majority of these systems fail in one important way: they never close the feedback loop with the user who reported the issue.
No “Thanks Hunter. Because of your help, we’ve closed 124 spam Twitter accounts”
Never a “Hunter, the comment you flagged on 10/24/14 has been removed. Thanks for keeping YouTube a positive community!”
Somehow the gamification trend bypassed community support tools. The arguments against providing these types of responses usually come down to:
a) spammers will use this information to reverse engineer flagging algorithms
b) if a flagged piece of content isn’t removed, the person who reported it will be pissed, so why rub it in their face
Neither of these rings very true for me. Sophisticated spammers already have plenty of ways to test spam/abuse detection systems using bots, APIs, etc. And there are clearly ways to provide generalized feedback to the flagging user without reporting back decisions on an item-by-item basis. As for (b), I’m more pissed feeling like my flags are just being thrown into a black hole. Plus, feedback not only motivates but also trains the reporting user.
So my POV is that the best community support systems include a feedback loop for the users who proactively report violations. I believe Secret does something like this for flagging violating secrets. It’s time the larger platforms did the same.
Update: yup, here’s what Secret does
Update 2: Facebook does this too