Twitter Needs a Public Police Blotter

Twitter’s general policy is not to comment on actions taken against a specific account deemed to be in violation of its Terms of Service. Occasionally, in high-profile or controversial cases, they’re forced to offer clarifications, such as yesterday’s tweets from @Safety regarding Rose McGowan’s suspension.

[Screenshot: @Safety tweets regarding Rose McGowan’s suspension]

Today there exists a trust gap between Twitter’s professed interest in decreasing abuse on the platform and the community’s day-to-day experience using the product. There are plenty of blog posts about why Twitter has struggled in this area and plenty of suggestions for how to revise their product and policies. In fact, two of them are mine: My House, Your House and Don’t Let Abusers @ Name, informed by my long history on Twitter as a user and my product leadership stint at YouTube. But this next suggestion reaches further back in my career, to a social, creative environment that was in some ways more challenging to manage than Twitter… the virtual world Second Life.

I had the pleasure of working at Linden Lab, the company behind Second Life, during roughly its first three years of existence. We were a small team (we grew to about 30 people during my tenure), and I had the chance to work on product, marketing, and whatever other issues became pressing for the startup. We considered the product a platform, not a game, and the immense freedom of the virtual environment meant it would be impossible for us to programmatically enforce all of our community standards; we would also have to rely on user reporting. And the team wanted the world to feel open, guided by norms and personal preferences, rather than a place where we enforced a strict standard of interactions.

This left us with a challenge: how could we signal to users that we cared about the ToS without creating the feeling of a police state, which would limit creativity and make us seem solely responsible (not in a legal sense, but in the sense that Second Life would only succeed if users took accountability for their actions too)? How could we provide a feedback loop while still protecting the identity of individuals on the platform? Since we were creating a virtual world, we ended up borrowing a construct from the real one: a crime blotter.

Up until Second Life, the vast majority of online communities, especially massively multiplayer online games, didn’t want to discuss how they handled griefing and misbehavior. They worried that doing so gave fame (an incentive!) to the griefers, encouraging an achievement mindset in which you’d want to prove you were ingeniously destructive enough to do something the community managers had to address publicly. One byproduct of this silence was that it eliminated an important feedback loop: a platform’s owners signaling that they cared, and thus enrolling the public in maintaining the norms of the platform. Mutual trust.

To combat this within Second Life, I reached back to my grad school days at Stanford and my bemusement with the police blotter in the local newspaper. The initial attraction, as you can see below, was that Palo Alto was generally so bucolic that tiny annoyances made the crime reports. But even this helped reinforce the sense of peace and quiet: the neighborhood was so safe that “annoying children” was an investigable report.

[Image: Palo Alto police blotter]

So we created the Second Life Incident Reports summary, which reported the number of violations on the service for a given time period (during the early years we experimented with also noting which server they took place on, a rough analog of the “neighborhood” column in the Palo Alto blotter). Here’s a snapshot I found online showing the types of infractions summed over an unknown time period.

[Image: Second Life Incident Reports summary by infraction type]

Like many of our design decisions, this feature was [pats self on back] quite innovative, and I know it influenced other game/community designers down the road. But back to Twitter…

I believe Twitter should publish a “This Quarter in Trust & Safety” report four times a year. It should summarize some of the features and improvements they’ve made to help keep the platform productive. It should talk about what they’re working on for next quarter. And it should contain some version of a Twitter Police Blotter, which tells us the aggregate numbers of bots taken down, accounts warned, accounts blocked, and so on, perhaps even with some categorization by cause. We need to know that our flagging of tweets matters. We need to know that Twitter takes this more seriously than “thoughts and prayers.” And we need to meet in the middle on this: I respect Twitter’s right not to comment on every individual situation, and I recognize there are situational grey areas that require policies to be updated or decisions to be reversed. But sharing more data publicly would be a good step towards making the black box a bit more transparent.
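To make the blotter idea concrete, here’s a minimal sketch in Python of how individual enforcement actions could be rolled up into the kind of aggregate, anonymized summary I’m describing. The record fields, action names, and reason labels are all hypothetical illustrations, not Twitter’s actual schema or categories.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

# Hypothetical record of a single enforcement action; field names are
# illustrative only, not Twitter's internal data model.
@dataclass
class EnforcementAction:
    day: date
    action: str   # e.g. "bot_removed", "account_warned", "account_suspended"
    reason: str   # e.g. "spam", "targeted_harassment", "impersonation"

def quarterly_blotter(actions, quarter):
    """Aggregate individual actions into anonymized counts for a public report.

    Only totals by action type and by reason are published; no account
    identifiers survive the aggregation step.
    """
    by_action = Counter(a.action for a in actions)
    by_reason = Counter(a.reason for a in actions)
    return {
        "quarter": quarter,
        "total_actions": len(actions),
        "by_action": dict(by_action),
        "by_reason": dict(by_reason),
    }

# Example: a tiny, fabricated sample rolled up into one quarterly summary.
sample = [
    EnforcementAction(date(2017, 10, 1), "bot_removed", "spam"),
    EnforcementAction(date(2017, 10, 2), "account_warned", "targeted_harassment"),
    EnforcementAction(date(2017, 10, 3), "account_suspended", "targeted_harassment"),
]
print(quarterly_blotter(sample, "2017-Q4"))
```

The point of the sketch is simply that the published artifact is counts by category, exactly like the Second Life incident summary or the Palo Alto blotter, so the feedback loop exists without exposing any individual account.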