Should Regulators Force Facebook To Ship a “Start Over” Button For Users?

I don’t really understand most of the calls to “regulate” Facebook. There are some concrete proposals on the table regarding political ads and updating antitrust for the data age, but other punditry is largely consumer advocacy kabuki. For example, blunting the data Facebook can use to target ads or tune the newsfeed hurts the user experience, and there’s really no stable way to draw a line around what’s appropriate versus not. These experiences are too fluid. But while I want to keep the government out of the product design business, there’s an alternate path which has merit: establish a baseline for the control a person has over their data on these systems.

Today the platforms give their users a single choice: keep your account active or delete your account. Sure, some expose small amounts of ad targeting data and let you manipulate that, but on the whole they provide limited or no control over your ability to “start over.” Want to delete all your tweets? You have to use a third party app. Want to delete all your Facebook posts? Good luck with that. Nope, once you’re in the mousetrap, there’s no way out except account suicide.


BUT is that really fair? Over multiple years, we all change. Things we said in 2011 may or may not represent us today. And these services evolve – did we think we’d be using Facebook as a primary source of news and private messaging back when we were posting baby photos? Did we think Facebook would also own Instagram, WhatsApp, Oculus and so on when we created accounts on those services? We’re the frogs, slow-boiling in the pot of water.

What if every major platform were required to have something between Create Account and Delete Account? One which allows you to keep your user name but selectively delete the data associated with the account? For Facebook, you could have a set of individual toggles: Delete All Friend Connections, Delete All Posts, Delete All Targeting Data. Each of these could be used individually or together to give you a fresh start. Maybe you want to preserve your social graph but wipe your feed? Maybe you want to keep your feed but rebuild your graph?

Or for Twitter: Delete All Likes, Delete All Tweets, Delete All Follows, Delete All Targeting Data.

Or for YouTube: Delete All Uploads, Delete All Subscriptions, Delete All Likes, Delete All Targeting Data.

The technical requirements to develop these features are only complicated in the sense of making sure you’re deleting the data everywhere it’s stored; otherwise, every product already supports a “null” state – it looks very much like a new account. This leads me to believe that the only reasons these features don’t exist today are (a) it would be bad for business and (b) an actual or perceived lack of consumer demand. Anecdotally, it feels like (b) is changing – more and more people I know wipe their tweets, talk about deleting their histories, and so on. Imagine the ability to stage a “DataBoycott” by clearing your history if you think Facebook is taking liberties with your privacy. That’s what keeps power in check.
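To make the “deleting it everywhere it’s stored” point concrete, here’s a minimal sketch of what a selective reset could look like under the hood. Everything here – the store names, the DataCategory buckets, the function names – is hypothetical, invented to show the shape of the problem rather than any platform’s actual architecture:

```python
from enum import Enum, auto

class DataCategory(Enum):
    # Hypothetical buckets matching the toggles described above
    POSTS = auto()
    CONNECTIONS = auto()
    TARGETING = auto()

class DataStore:
    """One of the many places a platform persists user data
    (primary DB, search index, caches, analytics warehouse...)."""
    def __init__(self, name: str):
        self.name = name
        self.records: dict[str, dict[DataCategory, list]] = {}

    def purge(self, user_id: str, category: DataCategory) -> int:
        """Delete one category of one user's data; return count removed."""
        user_data = self.records.get(user_id, {})
        return len(user_data.pop(category, []))

def start_over(user_id: str, categories: set[DataCategory],
               stores: list[DataStore]) -> None:
    """Selective reset: the account survives, the chosen data doesn't.
    The hard part is the fan-out -- every store must be enumerated."""
    for store in stores:
        for category in categories:
            count = store.purge(user_id, category)
            print(f"purged {count} {category.name} records from {store.name}")

# e.g. wipe the feed and ad profile but keep the social graph
stores = [DataStore("primary_db"), DataStore("search_index"),
          DataStore("ad_warehouse")]
start_over("user123", {DataCategory.POSTS, DataCategory.TARGETING}, stores)
```

The product surface is trivial (three toggles); the engineering work is the exhaustive inventory of stores, which is exactly why “it’s hard” isn’t a great excuse.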

So regulators, you want to help consumers? Don’t prevent tech companies from building the best products they can. Instead require them to consistently provide an escape hatch by giving their users the ability to START OVER without having to fully delete their accounts.

Request For App: Calls I Need To Make

Here’s what I want.

The ability to add a phone number/contact to a list. With one-touch dialing from that list entry.

The ability to set variables at the contact/# level that are either persistent or apply only to this call. For example, time zone or priority, or expected call duration (how long I need for the call).

Default is for the list to be sorted “manually” (probably reverse chron of entry – Last In, First Out), but I’d also want a “smart sort” based upon the amount of time I have to make calls before my next meeting, what part of the day it is in each contact’s home time zone, and so on.

After making this prioritization decision, I want to be able to press “Play” and have the app automatically call the first number. If it connects, we start talking. If it doesn’t connect, it moves to the next number in my list (figure there’s also a “skip” command).

Maybe there’s a feature that texts the person first to say, “Hunter is available to chat, reply Y if you’d like him to call you” or something like that.

Basically it’s a To Do App that’s optimized solely for phone calls.
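If someone were to prototype this, here’s roughly what I imagine the core looks like – a small data model plus the “smart sort.” All of the field names and scoring weights are my invention, just a sketch of how little logic the MVP actually needs:

```python
from dataclasses import dataclass, field
from datetime import datetime
from zoneinfo import ZoneInfo

@dataclass
class CallEntry:
    name: str
    number: str
    timezone: str = "America/Los_Angeles"  # persistent per-contact variable
    priority: int = 1                      # higher = more urgent
    expected_minutes: int = 15             # how long I need for the call
    added_at: datetime = field(default_factory=datetime.now)

def smart_sort(entries: list[CallEntry], minutes_free: int) -> list[CallEntry]:
    """Rank calls that fit my free window, preferring high priority
    and contacts for whom it's currently a civilized local hour."""
    def score(e: CallEntry) -> float:
        local_hour = datetime.now(ZoneInfo(e.timezone)).hour
        fits = e.expected_minutes <= minutes_free
        awake = 9 <= local_hour <= 20      # don't wake anyone up
        return 2.0 * e.priority + (5.0 if fits else -5.0) + (3.0 if awake else -10.0)
    return sorted(entries, key=score, reverse=True)

def play(entries: list[CallEntry], minutes_free: int) -> None:
    """'Play' = walk the sorted list; a real app would hand each number
    to the phone's dialer and advance on no-answer or a 'skip' tap."""
    for entry in smart_sort(entries, minutes_free):
        print(f"dialing {entry.name} at {entry.number}...")

play([CallEntry("Nathan", "+1-555-0100", priority=3, expected_minutes=10)],
     minutes_free=20)
```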

Does anything like this exist?


2018’s Word of the Year: Coalition


Type “coalition” into Google Trends and you get a disturbing result: a 12-year decrease in interest. It’s anecdotal, but it’s easy to map this graph onto the rise of political and societal tribalism – a move away from the idea that we can work together even if some of our beliefs are in conflict.


“We might not agree on everything but can we agree to work together on this?” seems like a pretty powerful and potentially effective approach. One that most of us learn informally during childhood. “Politics makes strange bedfellows” is another idiom I recall hearing often when studying governance (turns out it originates from Shakespeare’s The Tempest).

Going into the next year perhaps we can all be a little more open to finding the common ground in our relationships.

What I Think We’re Talking About When We’re Talking About What We Can’t Talk About

It’s no longer worth it to vocalize controversial beliefs. Silicon Valley has become a PC echo chamber. I can’t say what I think without fear of reprisal.

These are not convictions I personally hold but ones which I’ve heard expressed with increasing volume from people I know well and people I don’t know as well, in public spaces and in private conversations. Often these sentiments are voiced by 30- to 50-year-old white men (and women) of economic privilege. I say this not to discredit their feelings or observations but because (a) it does seem to be relevant and (b) that’s the group which dominates my own social circles, which means my POV is constrained by limitations in perspective.

But obviously, since you’re reading this, I felt confident enough that I had something to say to wade into this conversation. Not to dissect a blog post. Not to provide sufficient evidence that anyone is right or wrong in their assertion. And certainly not to call out any one person in particular. Rather, here’s my grand unified theory as to why this We Don’t Tolerate Unpopular Beliefs Any Longer feeling exists.

Tech is No Longer the Underdog And We Still Haven’t Fully Grokked What Power Means.

The oversimplified historical founding myth of the technology industry was that a critical mass of nerds found themselves in SV and built a beautiful meritocracy, where good ideas and data win the day. And where wealth was almost a bug not a feature – a byproduct of being right, rather than the goal itself. Of course much of this is false but it’s powerful. Fast-forward to modern day and you have an industry amassing tremendous amounts of power and money which hasn’t yet fully come to grips with these circumstances – the responsibility, the gravitas. So we can see ourselves as well-intended underdogs while the reflection in the mirror is no longer as simple. It’s sorta like when people complain that it matters what the President says – that there are no throwaway lines when you have that role.

Information Broadcast Means More People Have Voices and We’re More Often Speaking to People Who Don’t Know Us.

Maybe segments of the population always had strong reactions to controversial ideas but those people (women, non-whites, the poor) didn’t have a microphone. Perhaps nothing has changed other than giving voice to a broader set of the population? This is a good thing by the way and does lead to the increased exposure and examination of racism, classism, misogyny, and so on. Some of those “unpopular ideas” are just plain wrong, lead to real harm for people and work only to preserve an existing power structure (which actually *constrains* innovation versus allowing new ideas, voices and people to rise).

Additionally, broadcast technologies allow us to reach larger groups of people than ever before. People who often don’t know the speaker or who are receiving a snippet out of context. I’ve had words of mine ‘blow up’ in communities of people who don’t know me – it sucks, but in some ways it’s the tradeoff for using these tools right now. If the tools, and our desire to use them constructively, continue to evolve, it’ll get better (hopefully).

There’s an Outrage Economy

Because there’s a surplus of content but finite attention, one currency of these broadcast platforms is emotion, and outrage is a strong magnet. “RT With Comment!!!!” SNARK GETS LIKES. Any white male tech worker who fucks up is either a “Google Executive” (when he was really just a middle manager) or a “Tech Bro.” It’s wearying, it’s tiresome, it’s unnecessarily broad and it divides people. Try not to participate.

Opinions Are Like Assholes. And Groups of Assholes Are Your Tribe.

Since when do you need to have an opinion or be an expert on everything? Sometimes you need to STFU. But of course one of the best things about the internet is the ability to find a critical mass of people who think the same way you do. So having opinions actually increases the surface area of your ability to be part of a group, to be accepted, to feel secure. These are very basic human needs and emotions. So there’s opinion inflation where it feels better to have one and find your tribe.

You’re Supposed to Be Willing to Take Heat for a Belief. 

Ok, unlike the previous concepts, this one isn’t specifically linked to the technology industry. But somewhere along the way we decided there should be zero cost to holding an unpopular belief, and I’m not sure zero cost is optimal. Maybe a bit of friction is what forces you to consider why people disagree with you? Maybe a bit of friction helps you prioritize what’s worth your time and energy? Maybe there are hills worth dying on and hills that aren’t even worth scaling? Oh wait – hot take: because social spaces have been built primarily for us to react with support (LIKE, FAV, THUMBS UP), we’re now skittish and soft when it comes to DISLIKE, THUMBS DOWN. There, I knew I could find a tech angle.

Now, let me transition to the responsibilities I think we each have as part of the SV community.

Softer Eyes

When someone says something you disagree with or you see a tweet that sounds dumb as fuck, what if you read the whole thing in context? Or assumed the person behind the idea isn’t a terrible racist, SJW, tech bro, whatever but a person. A work in progress. Someone who probably has some redeeming qualities too. Doesn’t mean you need to engage them. Doesn’t mean you need to be their buddy. Doesn’t mean you need to tolerate despicable beliefs, but let’s try to separate limited worldviews or naïveté from truly horrible individuals. [note: I’m a bit of a hypocrite here because ideas such as “not all Trump voters are racists but they didn’t find his racist beliefs or enablers to be disqualifying for their support” personally resonate with me. So yeah, it’s an aspirational responsibility but tough to implement fully.]

You Don’t Have to Fucking Talk About Everything to Everyone

You’re not a fucking expert about everything. Maybe sometimes it’s ok to listen, to read, to evolve and experience versus pontificating. Communication is listening, not speaking. Your desire to share might be rooted in your own desire for attention not always some joyous quest for knowledge or intellectual rigor. “I tweeted something dumb and now people are mad at me.” The problem might not be the second half of the sentence.

We Can Disagree About Many Things and Still Be Friends

Yeah, it’s possible. We don’t have to be ideologically perfect matches in order for me to work with you, respect you, or be interested in your ideas.

Do the Work to Understand Why You Might Be Wrong

It’s so healthy to ask someone why they believe what they do – not because you’re looking for a way to attack them to win the argument but because you want to inhabit their eyes for a moment. Present your point of view and ask them where they believe you’re wrong or why they feel differently. Never assume your truth is an unqualified truth.

There. This is what I think we’re talking about when we’re talking about what we can’t talk about.

You Can’t Bullshit a Good VC, But You Can MAKE THEM BELIEVE

It’s the season of giving, so when my friend Nathan Bashaw asked for follow-up on my last post, well, I cracked open WordPress to deliver! Nathan wanted my POV on the difference between bullshitting to investors versus telling a BIG VISION CRAZY STORY.

So here we go…..

Bullshitting is telling prospective investors one story – what you think they want to hear and you’re not really committed to – while telling your team another story. Making Them Believe is articulating what your company has the chance to become, even if it’s dependent on lots of hard work and there’s lots of unknown between now and then. Remember, VCs are listening for what could happen if things go right, not wrong.

Bullshitting is building a spreadsheet with assumptions which are all 2-10x better than what the marketplace sees today. Making Them Believe is building a model which starts with actual and/or achievable numbers and shows how you get leverage over time – i.e. this is a great company already, but look at the leverage we get over time, which impacts [growth, pricing power, CAC, LTV, etc], and how those additional basis points drop straight to the bottom line.
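To illustrate the difference, here’s a toy version of that kind of model – every number below is invented for illustration, not a benchmark:

```python
# Toy model of operating leverage: revenue compounds while fixed costs
# grow slowly, so each incremental dollar contributes more to the
# bottom line. All assumptions are made up for illustration.
revenue = 2.0          # $M ARR today (actual, not aspirational)
gross_margin = 0.70
fixed_costs = 1.8      # $M per year
revenue_growth = 0.60  # 60% YoY -- strong but achievable
cost_growth = 0.15     # costs scale far slower than revenue

for year in range(1, 6):
    revenue *= 1 + revenue_growth
    fixed_costs *= 1 + cost_growth
    operating_income = revenue * gross_margin - fixed_costs
    print(f"Year {year}: revenue ${revenue:.1f}M, "
          f"operating margin {operating_income / revenue:+.0%}")
```

Starting from real numbers, the margin expansion does the storytelling for you; starting from fantasy numbers, the same spreadsheet is just bullshit with formatting.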

Bullshitting is fucking with your graphs – scale, timeframe, axes – in order to produce a desired visual. Making Them Believe is walking the VC through the inflection points in your historical data and demonstrating the insights from the journey, the ability to double down when something is working. Bonus points for giving the investor access to the raw data itself if they’d like.

Bullshitting is name-dropping potential hires, advisors or puffing up your previous accomplishments in ways that won’t stand up to off-sheet reference checks. Making Them Believe is articulating how you’ve punched above your weight in hiring so far and why you’re going to be a place where the best in the industry will want to work, or how you’re superior at spotting talented people early in their careers and betting on them before others.

Bullshitting is anything which you say only to get funded. Making Them Believe is anything you share that makes people want to fund you.


Don’t Be This Guy

To Raise a Venture Round These Days, You Need To Be a Little Crazy

None of our portfolio companies seeking additional dollars in 2017 had a “standard” venture fundraise. You know, the one where you advise the company to “plan to take 2-3 months to get one or more term sheets and then another month to close.” Zero. Every one of them was feast or famine: 2-4 weeks to multiple term sheets (sometimes under a week!), or 4+ months of meetings and milestones before finding the additional capital (or in one or two cases, *not* finding the additional dollars) they needed.

The startups which took longer were mostly very solid businesses with quality teams. Companies where we would have certainly done our pro rata in a new round. My partner Satya summarized part of our reaction in this tweet:

And while I agree with him on a meta level, there’s an attribute of the companies that raised super quickly which I think this second cohort lacked: the ones with the most competitive raises were CRAZY. They had CRAZY stats or CRAZY vision (or both).


What do I mean by CRAZY in this case? Evidence of being an outlier as in “that MoM GROWTH RATE is CRAZY.”

When a VC is seeing dozens of SaaS companies every month, just hitting standard ARR milestones doesn’t get you the term sheet. But coming in with numbers that are 2x everyone else? That gets you noticed. You need to have outperformed, even if it took you a little more capital and a few more months.

If your numbers are solid but not CRAZY, you definitely need a CRAZY vision. You need to be telling a story about what happens if it all works that makes an investor lean forward. You need to have a personal presence which conveys that you are going to put this team on your back and get to victory no matter what. You need to not just be sincere but to have some sizzle, to tell a very good story. For some founders this is uncomfortable – they like staying within the realm of reality. But I’m telling you, embrace the discomfort and TELL THE STORY. It’s not about bullshitting, it’s not about lying, it’s not about smoke and mirrors, but it is about MAKING THEM BELIEVE.

The best thing seed founders can do right now in preparing for a fundraise – and their investors should be helping – is validating whether their numbers are crazy or not. If they’re not, consider whether you want to raise a smaller amount to get further before going out on the circuit. We often work with our seed companies to get them the extra $500k – $1.5m they could use to achieve CRAZY.

And practice practice practice on how you tell the story. Find people you trust – your cofounders, fellow CEOs, your investors – and let them really give you feedback. No mojo, no dollars these days. No VC is going to believe more than you do.

Internet Content Moderation 101

Since Facebook, Twitter and YouTube have all been vocal (to various degrees) about staffing up the human element of their content moderation teams, here are a few things to understand about how these systems typically work. Most of this is based on my time at YouTube (which ended almost five years ago, so nothing here should be considered a definitive statement of current operations), but I found our peer companies approached it similarly. Note, I’m going to focus on user generated/shared content, not advertising policies. It’s typical that ads have their own, separate criteria. This is more about text, images & video/audio that a regular user would create, upload and publish.


What Is Meant By Content Moderation

Content Moderation or Content Review refers to the review of content (text, images, audio, video) that a user has uploaded, published or shared on a social platform. It’s distinct from Ads or Editorial (eg finding content on the site to feature/promote, if such a function exists within an org), which typically have separate teams and guidelines for when they review content.

The goal of most Content Moderation teams is to enforce the product’s Community Standards or Terms of Service, which state what can and cannot be shared on the platform. As you might guess, there are black, white and gray areas in all of this, which means there are guidelines, training and escalation policies for human reviewers.

When Do Humans Get Involved In The Process

It would be very rare (and undesirable) for humans to (a) review all the content shared on a site and (b) review content pre-publish – that is, when a user tries to share something, having it “approved” by a human before it goes live on the site/app.

Instead, companies rely upon content review algorithms which do a lot of the heavy lifting. The algorithms attempt to “understand” the content being created and shared. At point of creation there are limited signals – who uploaded it (account history or lack thereof), where it was uploaded from, the content itself and other metadata. As the content exists within the product more data is gained – who is consuming it, is it being flagged by users, is it being shared by users and so on.

These richer signals factor into the algorithm continuing to tune its conclusion about whether a piece of content is appropriate for the site or not. Most of these systems have user flagging tools which factor heavily into the algorithmic scoring of whether content should be elevated for review.

Most broadly, you can think about a piece of content as being Green, Yellow or Red at any given time. Green means the algorithm thinks it’s fine to exist on the site. Yellow means it’s questionable. And Red, well, red means it shouldn’t be on the site. Each of these designations is fluid and not perfect. There are false positives and false negatives all the time.

To think about the effectiveness of a Content Policy as *just* the quality of the technology would be incomplete. It’s really a policy question decided by people and enforced at the code level. Management needs to set the thresholds for the divisions between Green, Yellow and Red. They determine whether an unknown new user should default to being trusted or not. They decide how to prioritize human review of items in the Green, Yellow or Red buckets. And that’s where humans mostly come into play…
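As a rough sketch of “policy decided by people, enforced at the code level” – the thresholds, weights and signal names here are placeholders I made up, not any platform’s actual values:

```python
from dataclasses import dataclass

# Policy knobs set by management, not by the model
GREEN_THRESHOLD = 0.2   # below this risk score, content stays up untouched
RED_THRESHOLD = 0.9     # at or above this, content comes down
TRUST_NEW_USERS = True  # default disposition toward unknown accounts

@dataclass
class Content:
    account_age_days: int
    user_flags: int      # how many users have flagged it
    model_risk: float    # classifier output in [0, 1]

def risk_score(c: Content) -> float:
    """Blend the model's view with behavioral signals. User flags weigh
    heavily; new accounts may get less benefit of the doubt."""
    score = c.model_risk
    score += min(c.user_flags * 0.05, 0.3)
    if c.account_age_days < 7 and not TRUST_NEW_USERS:
        score += 0.1
    return min(score, 1.0)

def bucket(c: Content) -> str:
    s = risk_score(c)
    if s < GREEN_THRESHOLD:
        return "GREEN"    # fine to exist on the site
    if s >= RED_THRESHOLD:
        return "RED"      # shouldn't be on the site
    return "YELLOW"       # questionable -> candidate for human review

print(bucket(Content(account_age_days=2, user_flags=4, model_risk=0.5)))  # YELLOW
```

Note that moving GREEN_THRESHOLD or RED_THRESHOLD by a few hundredths is a management decision that can swing millions of items into or out of human review – the code just enforces it.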

What’s a Review Queue?

Human reviewers help create training sets for the algorithms but their main function is continually staffing the review queues of content that the algorithm has spit out for them. Queues are typically broken into different buckets based on priority of review (eg THIS IS URGENT, REVIEW IN REAL TIME 24-7) as well as characteristics of the reviewers – trained in different types of content review, speak different languages, etc. It’s a complex factory-like system with lots of logic built in.

The amount of content coming onto the platform and the algorithmic thresholds that trigger a human review determine how much content goes into a review queue. The number of human reviewers, their training/quality, and the effectiveness of the tools they work in determine the speed with which content gets reviewed.

So basically, when you hear about “10,000 human reviewers being added,” it can mean (a) MORE content is going to be reviewed [thresholds are being changed to put more content into review queues] and/or (b) review queue content will be reviewed FASTER [same content but more humans to review].
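Here’s a toy version of that queue routing – the priorities, reviewer attributes and matching logic are all hypothetical simplifications:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueueItem:
    priority: int                # lower number = review sooner (0 = URGENT, 24-7)
    content_id: str = field(compare=False)
    language: str = field(compare=False, default="en")

class ReviewQueue:
    """A priority queue of algorithm-flagged content; reviewers are
    matched by training and language."""
    def __init__(self) -> None:
        self.items: list[QueueItem] = []

    def enqueue(self, item: QueueItem) -> None:
        heapq.heappush(self.items, item)

    def next_for(self, reviewer_languages: set[str]) -> QueueItem | None:
        # Naive scan; real systems shard into per-language/per-type queues
        eligible = [i for i in self.items if i.language in reviewer_languages]
        if not eligible:
            return None
        top = min(eligible)
        self.items.remove(top)
        heapq.heapify(self.items)
        return top

q = ReviewQueue()
q.enqueue(QueueItem(0, "vid_123"))                  # urgent, review now
q.enqueue(QueueItem(5, "img_456", language="de"))   # routine, German reviewer
print(q.next_for({"en"}))  # -> the urgent English item comes out first
```

Inflow is set by content volume and thresholds; throughput is set by headcount, training and tooling – which is why “we added 10,000 reviewers” only tells you half the story.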

Do These Companies Actually Care About This Stuff

The honest answer is Yes But….

Yes but Content Operations is typically a cost center, not a revenue center, so it gets managed as a cost to be contained and can be starved for resources.

Yes but Content Operations can sometimes be thought of as a “beginner” job for product managers, designers, engineers so it gets younger, less influential staffing which habitually rotates off after 1-2 years to a new project.

Yes but lack of diversity and misaligned incentives in senior leadership and teams can lead to an under-assessing of the true cost (to brand, to user experience) of “bad” content on the platform.

Why Straight-Up Porn Is The Easiest Content To Censor…But Why “Sexual” Content Is Tough

Because there are much better places to share porn than Twitter, Facebook and YouTube. And because algorithms are actually really good at detecting nudity. However, content created for sexual gratification that doesn’t expressly have nudity involved is much tougher for platforms. Did I ever write about creating YouTube’s fetish video policy? That was an interesting discussion…

What Are My ‘Best Practices’ for Management To Consider?

  1. Make it a dashboard level metric – If the CEO and her team are looking at content safety metrics alongside usage, revenue and so on, it’ll prove that it matters and it’ll be staffed more appropriately.
  2. Talk in #s not percentages – These HUGE platforms always say “well, 99% of our content is safe,” but what that actually means is “1% of a gazillion is still a really large number.” The minimization framing – which is really a PR thing – betrays the true goal of taking this stuff seriously.
  3. Focus on preventing repeat infringement and recovering quickly from initial infringement – No one expects these systems to be perfect and I think it’s generally good to trust a user until they prove themselves to be non-trustworthy. And then hit them hard. Twitter feels especially poor at this – there are so many gray-area users on the system at any given time.
  4. Management should spend time in the review queues – When I was leading product at YouTube I tried to habitually spend time in the content review queues because I didn’t want to insulate myself from the on-the-ground realities. I saw lots of nasty stuff but also maintained an appreciation for what our review teams and users had to go through.
  5. Response times are the new regulatory framework – I wonder if there’s a role for our government to regulate not content itself but the response time to content flagging. There’s a ton of complexity here, and regulations can create incentives to *not* flag content, but it’s an area I’m noodling on.

Hope that helps folks understand these systems a bit more. If you have any questions, reach out to me on Twitter.

Update: My friend Ali added some great best practices on how you treat the content reviewers!
