You Can’t Bullshit a Good VC, But You Can MAKE THEM BELIEVE

It’s the season of giving, so when my friend Nathan Bashaw asked for follow-up on my last post, well, I cracked open WordPress to deliver! Nathan wanted my POV on the difference between bullshitting to investors versus telling a BIG VISION CRAZY STORY.

So here we go…..

Bullshitting is telling prospective investors one story – what you think they want to hear but aren't really committed to – while telling your team another story. Making Them Believe is articulating what your company has the chance to become, even if getting there depends on lots of hard work and plenty of unknowns between now and then. Remember, VCs are listening for what could happen if things go right, not wrong.

Bullshitting is building a spreadsheet with assumptions which are all 2-10x better than what the marketplace sees today. Making Them Believe is building a model which starts with actual and/or achievable numbers and shows how you get leverage over time – ie this is a great company, but look at the leverage we get over time which impacts [growth, pricing power, CAC, LTV, etc] and how these additional basis points drop straight to the bottom line.
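
To make that concrete, here's a toy version of the kind of model I mean – start from actual numbers, then let modest, compounding improvements show where the leverage comes from. Every figure below is a made-up placeholder, not a benchmark:

```python
# Toy fundraising model: real(ish) starting numbers, then a few points of
# annual improvement in gross margin and S&M efficiency compounding over time.
# All inputs are illustrative placeholders, not benchmarks.

years = 5
revenue = 2.0          # $M ARR today (actual)
growth = 0.80          # YoY growth off a real base, decaying each year
gross_margin = 0.60    # today's gross margin
margin_gain = 0.03     # +3 points of margin per year from scale
sm_spend_pct = 0.70    # sales & marketing as % of revenue
sm_improvement = 0.05  # -5 points of S&M per year as CAC/LTV improves

for year in range(1, years + 1):
    revenue *= 1 + growth
    growth *= 0.8                       # growth decays as the base grows
    gross_margin += margin_gain
    sm_spend_pct -= sm_improvement
    contribution = revenue * (gross_margin - sm_spend_pct)
    print(f"Year {year}: revenue ${revenue:.1f}M, "
          f"margin {gross_margin:.0%}, S&M {sm_spend_pct:.0%}, "
          f"contribution ${contribution:.1f}M")
```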

Bullshitting is fucking with your graphs – scale, timeframe, axes – in order to produce a desired visual. Making Them Believe is walking the VC through the inflection points in your historical data and demonstrating the insights from the journey, the ability to double down when something is working. Bonus points for giving the investor access to the raw data itself if they'd like.

Bullshitting is name-dropping potential hires and advisors, or puffing up your previous accomplishments in ways that won't stand up to off-sheet reference checks. Making Them Believe is articulating how you've punched above your weight in hiring so far and why you're going to be a place where the best in the industry will want to work, or how you're superior at spotting talented people early in their careers and betting on them before others do.

Bullshitting is anything which you say only to get funded. Making Them Believe is anything you share that makes people want to fund you.

Don’t Be This Guy

To Raise a Venture Round These Days, You Need To Be a Little Crazy

None of our portfolio companies seeking additional dollars in 2017 had a "standard" venture fundraise. You know, the one where you advise the company "plan to take 2-3 months to get one or more term sheets and then another month to close." Zero. Every one of them was feast or famine: 2-4 weeks to multiple term sheets (sometimes under a week!) or 4+ months of meetings and milestones before finding the additional capital (or in one or two cases, *not* finding the additional dollars) they needed.

The startups which took longer were mostly very solid businesses with quality teams. Companies where we would have certainly done our pro rata in a new round. My partner Satya summarized part of our reaction in this tweet:

And while I agree with him at a meta level, there's equally an attribute that the companies which raised super quickly shared and this second cohort lacked: the ones with the most competitive raises were CRAZY. They had CRAZY stats or CRAZY vision (or both).


What do I mean by CRAZY in this case? Evidence of being an outlier as in “that MoM GROWTH RATE is CRAZY.”

When a VC is seeing dozens of SaaS companies every month, just hitting standard ARR milestones doesn’t get you the term sheet. But coming in with numbers that are 2x everyone else? That gets you noticed. You need to have outperformed, even if it took you a little more capital and a few more months.

If your numbers are solid but not CRAZY, you definitely need a CRAZY vision. You need to be telling a story about what happens if it all works that makes an investor lean forward. You need to have a personal presence which conveys that you are going to put this team on your back and get to victory no matter what. You need to not just be sincere but to have some sizzle, to tell a very good story. For some founders this is uncomfortable – they like staying within the realm of reality. But I'm telling you, embrace the discomfort and TELL THE STORY. It's not about bullshitting, it's not about lying, it's not about smoke and mirrors, but it is about MAKING THEM BELIEVE.

The best thing seed founders can do right now in preparing for a fundraise – and their investors should be helping – is validating whether their numbers are crazy or not. If they're not, consider raising a smaller amount to get further before going out on the circuit. We often work with our seed companies to get them the extra $500k – $1.5m they could use to achieve CRAZY.

And practice practice practice on how you tell the story. Find people you trust – your cofounders, fellow CEOs, your investors – and let them really give you feedback. No mojo, no dollars these days. No VC is going to believe more than you do.

Internet Content Moderation 101

Since Facebook, Twitter and YouTube have all been vocal (to various degrees) about staffing up the human element of their content moderation teams, here are a few things to understand about how these systems typically work. Most of this is based on my time at YouTube (which ended almost five years ago, so nothing here should be considered a definitive statement of current operations), but I found our peer companies approached it similarly. Note, I’m going to focus on user generated/shared content, not advertising policies. It’s typical that ads have their own, separate criteria. This is more about text, images & video/audio that a regular user would create, upload and publish.


What Is Meant By Content Moderation

Content Moderation or Content Review is a term applied to content (text, images, audio, video) that a user has uploaded, published or shared on a social platform. It’s distinct from Ads or Editorial (eg finding content on the site to feature/promote if such a function exists within an org), which typically have separate teams and guidelines for when they review content.

The goal of most Content Moderation teams is to enforce the product's Community Standards or Terms of Service, which state what can and cannot be shared on the platform. As you might guess, there are black, white and gray areas in all of this, which means there are guidelines, training and escalation policies for human reviewers.

When Do Humans Get Involved In The Process

It would be very rare (and undesirable) for humans to (a) review all the content shared on a site and (b) review content pre-publish – that is, when a user tries to share something, having it “approved” by a human before it goes live on the site/app.

Instead, companies rely upon content review algorithms which do a lot of the heavy lifting. The algorithms attempt to “understand” the content being created and shared. At point of creation there are limited signals – who uploaded it (account history or lack thereof), where it was uploaded from, the content itself and other metadata. As the content exists within the product more data is gained – who is consuming it, is it being flagged by users, is it being shared by users and so on.

These richer signals help the algorithm continue to tune its conclusion about whether a piece of content is appropriate for the site or not. Most of these systems have user flagging tools, which factor heavily into the algorithmic scoring of whether content should be elevated for review.
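
If it helps to have a mental model for how those signals combine, here's a deliberately oversimplified sketch. The signal names and weights are invented for illustration – no platform's actual scoring looks this simple:

```python
# Hypothetical risk score combining upload-time signals (account history)
# with post-publish signals (flags relative to reach). Purely illustrative.

def risk_score(account_age_days, prior_strikes, user_flags, views, shares):
    score = 0.0
    # Upload-time signals: brand-new accounts and prior strikes start riskier.
    if account_age_days < 7:
        score += 0.2
    score += 0.15 * prior_strikes
    # Post-publish signals: flags matter more relative to how many people saw it.
    if views > 0:
        score += min(0.5, 5.0 * user_flags / views)
    # Wide organic sharing is treated as a weak trust signal in this toy model.
    if shares > 100:
        score -= 0.05
    return max(0.0, min(1.0, score))

print(risk_score(account_age_days=2, prior_strikes=1, user_flags=30, views=500, shares=3))
```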

Most broadly, you can think about a piece of content as being Green, Yellow or Red at any given time. Green means the algorithm thinks it's fine to exist on the site. Yellow means it's questionable. And Red, well, Red means it shouldn't be on the site. Each of these designations is fluid and imperfect. There are false positives and false negatives all the time.
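
A toy version of the bucketing, with made-up cutoff scores:

```python
# The buckets are just score ranges; where the cutoffs sit is a policy choice.
GREEN_BELOW = 0.3   # below this, content stays up with no review
RED_ABOVE = 0.8     # at or above this, content is pulled or blocked pending review

def bucket(score):
    if score < GREEN_BELOW:
        return "green"
    if score >= RED_ABOVE:
        return "red"
    return "yellow"

for s in (0.1, 0.5, 0.9):
    print(s, bucket(s))   # green, yellow, red
```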

To think about the effectiveness of a Content Policy as *just* the quality of the technology would be incomplete. It’s really a policy question decided by people and enforced at the code level. Management needs to set thresholds for the divisions between Green, Yellow and Red. They determine whether an unknown new user should default to be trusted or not. They conclude how to prioritize human review of items in the Green, Yellow or Red buckets. And that’s where humans mostly come into play…

What’s a Review Queue?

Human reviewers help create training sets for the algorithms but their main function is continually staffing the review queues of content that the algorithm has spit out for them. Queues are typically broken into different buckets based on priority of review (eg THIS IS URGENT, REVIEW IN REAL TIME 24-7) as well as characteristics of the reviewers – trained in different types of content review, speak different languages, etc. It’s a complex factory-like system with lots of logic built in.
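
A toy sketch of that routing logic – the priorities and categories are hypothetical, but the factory shape is the point:

```python
# Flagged content gets a priority and lands in a queue; reviewers are matched
# on language and training. Rules and categories here are invented examples.
import heapq

review_queue = []  # min-heap of (priority, sequence, item); lower = more urgent

def enqueue(item, seq):
    if item["bucket"] == "red" and item["category"] == "violence":
        priority = 0   # review in real time, 24-7
    elif item["bucket"] == "red":
        priority = 1
    else:
        priority = 2   # yellow / lower-risk items
    heapq.heappush(review_queue, (priority, seq, item))

enqueue({"id": "a1", "bucket": "yellow", "category": "spam", "language": "en"}, 0)
enqueue({"id": "b2", "bucket": "red", "category": "violence", "language": "pt"}, 1)

while review_queue:
    _, _, item = heapq.heappop(review_queue)
    print("route to", item["language"], "reviewer:", item["id"])
```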

The amount of content coming onto the platform and the algorithmic thresholds needed to trigger a human review determine how much content goes into a review queue. The number of human reviewers, their training/quality, and the effectiveness of the tools they work in determine how quickly that content gets reviewed.

So basically, when you hear about "10,000 human reviewers being added," it can mean (a) MORE content is going to be reviewed [thresholds are being changed to put more content into review queues] and/or (b) review queue content will be reviewed FASTER [same content but more humans to review it].
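
Back-of-the-envelope, the two levers look like this (every number below is invented for illustration):

```python
# What moves queue volume vs. review speed. All inputs are made up.
uploads_per_day = 5_000_000
review_rate = 0.02                 # share of uploads crossing the review threshold
items_per_reviewer_per_day = 200   # throughput of a trained reviewer

queued = uploads_per_day * review_rate
reviewers_needed = queued / items_per_reviewer_per_day
print(f"{queued:,.0f} items/day into queues -> {reviewers_needed:,.0f} reviewers to keep up")

# Lower the threshold and queued volume rises (more content reviewed);
# add reviewers and the same volume gets reviewed faster.
```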

Do These Companies Actually Care About This Stuff

The honest answer is Yes But….

Yes but Content Operations is typically a cost center, not a revenue center, so it gets managed to a cost exposure and can be starved for resources.

Yes but Content Operations can sometimes be thought of as a “beginner” job for product managers, designers, engineers so it gets younger, less influential staffing which habitually rotates off after 1-2 years to a new project.

Yes but lack of diversity and misaligned incentives in senior leadership and teams can lead to an under-assessing of the true cost (to brand, to user experience) of “bad” content on the platform.

Why Straight-Up Porn Is The Easiest Content To Censor…But Why “Sexual” Content Is Tough

Because there are much better places to share porn than Twitter, Facebook and YouTube. And because algorithms are actually really good at detecting nudity. However, content created for sexual gratification that doesn’t expressly have nudity involved is much tougher for platforms. Did I ever write about creating YouTube’s fetish video policy? That was an interesting discussion…

What Are My ‘Best Practices’ for Management To Consider?

  1. Make it a dashboard level metric – If the CEO and her team are looking at content safety metrics alongside usage, revenue and so on, it'll prove that it matters and it'll be staffed more appropriately.
  2. Talk in #s not percentages – These HUGE platforms always say "well, 99% of our content is safe" but what they're actually saying is "1% of a gazillion is still a really large number." The minimization framing – which is really a PR thing – betrays the true goal of taking this stuff seriously (quick math on this after the list).
  3. Focus on preventing repeat infringement and recovering quickly from initial infringement – No one expects these systems to be perfect, and I think it's generally good to trust a user until they prove themselves untrustworthy. And then hit them hard. Twitter feels especially poor at this – there are so many gray-area users on the system at any given time.
  4. Management should spend time in the review queues – When I was leading product at YouTube I tried to habitually spend time in the content review queues because I didn’t want to insulate myself from the on-the-ground realities. I saw lots of nasty stuff but also maintained an appreciation for what our review teams and users had to go through.
  5. Response times are the new regulatory framework – I wonder if there's a role for our government to not regulate content but to regulate response time to content flagging. There's a ton of complexity here and regulations can create incentives to *not* flag content, but it's an area I'm noodling about.
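
To make point 2 concrete, here's the arithmetic with a made-up volume number:

```python
# "99% safe" still leaves an enormous absolute number at platform scale.
daily_uploads = 500_000_000   # hypothetical pieces of content per day
violating_rate = 0.01         # the "only 1%"

print(f"{daily_uploads * violating_rate:,.0f} violating items per day")  # 5,000,000
```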

Hope that helps folks understand these systems a bit more. If you have any questions, reach out to me on Twitter.

Update: My friend Ali added some great best practices on how you treat the content reviewers!


For VCs, “What Could Go Right” Is More Important Than “What Could Go Wrong”

You ever notice how when someone leads off by saying, “Now, I don’t mean to overgeneralize but…” they almost always are overgeneralizing? Now, I don’t mean to overgeneralize but I want to tell you about something that reporters and pundits frequently get wrong when evaluating the venture-worthiness of a failed startup. They focus on the answer to the question “what are all the things which could have gone wrong here” versus “how valuable would this company have been if things went right?”

VCs are in the business of backing companies that have a substantial chance of failing, and the earlier you invest, the more likely you are to see a zero return on your capital. What offsets this is that the successes tend to be outsized, returning 20x, 50x, or even 100x+. The notion that tremendous value is created by a very small percentage of startups – and that the financiers behind these businesses are counting on a few of those companies to make up for all the nonperforming investments – is called a power law distribution. Heck, Satya and I could talk each other out of *any* investment at the seed stage – there's always something "wrong" with an opportunity – but our job is to invest, not to *not* invest.
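
If the power law framing is unfamiliar, here's a toy simulation of a fund's outcomes – the probabilities and multiples are invented, but they show why a couple of outliers carry everything:

```python
# Toy power-law-ish portfolio: most checks return little, a rare outlier returns the fund.
import random

random.seed(42)

def simulate_fund(checks=30):
    outcomes = []
    for _ in range(checks):
        r = random.random()
        if r < 0.50:
            outcomes.append(0)    # roughly half return ~nothing
        elif r < 0.85:
            outcomes.append(1)    # a chunk return capital
        elif r < 0.97:
            outcomes.append(5)    # a few solid wins
        else:
            outcomes.append(60)   # the rare outlier that drives the fund
    return outcomes

o = simulate_fund()
print(f"gross multiple: {sum(o) / len(o):.1f}x, outliers: {sum(x >= 60 for x in o)}")
```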


This means that when a venture capitalist evaluates a startup opportunity they are of course trying to understand all the reasons that the fragile little company could fail, but they’re actually more concerned about “how big can this be if it all works?” Or “what are the ambitions of the team – how do they define success?” There are plenty of very good, very valuable businesses which are still not venture scale. That’s fine — this post isn’t about whether venture is broken for depending on outsized outcomes or the tradeoffs a founder makes when deciding to go down the venture path. No, the point I’m making is that when a venture-backed company fails, it likely wasn’t that their investors didn’t realize the risks upfront but rather they were interested in the upside, not the downside.

Accordingly, punditry that just says "OMG, I can't believe this business got funded when nominally there are so many other ideas out there" or "duh, didn't the VCs know there was XYZ risk here" is kinda flat. A richer unpacking would be around whether the bull case actually warranted the capital – was it a reasonable risk to take, not merely whether there was risk.

Of course, played out to its extreme this would suggest any investor decision is beyond reproach, because with fuzzy enough math and juiced assumptions you can always make the numbers work on paper. That too would be a silly position, and we as an industry would lose access to self-reflection and documentation if we cursed at reporters for analyzing our failures. Instead, I'd lay out a framework for understanding the risk/reward analysis that went into an investment, and I believe reporting would improve if these questions were included:

For a failed startup….

  1. Was what caused the failure predictable or novel – ie were the risks ones that a reasonable person could have properly assessed upfront or did they emerge from changes in technology, market, regulation, etc.
  2. On what assumptions or scenarios was the "venture scale outcome" dependent, and how credible/achievable were those given the degree of difficulty in execution.
  3. The firms which invested – do they typically invest in businesses with similar risk profiles and have they succeeded notably, or was the firm stretching into a new area and/or not yet proven to be an astute assessor of risk/reward?

Of course this is difficult information for a reporter to gather and assess, especially in a “must publish now” culture. So I’m going to suggest something perhaps a little atypical: if a trusted reporter is doing good and fair analysis, the investors should be willing to chat on background about their decision making. Never to the betrayal of a founder’s confidence – we’re talking about post-mortems, not companies in motion – and not about specifically placing execution blame on anyone, but in “here’s what we were thinking and here’s what was right or not about that.” I’m positive many firms do versions of this internally, comparing back to their investment memo for the deal and updating their frameworks for the vertical.

Now, I don’t mean to overgeneralize but I think that could be good for founders, good for the community and good for the press to have that level of conversation with an investor.

Five Posts I Would Have Written If Someone Else Hadn’t

Flying back to San Francisco, spending some time in my Pocket since I rarely read anything when it’s actually published. Here are a bunch of posts that I enjoyed, most VC or startup related, so if you don’t care about that stuff, skip this.

The Angel’s in the Details – Andy Dunn, Walmart (Bonobos cofounder/CEO)

Andy is a ‘wears it on his sleeve’ type of guy and his writing is always passionate and personal. This post reminded me how much I hate when I hear a leader of any type say something like “well, you can’t expect me to know everything that’s going on” or “that happened below my paygrade” versus accepting responsibility.

Six Ways Great Companies Use Board Decks to Their Advantage – Union Square Ventures blog

Solid meat and potatoes post about good board decks. Board meetings can be some of the best discussions and informative sessions if founders and Directors focus on using them correctly. At early stage companies specifically they’re not about just managing your investors or putting on a show. And they’re not about making a CEO jump through hoops and burn a week of productivity prepping for a presentation. At Homebrew, we believe in great boards early.

Ruling Out Rather Than Ruling In – Jerry Neumann (angel investor)

Jerry is very thoughtful – and thought-provoking. He uses frameworks kinda like we do — not to prevent exceptions but to know when he’s making them. Here Jerry outlines how he evaluates an investment.

Part of a VC's Job is Making it on to The List – Christian Hernandez, White Star Capital

“The List” is different based on your stage and investment strategy but it holds true generally. In a competitive, power law industry, it’s not enough to have a checkbook, you need to be a preferred partner to some set of constituents. At our seed stage, much of the good dealflow is “dark” (ie not shopped broadly) so Homebrew needs to be on The List for a subset of founders and coinvestors who want to preference us.

What Founders Really Want From VCs – Fred Destin

When it comes down to it, there’s lots of “nice to haves” but in Fred’s opinion the MUST HAVES are actually pretty clear – do you have my back, are you insightful, can you help me recruit/close, and can you help me get funded.


What I’ve Learned When I Ask For Feedback

Something NEW and TRUE. Every time I've asked for feedback from those around me in a structured format, I've received a gift: a learning that was previously unknown to me (NEW) and, even if I want to deny it, 100% correct (TRUE). My first N&T arrived when I was in grad school as part of a semester-long T-group with a dozen or so of my classmates. There I learned about a way I was unintentionally creating resentment by making people feel judged and discarded. By acknowledging and understanding those reactions I was able to improve myself.

The second N&T emerged during a 360 Degree Assessment that I received upon making Director at Google. You know what I heard? That I was actually a pretty shitty manager of people who had different communication styles and motivations than I did. So again, I took it to heart and evolved (and also made sure I had managers in my org who were better at this than I would ever be). Always a work in progress you know….


My third N&T was delivered earlier this month as part of a feedback exercise that Satya and I did for Homebrew. We had a third party coach reach out to 36 of the CEOs we’d backed and about a dozen co-investors. She had conversations with everyone around a set of goals and expectations for how we seek to assist companies and build relationships with teams (which we provided to them in advance to make best use of time). As we approach Homebrew’s 5th anniversary next year, it was great to get structured feedback from our customers (the founders we back) and ensure that our roadmap in the years to come is tuned even more specifically to helping them succeed. We’re thankful to all the people who took time out of their schedules to help us.

Satya and I are still digesting the aggregated and anonymized data (and we shared a summary with our LPs) but mostly feel great about what we heard back, with a few areas to work on. My N&T this time was actually more positive than I'd expected – that our founders, by and large, knew I cared about them as human beings, not just as investments. I've always struggled with a feeling that my fondness for people didn't translate, that I felt more transactional to them than truly committed. Part of this has been due to my introversion, which sometimes causes me to not show up at events or to disappear suddenly, and part is just my manner, which tends to be more emotionally restrained. But I feel deeply about my friends and relationships, so I'm glad this has started to come through more tangibly.

So if anyone *hasn’t* experienced structured feedback from their peers, colleagues or customers, I’d strongly recommend figuring out how to experience this. It’s a wonderful way to get out of your own head and confront truths on how you’re impacting others.

Rafat Ali on Media Startups and the Nature of Venture

“1) VC money is not evil. 2) VC money is not sustainable.

Those are not contradictory statements.”

Those are Rafat Ali's words from a post he wrote Friday in reaction to the cascade of bad news for a bunch of venture-backed new media companies. While Rafat's expertise is concentrated in media (he started and sold Paid Content and now runs the vertical travel startup Skift), the statements above apply generally (we're investors in theSkimm and Cheddar, so clearly believe there is a place for venture here).

The most challenging aspect of taking venture capital is that it’s difficult to step off of the venture train once you’ve taken too much money, or negotiated too high a valuation, or gone too deep on executing a plan that requires high burn ahead of profitability. But complaining about that or assuming it’s fundamentally “evil,” is like getting married and then complaining it makes dating other people difficult. You knew what the ring meant when you put it on.


Side note: I generally enjoy Rafat’s perspectives on media, technology and culture. Here’s a 2016 podcast he did with Recode’s Peter Kafka that’s worth a listen.