
Twitter Shouldn’t Ban Trump, But It Could Show Him Baby Pictures Before He Tweets

The Twitter President. Farhad Manjoo writes that while Twitter has the right to ban Trump, it shouldn’t, or at least not based on his content to date. Currently I agree with Farhad – Trump uses the platform irresponsibly and without full care for the implications of what he says, but he doesn’t cross the “ban” line. [Related: Twitter has an opportunity to generally rethink what’s acceptable on its platform, and if it does, Trump’s tweets qualify for greater scrutiny.]

But Twitter DOES have the chance to impact Trump in another way: perhaps it could help him understand the implications of his actions, or at least force him to work through some friction before he tweets. This isn’t Trump-specific – in general, perhaps there are ways to influence the behavior of highly flagged accounts before banning them. Maybe it looks like “kindness training.”

The challenge: how would you design an online training or in-app experience which encourages people to be nicer and reshapes their view of what’s appropriate? Would you prime them by showing pictures of cute babies in between tweets? Would you give them a 10-screen training on how words can impact others negatively, which they need to complete every day? Would you have an interstitial which asks, “Would you say this in person to someone? If not, don’t tweet it.” A countdown timer that makes someone pause for 30 seconds by locking them out of the compose screen in the midst of a tweetwar?
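As a rough sketch of the countdown-plus-interstitial idea: the logic is just a heuristic gate before posting. Everything here is hypothetical – the thresholds, the `should_add_friction` heuristic, and the function names are all made up for illustration; Twitter exposes no such hooks.

```python
import time
from typing import Optional

COOLDOWN_SECONDS = 30   # hypothetical pause length during a heated thread
FLAG_THRESHOLD = 5      # hypothetical number of reports before friction kicks in


def should_add_friction(account_flags: int, replies_last_hour: int) -> bool:
    """Heuristic: heavily flagged accounts in a rapid back-and-forth get friction."""
    return account_flags >= FLAG_THRESHOLD and replies_last_hour >= 10


def compose_with_friction(draft: str, account_flags: int, replies_last_hour: int,
                          confirm=input, clock=time) -> Optional[str]:
    """Return the draft to post, or None if the user backs off at the interstitial."""
    if should_add_friction(account_flags, replies_last_hour):
        clock.sleep(COOLDOWN_SECONDS)  # lock the compose screen briefly
        answer = confirm("Would you say this in person to someone? (y/n) ")
        if answer.strip().lower() != "y":
            return None  # draft discarded, nothing posted
    return draft
```

The `confirm` and `clock` parameters are injectable only so the behavior can be tested without a real 30-second wait; in a product this would obviously live in the client UI, not a blocking call.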

I can’t yet support exorcising @realdonaldtrump from the platform, but I do wonder if thinking about how to influence his behavior could give Twitter another vector in how they deal with abuse generally. Besides, I do enjoy the idea of his stubby little fingers having to swipe through a baby picture slideshow at 3am before launching a Twitter rant.

“Why You Bought That Ugly Sweater” & How Stores Trick You Into Spending

I’ve always been fascinated by the science of sales, especially in a retail setting. “Why We Buy” by Paco Underhill is one of my favorite quick reads on the subject. You’ll never look at a store the same way again!

The Atlantic did a round-up of research into how retailers try to part us from our money. Here are some of my favorite findings:

  • “we perceive prices to be lower when they have fewer syllables and end with a 9”
  • “one recent study found that, compared with friendly salespeople, rude clerks caused customers with low self-confidence to spend more and, in the short term, to feel more positively toward an ‘aspirational brand’”
  • “when a customer who feels badly about her appearance tries something on and spots an attractive fellow shopper wearing the same item, she is less likely to buy it”
  • “One paper now under peer review shows that cooler temperatures indoors lead to a more emotional style of decision making, while warmth contributes to a more analytical approach”
  • “One study found that popular music leads to impulsive decisions, while lesser-known background music leads to focused shoppers”

“Why We’re Terrible At Reading Faces – Yet Quick To Judge Them”

“And yet, as bad as we are at reading expressions, we jump to all kinds of conclusions based on people’s faces.”

Paul Ekman did truly ground-breaking work into microexpressions, the nearly imperceptible changes in our faces that register pleasure, disgust and so on. Love this collection of academic studies via The Atlantic, summarizing some telling research into how we react to faces, expressions and related visual cues. Some of the most thought-provoking:

  • “People were ready to decide whether an unfamiliar face should be trusted after looking at it for just 200 milliseconds.”
  • “Another study reported that jurors needed less evidence to convict a person with an untrustworthy face”
  • “In another, when people watched silent videos of the same person experiencing pain and faking pain, they couldn’t tell which was which. A computer was correct 85 percent of the time”

 


Update to “My So-Called Virtual Life: The Assistants Which Power Hunter Walk”

A year ago I wrote about the virtual assistant products that had found their way into my life. Since the category was VERY hyped in 2015, and seemed to cool a bit this year (from an adoption/investment perspective), here’s how my usage has changed over the past 365 days.

Fancy Hands: STILL my go-to task-based assistant. I don’t use FH for meeting scheduling but instead rely upon them for a variety of requests. Recent ones include:

  • Submitting information to my insurance company around policy changes
  • Arranging car service when I’m traveling (and Uber isn’t the best solution for some reason)
  • A list of SF Holiday ballet and dance performances appropriate for kids to attend
  • Martial arts classes in SF that fathers and daughters can attend together
  • List of private chefs who specialize in preparing meals for recovering cancer patients (for a friend)

Facebook Messenger’s M: Held steady but narrowed. M isn’t currently suited for complex tasks where research or judgment impacts quality. M also won’t touch anything it deems medical-related, so no scheduling doctor appointments or even checking if a prescription is ready for pickup from the pharmacy. That said, I use M to schedule haircuts, make restaurant reservations and handle similar requests where a call and some information submission or retrieval is needed. I often queue these up before normal business hours and M addresses them once the businesses open.

Wonder: I use Wonder for B2B-ish research, but they’re really good for any type of research question where you could imagine a subject-matter expert needing 15–30 minutes to pull together an answer for you. Quality can really vary, but they’ll redo a project if you find the results insufficient. Use this URL to get yourself $15 off a task: https://askwonder.com/r/hunterwalk

GetService: Solves customer service issues for you. I don’t have these often but when I do, I turn to Service first. They just resolved a disputed hotel charge for me from my minibar with very little effort on my part. Still a free service.

What am I missing? Are there awesome virtual assistant services you use?


“If Animals Have Rights, Should Robots?”

I guess Westworld has made this a hot topic, but even better (or at least shorter) is this article, “If Animals Have Rights, Should Robots?”

It turns out that, for a host of reasons the author covers, we feel moral regret when we cause or observe pain, even if the recipient, such as a robot, can’t actually feel that pain.

At one point, a roboticist at the Los Alamos National Laboratory built an unlovable, centipede-like robot designed to clear land mines by crawling forward until all its legs were blown off. During a test run, in Arizona, an Army colonel ordered the exercise stopped, because, according to the Washington Post, he found the violence to the robot “inhumane.”

Or consider this experiment involving the Pleo, a “lifelike” robot dinosaur.


In an experiment that Darling and her colleagues ran, participants were given Pleos—small baby Camarasaurus robots—and were instructed to interact with them. Then they were told to tie up the Pleos and beat them to death. Some refused. Some shielded the Pleos from the blows of others. One woman removed her robot’s battery to “spare it the pain.” In the end, the participants were persuaded to “sacrifice” one whimpering Pleo, sparing the others from their fate.

Of course the ultimate issue isn’t that we fear the robots are going to become sentient and revolt but rather “The problem with torturing a robot, in other words, has nothing to do with what a robot is, and everything to do with what we fear most in ourselves.”

How We Almost Gamified Copyright Infringement Detection on YouTube (& Ideas for Fake News)

Over the past decade YouTube has constructed one of the most efficient and useful copyright management systems [dramatic tone] EVER CREATED BY MAN. The company works closely with rights holders of all sizes to identify fan-uploaded clips which may contain third party video or audio assets, and presents a set of simple business choices – take the infringing clip down (or “mute” the music, if it’s just audio infringement); monetize the clip on behalf of the rights owner; or just track the clip but take no action. The path of least resistance would have been to build a DMCA Takedown Engine but instead the talented team at YouTube solved an enormous challenge in a much more productive manner.

While this system was in its infancy, a group of us were brainstorming creative approaches to copyright management outside of this more complex design. One engineering lead started riffing on an idea that has always stuck with me: build a version of Hollywood Stock Exchange (HSX), but for copyright violating content. HSX is a site where you can trade movies (and actors & actresses) like stocks — the value is related to the box office dollars. So, for example, if you think the next Star Wars movie is going to be a huge hit, you’d “buy” virtual shares in the movie and sell at a higher price once it breaks all attendance records. At scale HSX may also function as a prediction market if you believe the participants as a whole are creating an efficient view into the enthusiasm (or lack thereof) the viewing public will have for a film.

Here’s what we back-of-the-napkin’ed for YouTube:

  1. When a video was uploaded into the system, its “IPO” price would be determined by a number of algorithmic factors such as performance of other videos from that account, the type of content it contained, the mix of referral sources and so on.
  2. Over time, a video’s “price” directly correlated to its view count. So if you saw a video with a low viewcount that you thought was going to go “viral,” you’d want to “buy” it in hopes that its value would skyrocket once it became popular.
  3. However, if a video was eventually removed from the system because it was infringing on someone else’s copyright, the “price” would go to zero. And in the process, you’d lose all the virtual currency you had invested in that video.
  4. Participants (registered YouTube users) would start with a slug of virtual currency with which to start their video portfolio and you’d go from there.
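The back-of-the-napkin mechanic above can be sketched in a few lines. This is purely illustrative – the pricing formula, class names and numbers are all my invention (the post is explicit that this was never implemented).

```python
# Toy sketch of the HSX-style copyright market: IPO price, price tracking
# view count, and takedown wiping out positions. All formulas hypothetical.

class VideoListing:
    def __init__(self, ipo_price: float):
        self.price = ipo_price   # step 1: "IPO" price from algorithmic factors
        self.removed = False
        self.holdings = {}       # user -> shares held in this video

    def update_price(self, view_count: int) -> None:
        # Step 2: price directly correlated to view count; scale is arbitrary.
        if not self.removed:
            self.price = max(1.0, view_count / 1000)

    def buy(self, user: str, shares: int, wallet: dict) -> None:
        # Step 4: participants spend virtual currency to build a portfolio.
        cost = shares * self.price
        if not self.removed and wallet.get(user, 0) >= cost:
            wallet[user] -= cost
            self.holdings[user] = self.holdings.get(user, 0) + shares

    def takedown(self) -> None:
        # Step 3: infringing video removed -> price to zero, positions wiped out.
        self.removed = True
        self.price = 0.0
        self.holdings.clear()
```

The interesting signal falls out for free: a listing whose `price` keeps climbing while `holdings` stays thin is exactly the "rising view count, few buyers" pattern the next paragraph describes as a removal-risk flag.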

Using this mechanic, a video whose view count was climbing quickly but which had few buyers *should* signal a risk that the video would be removed. Just as YouTube relied upon its community to proactively flag content which violated the site’s ToS around nudity, violence, and so on, perhaps this game mechanic could assist in signaling what *may* be copyright infringement. Basically, we talked about this once or twice, usually with alcohol involved, but never implemented it.

[note: there are a number of reasons we didn’t want to be in the business of proactively reviewing content for copyright determination. YouTube’s practices around the DMCA were upheld by courts. Also hold aside questions around whether this game would result in incentives to spam the viewcount of videos you “owned,” etc.]

Sooooo, Facebook and Fake News.

There are lots of smart discussions online these days about whether (and how) Facebook should evolve to help counter fake news from spreading on its platform. Most ideas involve some combination of URL blacklists, machine learning as to what fake content looks like and user flagging. It’ll be interesting to see what Zuck and team choose to do – and I think they have a responsibility because it’s directly linked to the trust many users have in Newsfeed.

But how would you turn identifying Fake News on Facebook into a game? Would there be a way to gamify fact-checking? It would be much more difficult in some ways than the YouTube scenario, because Fake News tends to get stuck within echo chambers where its consumers *want* it to be real and may otherwise be disincentivized to play the adjacent metagame.

Any ideas?