Hey there, hope you have a great week. Enjoy the newsletter.
Articles to Read:
We knew we were taking risks when we skied. We knew that going out of bounds was wrong, and that we might get caught. But at 17 years old we figured the consequences of risk meant our coaches might yell at us. Maybe we’d get our season pass revoked for the year.
Never, not once, did we think we’d pay the ultimate price.
But once you go through something like that, you realize that the tail-end consequences – the low-probability, high-impact events – are all that matter.
In investing, the average consequences of risk make up most of the daily news headlines. But the tail-end consequences of risk – like pandemics and depressions – are what make the pages of history books. They’re all that matter. They’re all you should focus on. We spent the last decade debating whether economic risk meant the Federal Reserve set interest rates at 0.25% or 0.5%. Then 36 million people lost their jobs in two months because of a virus. It’s absurd.
Tail-end events are all that matter.
Once you experience it, you’ll never think otherwise.
Change happens slowly, then all at once.
What’s going to be newsworthy by the end of the year is not technology companies saying they’re embracing distributed work, but those that aren’t. Those who thought this couldn’t work have been forced by the pandemic to do it anyway, and they’ve now seen that it’s possible.
It was probably terrible at first, but now, two or three months in, it’s gotten better. We’ve learned and adapted, and will continue to do so. Necessity breeds invention. I promise you if you stick with it, you’ll progress through the levels of distributed autonomy. Over time people will be able to move houses, tweak furniture, buy equipment, upgrade their internet, and otherwise adapt to being more productive in a distributed environment than they ever could be in an office. Products and services are being developed all around the world that will make it even better. I’m so excited about how a majority of the economy going distributed will improve people’s quality of life, and unlock incredible creativity and innovation at work. (They go hand in hand.)
At some point, we’ll break bread with our colleagues again, and that will be glorious. I can’t wait. But along the way we’ll discover that things we thought were impossible were just hard at first, and got easier the more we did them. Some will return to physically co-working with strangers, and some employers trapped in the past will force people to go to offices, but the illusion that the office was about work will be shattered forever, and companies that hold on to that legacy will be replaced by companies that embrace the antifragile nature of distributed organizations.
“In life the challenge is not so much to figure out how best to play the game; the challenge is to figure out what game you’re playing.”
One great portfolio manager I know told the story of being driven somewhere by an analyst on a rainy night when a truck swerved and almost ran them off the road. “Why is stuff like this always happening to me?” the analyst instinctually responded. But to the portfolio manager, that response reflected a terrible mindset, whether on the road or in the market: a sense that the world is acting on you as opposed to your acting on the world. It is a mindset that is hard to change. But from what I’ve seen, great investors don’t have it. Instead, they’ve come to understand which factors in the market they can control and which factors they cannot.
One way to relocate your locus of control is to frame investing (and even life more generally) as a game. This allows you to experience luck as luck, to separate the hand you drew from the playing of that hand. As David Milch, the creator of the HBO show Deadwood, put it, he realized late in life that “it’s the way you learn to play the cards you’ve been dealt, rather than the hand itself, that determines the worth of your participation in the game.”
We live at a time of universal polymathy. We don’t know everything, but there is little difficulty in discovering any given truth. But it’s worth remembering just how hard it used to be to find things out. Thirty years ago, if you wanted to research off your own bat, it meant a trip to the public library — and perhaps filling out a form for an inter-library loan. Or you could try your luck in a bookshop, new or secondhand. The whole process took a long time, and most people stayed within their professional competence or enthusiasm, frankly admitting to ignorance outside those limits.
These days, thanks to the internet, research takes just a moment, though it may be grossly inaccurate. A reliable book on the subject can be downloaded in seconds or an out-of-print volume ordered. Whether you can understand the argument once you’ve read it is another matter, though you can always pretend. The past few weeks have turned many of us into amateur epidemiologists with decided views on R numbers, while lovers of political theories of control happily bandy about what Giorgio Agamben said about ‘the state of exception’. And there are countless hobbyists now learning to cook, sew, speak Arabic or master the art of origami (I am learning the mandolin.) Even in the very recent past this would have been impossible — and we tended to be left in awe of Maynard Keynes’s knowledge of economics and post-impressionist painting, for instance, or of Gladstone’s expertise in theology.
So one must conclude that the person who achieves true distinction in many disciplines is someone different. Goethe genuinely advanced fields of scientific inquiry such as geology and colour theory; Nabokov is always said to have been an eminent entomologist. Leonardo da Vinci, naturally, is an obvious candidate, with his speculative drawings about engineering projects, though Michelangelo (strangely not mentioned by Burke) was probably just as successful a polymath, achieving masterpieces of the first rank in painting, sculpture, architecture and poetry. Beyond a handful of freaks such as these, we find a lot of experts who dabbled in something else — and we are left trying to admire the paintings of Churchill and Strindberg or the novels of C.P. Snow. If Gladstone had been just a theologian, nobody would have given him a second thought.
What's the right approach to new products? Pick three key attributes or features, get those things very, very right, and then forget about everything else. Those three attributes define the fundamental essence and value of the product -- the rest is noise. For example, the original iPod was: 1) small enough to fit in your pocket, 2) able to hold many hours of music, and 3) easy to sync with your Mac (most hardware companies can't make software, so I bet the others got this wrong). That's it -- no wireless, no ability to edit playlists on the device, no support for Ogg -- nothing but the essentials, well executed.
By focusing on only a few core features in the first version, you are forced to find the true essence and value of the product. If your product needs "everything" in order to be good, then it's probably not very innovative (though it might be a nice upgrade to an existing product). Put another way, if your product is great, it doesn't need to be good.
Making the iPad successful is Apple's problem though, not yours. If you're creating a new product, what are the three (or fewer) key features that will make it so great that you can cut or half-ass everything else? Are you focusing at least 80% of your effort on getting those three things right?
Today’s software apps are like appliances: we can only use the capabilities exactly as programmed by the developer. What if we, and all computer users, could reach in and modify our favorite apps? Or even create new apps on the fly according to our needs in the moment?
This is end-user programming, a vision for empowered computing pursued by bright-eyed computer science visionaries. Its rich history reaches back to the 1960s with programming environments like Smalltalk and Logo. Notable successes since then include Unix, the spreadsheet, Hypercard, and HTML. And today, newcomers like Zapier, Coda, and Siri Shortcuts are trying their own approaches to automation and dynamic modeling.
But despite forty years of commercial products, open source, and deep academic work, we have yet to reach an end-user programming utopia. In fact, the opposite: today our computing devices are less programmable and less customizable than ever before.
This article tackles the question of why this is so.
In all seriousness, I used to think that making predictions that turned out embarrassingly wrong was bad for your image, but I’ve slowly begun to realize that most people, and especially devout followers, don’t really care.
Why? Because it is so easy to rationalize the failed predictions after the fact without causing any long-term reputational damage.
Despite my cynicism throughout this post, there is a silver lining in all this. That silver lining is that people don’t care about your failed predictions because they don’t really care about your failures.
The world isn’t watching you as closely as you think. Take more chances. Your failures will mostly go unnoticed. And those who do notice rarely care.
When people don’t like your work, the response isn’t negativity, but silence.
More to Check Out:
- Organ Transplants Down As Stay-At-Home Rules Reduce Fatal Traffic Collisions
- US birthrates fall to record lows
- So much of academia is about connections and reputation laundering
- The State of Internal Tools
- The case for national paid maternity leave
The days are certainly blurring (working -> running -> reading/watching -> repeat). How are you bringing spontaneity into your life? I turned 23 last week.