Tonight's Accelerate session centered around the concepts of uncertainty and risk. Not understanding (or misunderstanding) these concepts often leads to time and effort wasted on over- or under-optimization. Two key ideas, and a matrix combining them, provide a basic framework for understanding what risk is and how to deal with it.
First, let's talk about uncertainty. Broadly, an uncertainty is something that you don't know. An AIM Institute video breaks down the certainty-uncertainty spectrum into four buckets, making up the acronym FAQS:
Facts: "we know what we know." This is information that we are confident is true.
Assumptions: "we know what we think." This is information that we are not confident is true, but choose to treat as true based on the evidence we have.
Questions: "we know what we don't know." Questions represent information that we want but aren't yet confident enough about to classify as a fact or an assumption.
Surprises: "we don't know what we don't know." At the maximum of uncertainty, surprises represent information that we don't know, don't seek, and don't expect, but that is true nonetheless.
These buckets lie on a spectrum from full certainty to complete uncertainty. Specific uncertainties won't always fall neatly into these buckets, of course; assigning them to one bucket is a decision in and of itself. But the framework offers a highly practical method for reducing uncertainty: identify what pieces of information roughly fall into different buckets, and take action to move them towards more certain buckets.
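To make the buckets concrete, here's a minimal sketch of the FAQS spectrum as a classification rule. The bucket names and their "we know what we know" descriptions come from the video; the function, its parameters, and the decision order are my own illustration, not something presented in the session:

```python
from enum import Enum

class Bucket(Enum):
    """The four FAQS buckets, ordered from most to least certain."""
    FACT = "we know what we know"
    ASSUMPTION = "we know what we think"
    QUESTION = "we know what we don't know"
    SURPRISE = "we don't know what we don't know"

def classify(aware_of_it: bool, confident_its_true: bool, have_evidence: bool) -> Bucket:
    """Roughly place a piece of information on the certainty-uncertainty spectrum."""
    if not aware_of_it:
        return Bucket.SURPRISE      # we don't even know to ask about it
    if confident_its_true:
        return Bucket.FACT          # confident it's true
    if have_evidence:
        return Bucket.ASSUMPTION    # not confident, but operating as if it's true
    return Bucket.QUESTION          # a known unknown we still want answered

# e.g. "our main assumption about the project will hold": known, some evidence, not confident
print(classify(aware_of_it=True, confident_its_true=False, have_evidence=True))
# -> Bucket.ASSUMPTION
```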
It doesn't make sense to reduce uncertainty for the sake of reducing uncertainty, however.
"There are billions of uncertainties, but you don't have billions of items on your risk sheet," thought leader David Hillson says in a video. Selecting risks to worry about is a process of choosing what uncertainties you care about, and that's exactly Hillson's concise defition: "risk is uncertainty that matters."
With this definition, we now have two axes for assessing an uncertainty: how certain or uncertain it is, and how much of an impact it could have. Together, these axes form a simple two-by-two matrix.
The implication here is straightforward: don't worry about uncertainties that don't matter. As a simple illustration of this, Joshua brought up two examples in the discussion today: the possibility of the sun exploding, and the possibility of a friend surprising you with a cake. The impact of the sun exploding is huge, but the uncertainty is low: we can be fairly confident that the sun is not going to explode anytime soon. On the other hand, the uncertainty of a friend giving you a cake is high, especially if there's a convenient reason for them to do so, e.g. around your birthday or another holiday. But the impact is fairly low: we'll enjoy the cake if we receive it, but either way our lives won't be hugely disrupted. Thus, neither of these uncertainties is a risk we should spend much effort focusing on. If there were some reason the sun might explode, though, and we didn't have the evidence to be confident that it wouldn't, then we'd better start gathering more information and building some deep, deep self-sustaining bunkers.
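As a rough sketch of the two-axis view, Hillson's definition can be written as a filter: an uncertainty only counts as a risk when it clears both bars. The numeric scores, the thresholds, and the third example are made up for illustration and weren't part of the discussion:

```python
from dataclasses import dataclass

@dataclass
class Uncertainty:
    name: str
    uncertainty: float  # 0 = essentially certain, 1 = completely unknown
    impact: float       # 0 = negligible, 1 = hugely disruptive

def is_risk(u: Uncertainty, min_uncertainty: float = 0.3, min_impact: float = 0.3) -> bool:
    """Hillson's definition: a risk is an uncertainty that matters,
    i.e. it is genuinely uncertain AND its impact is meaningful."""
    return u.uncertainty >= min_uncertainty and u.impact >= min_impact

examples = [
    Uncertainty("the sun explodes soon", uncertainty=0.01, impact=1.0),              # huge impact, near-certain it won't happen
    Uncertainty("a friend surprises you with a cake", uncertainty=0.8, impact=0.05), # plausible, but barely matters
    Uncertainty("a key assumption in the project plan is wrong", uncertainty=0.5, impact=0.7),
]

print([u.name for u in examples if is_risk(u)])
# -> ['a key assumption in the project plan is wrong']
```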
Instead of framing risk purely in the negative sense, Nadeem steers the discussion towards thinking of meaningful impact as the potential for positive impact. Uncertainty without upside is simply recklessness, Nadeem says. We can update our matrix accordingly, so that the impact axis reflects potential upside rather than just potential harm.
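One way to read the updated matrix, with labels of my own choosing rather than the session's: once an uncertainty matters, whether it carries potential upside decides how to treat it.

```python
def treat(matters: bool, has_upside: bool) -> str:
    """How to treat an uncertainty under the updated matrix (labels are illustrative)."""
    if not matters:
        return "ignore"  # uncertainty that doesn't matter isn't a risk worth tracking
    return "risk worth taking" if has_upside else "recklessness"

print(treat(matters=True, has_upside=True))   # -> risk worth taking
print(treat(matters=True, has_upside=False))  # -> recklessness
```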
An important consideration is that it's up to us to decide what counts as a positive impact. Eesha shared this idea at the beginning of the call, before Nadeem joined: "you need to set objectives before you can evaluate risk."
An implication of this is that how many uncertainties we have to worry about is also within our control. If we simply scope down our objectives, tons of uncertainties stop mattering, or at least matter less. Nadeem targets this insight at TKS kids who care about too many things and get stuck optimizing for uncertainties that don't actually come with the potential for large positive impact.
Many kids spend a ton of energy optimizing for getting into college, for example. If we instead break this goal down into the underlying benefits we hope getting into college will bring, e.g. making a decent amount of money or entering a certain career field, we might find that there are plenty of less selective schools, or alternatives to college altogether, that would be equally or even more effective. The point is not that optimizing for college specifically is a bad idea, but that we optimize for tons of things, and take on tons of uncertainties, when thinking harder about what we're actually optimizing for and narrowing down the forms of impact we care about would save us a ton of time and energy.