Sometimes You Just Have to Compute
“How will it turn out?”
“Will it play out as I expected?”
“What is the best decision to make here?”
We can ask several questions about how some event will play out in the future. It usually pays off to obtain more information before making any decision 1. But sometimes, the degree of uncertainty is such that you can never be fully sure of how something will pan out. Sometimes, you just have to “compute.”
In this context, by “compute” I mean carrying out a series of steps to get to a result 2. For example, if we are given the multiplication 45678534736 times 5395839463, we do not know the answer off the top of our heads. We need to carry out a series of steps (that is, compute) to get the answer.
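To make that “series of steps” concrete, here is a small Python sketch (my own illustration, using only the two numbers from the example above) of schoolbook long multiplication, built up one partial product at a time:

```python
# A toy sketch of "computing": long multiplication carried out as an
# explicit series of steps (one partial product per digit of b),
# rather than recalling the answer from memory.
def long_multiply(a: int, b: int) -> int:
    result, shift = 0, 0
    while b > 0:
        digit = b % 10                     # next digit of b
        result += digit * a * 10 ** shift  # add this digit's partial product
        b //= 10
        shift += 1
    return result

print(long_multiply(45678534736, 5395839463) == 45678534736 * 5395839463)  # True
```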
The expression “sometimes you just have to compute” turns out to be not only an empirical observation but also a proven truth. Granted, the proof applies to computers and digital problems in particular, but the result is sobering, as Charles Petzold describes in his book Code:
What Turing also demonstrated is that there are certain algorithmic problems that will forever be out of reach of the digital computer, and one of these problems has startling implications: You can’t write a computer program that determines if another computer program is working correctly! This means that we can never be assured that our programs are working the way they should.
This is a sobering thought, and it’s why extensive testing and debugging are so important a part of the process of developing software. 3
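To make the claim concrete, here is a rough Python sketch (mine, not Petzold’s) of the self-referential argument behind Turing’s result: assume a perfect halting checker exists, then build a program that contradicts it.

```python
# Suppose a perfect checker existed:
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    ...  # cannot actually be written in general -- which is the point

def troublemaker(program):
    """Does the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:        # the oracle said "halts", so loop forever
            pass
    else:
        return "done"      # the oracle said "loops forever", so halt

# Now ask: does troublemaker(troublemaker) halt?
# If halts() answers yes, troublemaker loops forever; if it answers no, it halts.
# Either answer contradicts the oracle, so no general-purpose halts() can exist.
```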
Perhaps it is incorrect to extend Turing’s logical proofs about computers and algorithms to our personal experience 4. But if we entertain the not-so-far-fetched idea that our brains have a computational capacity (just like computers), and that we can significantly enhance it, we are left with a similar conclusion: we can never be sure how something will turn out in the future. No amount of computing capacity, cleverness, intelligence, or whatever other name we give the brain’s power, changes that. Sometimes, you just have to compute.
Every endeavor, then, carries a risk: we cannot know for sure how it will turn out. And the only way to find out is to go and do it, whatever that is.
- “(you can almost always improve your odds of being right by doing things that will give you more information). The expected value gain from raising the probability of being right from 51 percent to 85 percent (i.e., by 34 percentage points) is seventeen times more than raising the odds of being right from 49 percent (which is probably wrong) to 51 percent (which is only a little more likely to be right). … Raising the probability of being right by 34 percentage points means that a third of your bets will switch from losses to wins. That’s why it pays to stress-test your thinking, even when you’re pretty sure you’re right.” From Principles by Ray Dalio. ↩
- From Code by Charles Petzold. He is referring to what we call Turing’s Halting Problem. ↩
- I would argue, though, that the “jury is still out” on this debate, particularly after all the work Stephen Wolfram has done to formulate the world in computational terms. See his TED Talk for more on this. ↩