I was having dinner with two entrepreneurs. Both are smart and successful, two qualities that don’t always go hand in hand.
One (we’ll call him Ben) thinks an emerging technology will transform his market within the next 18 months. “We’re going to drop everything we’re developing and put all our money on (that),” he said. “We’re going all in.”
In financial terms, it’s easily a $10 million bet. But “all in” is kind of Ben’s thing.
The other (we’ll call him Frank) seemed skeptical. “How certain are you?” Frank asked.
“Very certain,” Ben said.
“Okay,” Frank said, and dumped twenty peanuts onto the middle of the table. (I didn’t say we were having a fancy dinner.) “Imagine one of these peanuts is worth $10 million. You only get to pick one. Would you rather take your chances on choosing the right peanut, or that you’re right about your project paying off in 18 months?”
“Shoot,” Ben said. “Definitely my project.”
Frank took away ten peanuts.
“How about now?” he said.
Ben pointed at his own chest. “I’m still betting on me.”
Frank pushed more peanuts aside so only five were left.
Ben sat back. He crossed his arms. His eyes narrowed. He was clearly less certain. Eventually he said, “Still me.”
Frank took away two more peanuts. Now Ben had a one in three chance of picking the $10 million peanut.
Ben sat quietly for about thirty seconds, mental file cards clearly fluttering. Finally he pointed to the peanuts. “I think I’ll take those odds,” he said.
What happened? Ben calibrated his prediction. Instead of just assuming he was right, the exercise forced him to think about how he thought about making predictions. (Psychologists call thinking about thinking “metacognition.”)
Thinking about thinking — in this case, seeking to calibrate his prediction — is something Ben freely admits he rarely does. He’s smart. He’s decisive. He’s often right. Over time, he’s come to trust his instincts.
In this case, his gut told him he would be proven right, and he was willing to back that belief with millions of dollars.
But then Frank’s peanut game — an exercise called the equivalent bet test, popularized by decision-making expert Douglas Hubbard — made him realize he felt more comfortable with one out of three odds than he did with his prediction.
“Very certain”? Not so certain after all.
The beauty of the equivalent bet test is that it forces you out of binary mode — either “I think (this) will happen” or “I don’t think (this) will happen” — and allows you to fine-tune a prediction. Ben’s true level of confidence fell somewhere around 33 percent.
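For readers who like to see the mechanics spelled out, the peanut game can be sketched as a simple loop: shrink the pile until the bettor abandons their own prediction for the random draw. This is a hypothetical sketch of the dinner-table version, not Hubbard’s formal procedure; the pile sizes and the `prefers_project` callback are illustrative.

```python
def equivalent_bet(prefers_project, piles=(20, 10, 5, 3)):
    """Run a dinner-table equivalent bet test.

    `prefers_project(odds)` should return True while the bettor still
    backs their own prediction over a random draw with the given odds
    at the same payoff. Returns the odds at which they switched (their
    real confidence sits just below that probability), or None if they
    never switched.
    """
    for n in piles:
        odds = 1 / n  # chance of picking the one winning peanut
        if not prefers_project(odds):
            return odds
    return None

# Ben backed his project at 1-in-20, 1-in-10, and 1-in-5 odds,
# but took the peanuts once only three were left.
ben = lambda odds: odds < 1 / 3
print(equivalent_bet(ben))  # about 0.33 — roughly 33 percent confidence
```

The pile sizes do the same work as Frank’s hands: each round narrows the bracket around the bettor’s true confidence.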
In some cases that might be a good bet. In others, not.
While he sees the situation as a “when,” not an “if,” the stakes of being too early are still too high. So he scaled back the investment. He’s ready to double down quickly if it looks like his prediction will be right, but he’s also prepared if the timeline turns out to be longer.
Calibrating his prediction — realizing his knowledge and experience allowed him to turn a coarse judgment into one much more nuanced and fine-grained — not only improved his decision-making, it helped mitigate the risk involved.
That’s the real beauty of the equivalent bet test: Determining how confident you really are in a prediction and using that knowledge to make even better decisions — and plans.
Because you can’t always be right.
But you can determine how confident you are that you will turn out to be right. That an employee will react to a decision the way you think. That customers will react to pricing changes the way you think. That expanding your product line will turn out the way you think.
And then plan accordingly.
Because even the biggest risks should still be intelligent risks.