When Several Explanations Seem to Fit, How Do You Decide Which One to Act On?

Choosing the most useful story when the truth isn’t fully visible yet

Big-picture framing

When several explanations seem to fit, your brain begs for a clean story. But the real problem isn’t “What’s true?”—it’s “What should I actually do next?” In work, relationships, and strategy, acting on the wrong explanation can quietly waste months. A better approach is to treat explanations as hypotheses instead of truths, using simple tools like Occam’s Razor, light Bayesian thinking (updating your beliefs as new evidence shows up), and small reversible experiments. That way, multiple explanations stop being a dead end and become a structured way to learn faster.


Why Multiple Explanations Feel Paralyzing

When something goes wrong, your mind instantly generates stories:

  • “The market changed.”
  • “The strategy was flawed.”
  • “The team was misaligned.”

All of them might be partly true. The problem is that you can only act on one first.

It’s like a car that won’t start: battery, fuel, wiring, starter—all plausible. If you replace the wrong thing, you’ve spent effort without fixing the real problem. I’ve seen teams do the same thing at scale: spend six months rolling out a new tool, only to realize the real issue was unclear priorities, not software.

So the real question becomes: When several explanations fit, which one deserves your next move?


A Practical Way to Pick an Explanation to Act On

1. Evidence and reversibility

Start by ranking your explanations on:

  • Evidence – What specific signals support this story? What would I expect to see if it were true, and do I actually see that?
  • Reversibility – What’s the smallest, cheapest action I could take to test it?

Good candidates to act on first are explanations where:

  • The evidence is at least reasonably strong, and
  • You can probe them with a small, reversible experiment (a pilot, an A/B test, a limited trial) instead of a big, organization-wide bet.
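To make that concrete, here is a rough sketch in Python of what this ranking might look like. The 1-5 scales, the sample explanations, and the multiply-the-scores rule are all illustrative assumptions, not a fixed rubric:

```python
# Rough ranking of competing explanations by evidence strength and by
# how reversible the cheapest test would be. The 1-5 scales and the
# sample entries are illustrative assumptions, not a fixed rubric.

explanations = [
    # (name,
    #  evidence: 1-5, how well the signals you actually see fit this story,
    #  reversibility: 1-5, how small and undoable the cheapest test is)
    ("the market changed",      2, 2),
    ("the strategy is unclear", 4, 5),
    ("the team is misaligned",  3, 4),
]

# Favor stories that are both reasonably well evidenced and cheap to
# probe; a simple product of the two scores captures that intuition.
ranked = sorted(explanations, key=lambda e: e[1] * e[2], reverse=True)

for name, evidence, reversibility in ranked:
    print(f"{name}: evidence={evidence}, reversibility={reversibility}, "
          f"score={evidence * reversibility}")
```

The exact formula matters far less than the act of putting a number next to each story instead of letting them all feel equally plausible.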

2. Cost of being wrong

Next, consider the downside:

  • If I act on this explanation and it’s wrong, how bad is it?
  • If I ignore it and it’s right, how bad is that?

Prioritize explanations that would be costly or dangerous to ignore—especially the ones you secretly don’t want to be true. This is why safety, ethics, or security explanations often get tested early even when they’re not the most likely: the cost of being wrong is too high to ignore.
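You can make that asymmetry explicit with the same kind of rough scoring. The cost numbers in this sketch are invented, relative units, and the 3x flagging threshold is an arbitrary assumption; the point is spotting stories that are far more expensive to ignore than to test:

```python
# Making the downside asymmetry explicit. Costs are invented, relative
# units; the 3x threshold for flagging is an arbitrary assumption.

downsides = {
    # name: (cost if I act on it and it's wrong,
    #        cost if I ignore it and it's right)
    "strategy is unclear": (2, 5),
    "team is burned out":  (3, 9),
    "security gap":        (1, 10),  # why safety issues get tested early
}

for name, (act_and_wrong, ignore_and_right) in downsides.items():
    flag = "  <- too costly to ignore" \
        if ignore_and_right >= 3 * act_and_wrong else ""
    print(f"{name}: act-and-wrong={act_and_wrong}, "
          f"ignore-and-right={ignore_and_right}{flag}")
```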


Occam’s Razor, Bayesian Thinking, and a Necessary Counterpoint

Occam’s Razor says: when several explanations fit the facts, prefer the simplest one that still explains the evidence. In practice: don’t invent politics, conspiracies, or secret master plans if “we never wrote this down clearly” fits the facts just as well.

But simplicity isn’t everything. Bayesian thinking adds an important twist:

Start with a rough gut ranking of what’s most likely, then bump explanations up or down as new evidence shows up.

Informally, you’re asking:

  • “Given how things usually fail here, which explanations start out more likely?”
  • “After seeing this data, which ones just got stronger or weaker?”

The counterpoint to Occam’s Razor is that real systems can be messy. Rare, complex causes sometimes matter a lot (black swans, cascading failures, culture issues). A simple story can feel satisfying yet ignore crucial, less visible factors. So you’re not worshipping “simple”—you’re looking for explanations that are:

  • Simple enough to be actionable
  • Consistent with the evidence
  • Continuously updated as you learn
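In code, that informal update is just multiply-and-renormalize. Every number in this sketch is a gut-feel guess, which is exactly the point: you make your hunches explicit so new evidence can move them:

```python
# A light Bayesian update: start from gut-feel priors, multiply each by
# how likely the new evidence would be if that story were true, then
# renormalize. Every number here is a rough subjective guess.

priors = {"market": 0.3, "strategy": 0.4, "burnout": 0.3}

# New evidence: losses are concentrated in one customer segment.
# P(seeing that | each explanation), estimated informally:
likelihood = {"market": 0.2, "strategy": 0.7, "burnout": 0.3}

unnormalized = {k: priors[k] * likelihood[k] for k in priors}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

for name, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {priors[name]:.0%} -> {p:.0%}")
```

Here "strategy" jumps from a 40% hunch to roughly a two-thirds favorite, while the other stories weaken without being ruled out.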

Real-World Example: Strategy, Burnout, or Market?

Imagine sales have dropped for two quarters. Plausible explanations:

  1. The market has gotten more competitive
  2. The strategy is unclear
  3. The team is burned out

You walk through the filters:

  • Evidence: Deals are mostly being lost in one specific segment → points toward a strategy or positioning issue, not universal burnout.
  • Reversibility: You can test sharper positioning in that segment for one month without changing everything else.
  • Cost of being wrong: If burnout is real and you ignore it, you pay later—but you can watch for warning signs (surveys, 1:1s, attrition) while you test strategy.

So a sensible move might be:

Act first on “strategy is unclear in segment X,” run a focused experiment for a month, and keep gathering evidence on market shifts and burnout in parallel.

You haven’t declared the other explanations false; you’ve just chosen the best next bet.
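Here is how the three filters might look applied to this scenario. The scores and the combining rule are assumptions for illustration; any scheme that surfaces the well-evidenced, cheap-to-test story as the first bet does the job:

```python
# The three filters applied to the sales-drop scenario. Scores (1-5)
# and the combining rule are assumptions for illustration only.

candidates = {
    #                            evidence, reversibility, cost of ignoring
    "market more competitive":   (2, 2, 3),
    "strategy unclear in seg X": (4, 5, 3),
    "team burned out":           (2, 3, 5),
}

def next_bet_score(evidence, reversibility, ignore_cost):
    # Favor well-evidenced, cheap-to-test stories, while a high
    # cost-of-ignoring keeps a story on the watch list.
    return evidence * reversibility + ignore_cost

for name, scores in sorted(candidates.items(),
                           key=lambda kv: -next_bet_score(*kv[1])):
    print(f"{name}: {next_bet_score(*scores)}")
```

Burnout scores second despite weak evidence, which is the watch-list behavior you want: monitored, not dismissed.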


Turning Ambiguity into Experiments

When multiple explanations all seem to fit, you don’t need omniscience—you need a repeatable process:

  1. List 3–5 plausible explanations explicitly instead of letting them swirl in your head.
  2. Give each a rough starting likelihood (your informal Bayesian prior): based on history, which usually happens here?
  3. Score each on evidence, cost of being wrong, and reversibility.
  4. Choose one or two to test first, not to “believe forever.”
  5. Set a review point: “In four weeks, what evidence will tell me whether to double down, pivot, or switch explanations?”

You stop arguing about who has the “right story” and start asking, “Which story is most useful to test next?”
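If you want to operationalize the loop, a sketch like the one below could live in a notebook or a retro doc. The field names, sample scores, and selection rule are all assumptions; the structure is what matters: explicit hypotheses, explicit priors, and an explicit review step:

```python
from dataclasses import dataclass

# The five steps as a small, reusable structure. Field names, sample
# scores, and the selection rule are assumptions for illustration.

@dataclass
class Hypothesis:
    name: str
    prior: float          # step 2: rough starting likelihood
    evidence: int         # step 3: 1-5 scores
    ignore_cost: int      # kept for the watch-list check in filter 2
    reversibility: int
    status: str = "open"  # open -> testing -> supported / weakened

def pick_to_test(hypotheses, n=2):
    # Step 4: choose one or two to test first, not to believe forever.
    return sorted(hypotheses,
                  key=lambda h: h.prior * h.evidence * h.reversibility,
                  reverse=True)[:n]

def review(hypothesis, experiment_supported):
    # Step 5: at the review point, double down, pivot, or switch.
    hypothesis.status = "supported" if experiment_supported else "weakened"

hypotheses = [  # step 1: list them explicitly
    Hypothesis("strategy unclear in segment X", 0.4, 4, 3, 5),
    Hypothesis("market more competitive",       0.3, 2, 3, 2),
    Hypothesis("team burned out",               0.3, 2, 5, 3),
]

for h in pick_to_test(hypotheses):
    h.status = "testing"
    print("testing next:", h.name)

# ...four weeks later, at the review point:
review(hypotheses[0], experiment_supported=True)
print(hypotheses[0].name, "->", hypotheses[0].status)
```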


Bringing It Together (and What to Do Next)

When several explanations seem to fit, don’t freeze until you find The One True Story. Use Occam’s Razor to avoid needless complexity, Bayesian thinking to keep updating your beliefs, and small experiments to limit downside while you learn. Over time, this turns uncertainty from a blocker into a competitive advantage.

If you want to keep sharpening how you use questions to make better bets, follow QuestionClass’s Question-a-Day at questionclass.com.


Bookmarked for You

Here are three books that deepen the ideas behind choosing among competing explanations:

Decisive by Chip Heath and Dan Heath – A practical guide to avoiding common decision-making traps and widening your view before you choose a path.

Superforecasting by Philip E. Tetlock and Dan Gardner – Shows how top forecasters update their beliefs over time and make better probabilistic bets under uncertainty.

The Black Swan by Nassim Nicholas Taleb – Explores how rare, high-impact events can wreck simple narratives and why you must respect extreme downside risk.


QuestionStrings to Practice

QuestionStrings are deliberately ordered sequences of questions in which each answer fuels the next, creating a compounding ladder of insight. Use this one when you’re torn between several stories and need to decide what to try first.

Actionable Explanation String
For deciding which explanation to act on:

“What are the 3 most plausible explanations for what I’m seeing?” →
“What evidence do I have for each, and what’s missing?” →
“What’s the smallest, safest experiment I could run to test each one?” →
“What’s the cost if I’m wrong about each?” →
“Given my goals, which explanation should I test first—and when will I review the results?”

Try weaving this into retros, strategy sessions, or journaling to turn confusion into a concrete action plan.

In the end, you don’t need certainty to move—you just need a smart way to place your next bet, and permission to change your mind afterward.

