Two Weeks of Nothing
Why waiting for perfect information is itself a decision—usually a bad one
I advised a company that builds enterprise support software used by organizations serving tens of millions of end customers. Their team had spent two weeks in complete paralysis over a product launch decision when their CEO reached out and asked me to step in.
They’d built an intelligent routing feature for their enterprise support software—the kind that matches incoming customer requests with the right agent (human or AI) based on request complexity. Incorrect matching meant a senior agent’s time might be wasted on password reset requests or, worse still, a customer with a complex API integration issue could be routed to a chatbot whose knowledge base didn’t include API troubleshooting.
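To make that concrete, here is a minimal sketch of what a complexity-based routing rule like theirs might look like. The tier names, thresholds, and classify_complexity heuristic are all illustrative assumptions, not their actual implementation.

```python
from dataclasses import dataclass

# Hypothetical complexity tiers; the real taxonomy isn't described here.
SIMPLE, MODERATE, COMPLEX = "simple", "moderate", "complex"

@dataclass
class Ticket:
    text: str
    has_logs: bool = False
    has_code: bool = False

def classify_complexity(ticket: Ticket) -> str:
    """Rough stand-in for the classifier: long tickets with logs or code
    snippets are treated as complex, short ones as simple."""
    if ticket.has_logs or ticket.has_code or len(ticket.text) > 2000:
        return COMPLEX
    if len(ticket.text) > 300:
        return MODERATE
    return SIMPLE

def route(ticket: Ticket) -> str:
    """Map complexity to an agent pool: chatbots for simple requests,
    senior engineers for complex ones."""
    return {
        SIMPLE: "chatbot_pool",
        MODERATE: "tier1_human_pool",
        COMPLEX: "senior_engineer_pool",
    }[classify_complexity(ticket)]

print(route(Ticket("How do I reset my password?")))             # chatbot_pool
print(route(Ticket("API integration failing", has_logs=True)))  # senior_engineer_pool
```

Get the routing right and senior engineers only see the hard problems; get it wrong and you hit exactly the failure modes above.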
The Testing Dilemma
The team had done their homework. They’d analyzed six months of support tickets and found that 92% were straightforward requests—password resets, account questions, basic troubleshooting. The remaining 8% were complex technical issues requiring senior engineers.
Their launch checklist included load testing, but here’s where it got complicated: the requests were interactive and highly variable. A customer might submit two sentences or a detailed technical breakdown with logs, screenshots, and code snippets. Processing time ranged from 200 milliseconds to 8 seconds.
The engineering team split into two camps:
Team “Test Everything”: “We need to test with maximum complexity requests. If the system crashes when an enterprise client submits 5,000 complex tickets during a service outage, we’re finished. We’ll have their VP of Engineering on the phone while their entire support operation is offline.”
Team “Ship It”: “That’s not realistic. Testing worst-case scenarios will cost two more weeks and $50,000 in cloud infrastructure. We’ve already delayed this feature twice this year, and it’s already committed in contracts with three enterprise prospects.”
They compromised. They tested with a weighted mix approximating their actual ticket distribution: 70% simple, 20% moderate, 10% complex. The tests passed. The system handled 10x their projected peak load with room to spare.
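For the curious, a weighted test mix like that takes only a few lines to generate. The 70/20/10 weights come from their plan; the processing-time ranges below are assumptions loosely based on the 200-millisecond-to-8-second spread mentioned earlier, not their measured distribution.

```python
import random
from collections import Counter

# Weights from the compromise test plan: 70% simple, 20% moderate, 10% complex.
MIX = {"simple": 0.70, "moderate": 0.20, "complex": 0.10}

# Assumed processing-time ranges, loosely based on the 200 ms to 8 s spread
# described above; the real distribution isn't published.
LATENCY_RANGE_S = {"simple": (0.2, 0.8), "moderate": (0.8, 3.0), "complex": (3.0, 8.0)}

def generate_load(n_requests: int, seed: int = 42):
    """Yield (complexity, simulated_processing_seconds) pairs in the 70/20/10 mix."""
    rng = random.Random(seed)
    kinds = list(MIX)
    weights = [MIX[k] for k in kinds]
    for _ in range(n_requests):
        kind = rng.choices(kinds, weights=weights)[0]
        lo, hi = LATENCY_RANGE_S[kind]
        yield kind, rng.uniform(lo, hi)

sample = list(generate_load(10_000))
print(Counter(kind for kind, _ in sample))  # roughly 7000 / 2000 / 1000
```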
They were ready to launch. The final sign-off meeting should have been a formality.
The Question That Changed Everything
In the final sign-off meeting, the CEO asked: “What’s our rollback plan if we’re wrong about the average?”
This should have been a fifteen-minute conversation. They had feature flags. If response times spiked above 3 seconds for more than 5 minutes, the system would automatically fall back to their old random routing system. They planned to monitor closely for the first 48 hours.
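In code, that fallback rule is simple enough to sketch. The 3-second threshold and 5-minute window are theirs; the FeatureFlags stub and the flag name are placeholders for whatever feature-flag service they actually used.

```python
import time

# Thresholds from their plan: fall back automatically if response times
# stay above 3 seconds for more than 5 minutes.
LATENCY_THRESHOLD_S = 3.0
SUSTAINED_WINDOW_S = 5 * 60

class FeatureFlags:
    """Placeholder for whatever feature-flag service they actually used."""
    def __init__(self):
        self.flags = {"smart_routing_enabled": True}

    def set(self, name: str, value: bool):
        self.flags[name] = value

class RollbackGuard:
    """Watches response times and flips the flag back to the old routing
    system only when the breach is sustained, not on a single slow request."""

    def __init__(self, flags: FeatureFlags):
        self.flags = flags
        self.breach_started_at = None  # when latency first crossed the threshold

    def record(self, response_time_s: float, now: float | None = None):
        now = time.monotonic() if now is None else now
        if response_time_s <= LATENCY_THRESHOLD_S:
            self.breach_started_at = None                # recovered; reset the clock
        elif self.breach_started_at is None:
            self.breach_started_at = now                 # breach just started
        elif now - self.breach_started_at > SUSTAINED_WINDOW_S:
            self.flags.set("smart_routing_enabled", False)  # roll back to old routing
```

That is the whole safety net: a threshold, a window, and a flag. It should have taken fifteen minutes to agree on.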
Instead, that single question triggered two weeks of organizational paralysis.
Team “Test Everything” saw validation: “See? The CEO is worried. We need more testing. Let’s run the worst-case scenarios.”
Team “Ship It” pushed back: “No amount of testing will predict actual customer behavior. We need real-world data. The only way to get that is to launch.”
Management got nervous and proposed alternatives: “Maybe we should pilot with just one client first. Or add more monitoring. Or build a more sophisticated fallback system.”
Engineering calculated that each option would take anywhere from three days to two weeks.
So the team did nothing.
Not more testing. Not launching. Just meetings about meetings about risk.
The product manager created a risk assessment document. Engineers built probabilistic models predicting various failure scenarios. Daily stand-ups became philosophical debates about acceptable risk levels. The #launch Slack channel became a graveyard of competing proposals.
What I Told Them
When I walked into their offices, the team expected me to help them decide: do more testing or launch now?
I told them that wasn’t the question anymore.
The real problem wasn’t uncertainty about load testing. The problem was that they’d let a reasonable question become an excuse for organizational paralysis. They’d convinced themselves that making no decision was safer than making either decision.
But here’s what they missed: not deciding was itself a decision. Every day they delayed was a day they chose the status quo over progress. A day their competitor gained ground. A day their team’s morale eroded further.
They were waiting for certainty that would never come.
Two Types of Uncertainty
I walked them through a framework I teach my business students when they’re wrestling with decisions.
There are two types of uncertainty:
Reducible uncertainty: Things you can actually learn more about through research, testing, or analysis. If you don’t know your customer’s preferences, you can survey them. If you’re unsure about system performance under load, you can test it.
Irreducible uncertainty: Things that no amount of research will tell you. How will customers actually behave once the feature launches? Will the assumptions about ticket distribution hold true? What unexpected ways might users interact with the system?
The team had already addressed the reducible uncertainty. They’d done the analysis. They’d run the tests. They’d built the rollback mechanisms.
What remained was irreducible uncertainty—and no amount of additional testing or planning would eliminate it.
The Real Question
I suggested reframing their question. Instead of “Do we have enough information to guarantee success?” I asked them to consider: “Do we have a good decision-making process?”
Because here’s what most teams get wrong: they conflate good decisions with good outcomes.
A good decision can lead to a bad outcome. You can do everything right and still fail because of factors beyond your control—market timing, competitor moves, unforeseen technical issues, changes in customer expectations.
A bad decision can look brilliant in hindsight. You can skip essential testing, launch recklessly, and still get lucky when nothing breaks.
The goal isn’t to guarantee outcomes. The goal is to make good decisions with the information available.
What Good Decision-Making Actually Looks Like
I had them work through a simple framework:
1. What can we actually know?
Ticket distribution from historical data: Yes
System performance under simulated load: Yes
Exact customer behavior after launch: No
Whether assumptions will hold in production: No
2. What’s the cost of waiting?
Delayed roadmap execution, potential market share loss to competitors
Team morale degradation
Engineering resources tied up in analysis paralysis
Delayed revenue from new feature
3. What’s our learning strategy?
Feature flags for instant rollback
Monitoring dashboards tracking response times
Customer feedback channels
Weekly review cadence for first month
4. Is this decision reversible?
Yes. They could roll back in minutes if needed.
Not a “bet the company” moment.
Low cost of being wrong, high cost of not learning.
Once we mapped it out this way, the answer became obvious.
They launched three days later.
What Happened Next
The feature worked. Not perfectly—there were some edge cases they hadn’t anticipated, and they did roll back briefly on day four when an enterprise client’s outage created an unusual spike in complex requests. But they learned, adjusted, and had the feature stable within two weeks.
More importantly, they learned something about decision-making: waiting for perfect information is itself a decision, and often a poor one.
Why This Matters for You
If you’re in product leadership, you face this tension constantly:
Do we launch with the MVP or build more features first?
Do we address this customer complaint now or wait to see if it’s a pattern?
Do we invest in this technical infrastructure upgrade or focus on new product development?
The instinct is to gather more information. Run more tests. Get more feedback. Build more consensus.
Sometimes that’s right. But often, it’s just fear dressed up as diligence.
The question isn’t “Am I certain this will work?” The question is “Do I have a good process for making this decision and learning from the outcome?”
Because here’s the truth: in complex systems—whether it’s software, organizations, or markets—you can’t eliminate uncertainty. You can only develop better processes for navigating it.
The Framework
Next time you find yourself or your team paralyzed by uncertainty, try this:
1. Distinguish between reducible and irreducible uncertainty
What can you actually learn more about?
What will remain unknown no matter how much analysis you do?
2. Assess the cost of waiting
What are you giving up by not deciding? What’s the opportunity cost?
Is delay itself a decision?
3. Check if the decision is reversible
Can you course-correct if you’re wrong?
What’s the actual cost of being wrong vs. the cost of not learning?
4. Focus on process, not outcomes
Do you have good information?
Are you considering multiple perspectives?
Do you have a learning strategy?
Perfect information is a mirage. Good process is real.
The CEO’s question, “What’s our rollback plan?”, wasn’t actually asking for more testing. It was asking whether they’d thought through how to learn quickly if their assumptions were wrong.
The team that had spent two weeks paralyzed had confused the two.
What decisions are you delaying while waiting for certainty that won’t come? I’d love to hear about the uncertainty you’re navigating.