Shared Risk in
Custom Software
What a well-meaning buyer of custom software needs to know: why fixed-price contracts don't protect you the way you think, what shared risk actually means in practice, and what a good engagement looks and feels like from both sides of the table.
The Question in the Pause
You've written the requirements. Received three proposals. Selected the vendor who gave you confidence — clear timeline, clear price, clear scope. You signed the contract. The project started.
Several months later, a change order arrives. Something you thought was obviously included turns out not to be — according to the vendor. You open the contract for the first time since signing. You find the clause. "Changes to scope will be priced as change orders. Scope is defined by the approved specification document dated..."
You read it, and for the first time you understand what you actually signed. Not what you understood you were signing — what you signed.
The specification was your best understanding of what you needed. You understood it as a starting point, a description of intent. The vendor understood it as a legal boundary. Both parties signed the same document with different mental models of what it meant. The contract has been running on those two incompatible mental models ever since.
This essay is about what to do before that moment — and about a better kind of engagement that doesn't produce it.
I. Why Custom Software Projects Fail
The Standish Group's CHAOS reports, tracking software project outcomes since 1994, find consistently that roughly a fifth to a third of projects are cancelled before completion, about half are "challenged" (over budget, late, or delivering less than planned), and only a minority — between a sixth and a third, depending on the edition — succeed as originally planned. These figures have been critiqued on methodology, and the specifics shift between editions. The directional finding is robust: failure and partial failure are the statistical norm.
What causes failure? The CHAOS reports identify the top causes consistently as: incomplete requirements (the single most cited cause, across almost every edition), lack of user and stakeholder involvement, lack of executive sponsor engagement, poor planning, and requirements changing without a managed process for handling that change.
Notice where most of these sit. Items one, two, and three are substantially on the buyer's side. The single most common cause of project failure is a buyer who didn't know or couldn't articulate what they needed. The second most common is the buyer disengaging after the spec was handed over.
This is not to assign blame. It is to name the structural source of most failures: the project was treated as if the requirements were fully knowable in advance, and as if the buyer's job ended when the specification was signed. Neither assumption is correct.
The construction metaphor
Buyers approach custom software the way they would approach commissioning a building: describe what you want (spec), agree a price, watch it get built, receive it. This model works for physical goods because requirements are observable in advance. It fails for software for three structural reasons:
Requirements are not fully discoverable before building. Software is interactive. Users discover what they want from it by using it. The spec you write at the beginning is your best current hypothesis about what you need — not a description of what you need. Frederick Brooks states it directly: "The hardest single part of building a software system is deciding precisely what to build."
Building reveals requirements. Development is partly a modeling exercise: the developer builds a model of your domain, and as that model takes shape, both parties discover inconsistencies, gaps, and wrong assumptions in the original spec. A fixed-price contract treats these discoveries as deviation from plan. A shared-risk model treats them as expected, valuable information.
The thing being built is not independent of the builder's choices. Architecture, data model, integration strategy — these are decided during construction, not before. They require ongoing input from someone who understands the business context. The buyer is not a passive recipient of a deliverable. They are a participant in a process whose output depends on their ongoing engagement.
The estimation problem
Steve McConnell's Software Estimation (2006) documents the cone of uncertainty: at the beginning of a project, the best estimate of duration can be off by roughly a factor of 4 in either direction. A project estimated at 6 months might actually take 1.5 to 24 months. This range narrows as the project progresses and unknowns are resolved.
The implication is direct: the more firmly a commitment is made early, the more likely that commitment was made when uncertainty was at its highest. Fixed-price contracts for complex software lock in commitments at the moment of maximum uncertainty.
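The cone's arithmetic is worth making concrete. A minimal sketch of McConnell's figures (a factor of 4 at the earliest "initial concept" stage; his data puts the factor near 1.5 once requirements are complete):

```python
def estimate_range(point_estimate_months: float, uncertainty_factor: float = 4.0):
    """Return the (low, high) duration range implied by the cone of uncertainty.

    McConnell's factor-of-4 figure applies at the earliest project stage;
    the factor shrinks as phases complete and unknowns are resolved.
    """
    return (point_estimate_months / uncertainty_factor,
            point_estimate_months * uncertainty_factor)

# At project start, a "6-month" project is really a 1.5-to-24-month project.
low, high = estimate_range(6)          # (1.5, 24.0)

# After requirements are complete, the same estimate spans 4 to 9 months.
low2, high2 = estimate_range(6, 1.5)   # (4.0, 9.0)
```

The point of the sketch is the asymmetry of commitment: a fixed price signed at the wide end of the cone prices the 1.5-month outcome and the 24-month outcome identically.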
II. The Contract Trap
The fixed-price contract feels protective because it names a price and a scope. The implicit promise: if the vendor delivers the scope, you've paid the right amount; if they don't, you have legal recourse. Both halves of this promise are weaker than they appear.
The scope gap
A fixed-price contract prices what is written in the spec. Where the spec is ambiguous — and it is always ambiguous in places — the vendor has a choice: implement the cheaper interpretation, or raise a change order. Under fixed-price terms, clarification that leads to more work costs more money. The rational vendor choice, when under margin pressure, is to interpret narrowly.
You discover the gap at integration or user acceptance testing, when the feature doesn't work the way you expected. The vendor points to the spec. You say "but obviously it was supposed to work this way." Neither party is lying. Both are trapped in a document that was a hypothesis about requirements, written at the moment of maximum uncertainty.
The change order mechanism
Change orders are in theory legitimate: if requirements change, the cost should change. In practice they serve a second function: the mechanism by which a vendor recovers from an underbid.
The pattern: the project runs normally for the first phase. Change orders begin appearing at roughly the point where the vendor's contingency is exhausted. The change orders cover things a reasonable person would have considered in-scope from the original spec. The vendor is technically correct: it wasn't explicitly in the spec. You are right that it was implied. The contract has converted a collaboration into a negotiation.
The quality trade-off nobody names
When a vendor is losing money on a fixed-price job, the cost has to go somewhere. It goes to: testing shortcuts (the bug caught at unit-test level becomes a production incident); technical debt (code that works today but is structured in ways that make changes expensive next year); documentation (when the vendor's team rolls off, the software is working but nobody outside that team understands how); and iteration (the vendor stops proposing improvements or raising concerns — they are in delivery mode, not collaboration mode).
None of this is malice. It is the rational response to a contract that makes quality uncompensated.
The legal protection myth
"But if they don't deliver, we can sue."
If a vendor fails to deliver under a fixed-price contract, your legal position may be strong. What you cannot recover through litigation: the time spent, the opportunity cost of the 12 months the project ran, the cost of a new vendor who now has to understand an incomplete codebase, and the internal political cost of a failed project. Legal action typically takes 12–24 months and incurs significant legal fees even when you win. The outcome is money, not software.
Having a legal right is not the same as being protected from harm. The right exists. The harm still happens.
III. How Vendors Price Fixed-Price Bids — and How to Read One
A vendor receiving an RFP on complex, underspecified work cannot fully scope what they're being asked to price. Their options:
Scope the risk honestly: charge for a discovery phase first, add substantial contingency, or decline to bid fixed-price on unclear scope. This is the professionally correct answer. It typically loses the bid in competitive procurement.
Scope the risk implicitly: bid competitively, but write the scope of work tightly — include assumptions that restrict the interpretation of requirements, reserve change order rights broadly, and list client responsibilities extensively. The risk is hidden in the contract language. This wins bids.
Scope the risk optimistically: bid on the assumption the project will go well. Absorb overruns if they occur. Common among smaller or newer firms. Produces losses when the project goes badly.
Competitive procurement rewards the second option: it bids lower than the first (no contingency priced) and more confidently than the third (tightly drafted scope). A buyer who selects on price and confidence is selecting for the second — and will encounter that contract language in a scope dispute.
Where to look in a bid
The assumptions section. Read it as a list of what the vendor is not pricing. "We assume all third-party APIs will be available and documented." "We assume client will provide requirements sign-off within 5 business days." Each assumption is an implicit change order trigger.
Client responsibilities. An unusually long section signals that the vendor has transferred risks to your side. Responsibilities like "client will provide detailed wireframes for all screens before development begins" create grounds for claiming you caused delay.
Change order rate. If the change order rate is significantly above the effective hourly rate implied by the project price, changes are priced as a profit recovery mechanism. If change order pricing is not specified, ask for it before signing.
Acceptance criteria. "Acceptance will be based on software functioning in accordance with the approved specifications." Note what is absent: user acceptance, business outcome validation, real-world testing. If acceptance criteria are spec-based only, the vendor's job ends when the software matches the spec — even if the spec was wrong.
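The change-order-rate check above is simple arithmetic. A sketch with hypothetical numbers (the $300K price, the 2,400-hour estimate, and the $200/hour rate are illustrative, not taken from any real bid):

```python
def implied_hourly_rate(project_price: float, estimated_hours: float) -> float:
    """Effective hourly rate the vendor has priced into a fixed bid."""
    return project_price / estimated_hours

# Hypothetical bid: $300K fixed price against a stated ~2,400-hour estimate.
base_rate = implied_hourly_rate(300_000, 2_400)   # $125/hour

# If the contract's change order rate is $200/hour, changes carry a 60%
# premium over the project's own effective rate: a sign that change orders
# are priced as a profit recovery mechanism rather than cost recovery.
change_order_rate = 200
premium = change_order_rate / base_rate - 1        # 0.6
```

If the bid states a price but no hour estimate, ask for one; without it, the premium hidden in the change order rate cannot be computed at all.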
IV. What Time-and-Materials Actually Is
The most common buyer misconception about T&M is that it means a blank check.
What T&M actually is: you pay for work that happens, with visibility into what the work is and what it costs. You have ongoing authority to direct the work — which features get built, in which order, to what level of completeness.
This is more control than fixed-price, not less. A fixed-price contract locks scope at signature. A T&M engagement allows you to redirect at any time. The cost of that redirect is paid when it happens, not in a scope dispute months later.
What well-run T&M looks like
Weekly visibility. You see time logs or velocity data weekly. If burn rate is trending high, this is visible before it becomes a crisis. Monthly reporting is too late.
Fortnightly working demos. Not status updates — working software demonstrated to you every two weeks. You should be able to say "this is right" or "this isn't what I meant" at each demo. The gap between what was built and what was needed, caught at 2 weeks, costs one iteration to fix. Caught at 6 months, it costs a rewrite.
A maintained backlog. A prioritized list of features, owned by you. The vendor doesn't decide what to build next — you do, based on what provides the most value given the remaining budget. This is the mechanism by which T&M becomes incrementally committed rather than open-ended.
A shared definition of done. What does it mean for a feature to be complete? (Tests written, reviewed, deployed to staging, UAT passed.) Without this, "technically done" and "actually works" remain different things.
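The "weekly visibility" item above reduces to one projection a buyer can run themselves. A minimal sketch, assuming a simple average-of-recent-weeks burn model; all figures are hypothetical:

```python
def weeks_of_runway(budget_remaining: float,
                    recent_weekly_burn: list[float]) -> float:
    """Project runway from the average of recent weekly spend.

    A deliberately crude model: it assumes burn stays near its recent
    average. Its value is that it turns weekly time logs into the one
    number a buyer needs: how many weeks the remaining budget buys.
    """
    avg_burn = sum(recent_weekly_burn) / len(recent_weekly_burn)
    return budget_remaining / avg_burn

# Hypothetical: $180K remaining, four weekly invoices trending upward.
runway = weeks_of_runway(180_000, [10_000, 11_000, 12_000, 15_000])
# Average burn is $12K/week, so about 15 weeks of runway -- and the
# upward trend is visible now, not in next quarter's report.
```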
The primary risk in T&M is buyer disengagement. If you are unavailable, the vendor builds what they think you want. Surprises accumulate until demo day or UAT, when they're expensive to fix. T&M requires a named person with authority to make product decisions, available to the development team during the week, not just in scheduled check-ins.
V. The Discovery Phase
A discovery phase (4–8 weeks, depending on complexity) is not a longer requirements-gathering process. It is a different kind of work: buyer and vendor jointly examine the problem space before either party commits to building a solution.
What discovery produces
A prioritized backlog — not a spec. A ranked list of capabilities, with explicit trade-off decisions: if budget runs out, this is what gets cut, this is what gets kept.
A rough architecture — enough technical scoping to confirm feasibility in the proposed budget range and to surface technical unknowns.
An updated cost estimate with a range — post-discovery estimates are significantly more reliable than pre-discovery estimates because a substantial portion of requirements uncertainty has been resolved. The cone of uncertainty has narrowed.
A shared risk assessment — both parties have named the risks explicitly. What happens if the key integration doesn't work? If the budget constrains? Both parties see each other's concerns for the first time.
The economics
A discovery phase for a $500K project might cost $20K–$60K. The resistance — "why should I pay to be told what I want?" — is intuitive but financially backwards.
The alternative is to skip discovery and start the $500K project with the same level of uncertainty. At the failure rates documented in the CHAOS reports, the expected cost of that uncertainty materializes as overruns, rework, or outright failure at multiples of the discovery cost. Discovery is not overhead — it is the cheapest risk reduction available.
You wouldn't commission a $500K building extension without paying an architect for design and planning first. The architect's fee is a small fraction of the construction cost, but it converts uncertainty into plans before the heavy cost begins. Discovery is the architect's phase for software.
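The expected-value argument can be made explicit. A back-of-the-envelope sketch; the probabilities and the 1.5x overrun multiple are illustrative assumptions, not CHAOS figures:

```python
def expected_overrun(budget: float, p_overrun: float,
                     overrun_multiple: float) -> float:
    """Expected cost of uncertainty: probability of a major overrun times its size."""
    return p_overrun * budget * (overrun_multiple - 1)

# Hypothetical, deliberately conservative inputs for a $500K project:
# without discovery, assume a 30% chance the project runs 1.5x its budget.
without_discovery = expected_overrun(500_000, 0.30, 1.5)        # $75K expected

# If a $30K discovery phase cuts that probability to 10%, the total
# expected cost falls even after paying for discovery.
with_discovery = 30_000 + expected_overrun(500_000, 0.10, 1.5)  # $55K expected
```

Under these assumptions discovery pays for itself before counting its other benefits: avoided rework, the option to kill a doomed project early, and a narrower cone for the main estimate.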
The inception deck
Jonathan Rasmusson's inception deck (The Agile Samurai, 2010) is the most practical tool for the alignment work discovery requires: ten structured exercises that surface disagreements before development begins. The most valuable single exercise is the NOT list — what is explicitly out of scope. What isn't listed can be disputed. What is listed cannot. This one exercise, done honestly, prevents most scope disputes before they start.
Buyer and vendor who cannot complete the inception deck honestly — who disagree on the NOT list, who can't agree on a shared elevator pitch — should not start the project. The disagreements that would have become scope disputes have been surfaced when they're still cheap to resolve.
VI. What Good Looks Like
Good custom software engagements are recognizable by the quality of information flow. Problems surface early. Misalignments are caught at 2-week intervals, not 12-month intervals. Both parties can describe the current state of the project honestly. Neither party has a structural incentive to hide bad news.
Good vendor behavior in practice
Raises problems early, with options. Not "we've run into a complexity" but "we've found an integration issue that will add 3 days to the sprint; here are two ways to handle it." A vendor under fixed-price terms has every incentive not to raise this until it's unavoidable. A T&M vendor whose relationship depends on trust has every incentive to surface it immediately.
Pushes back on bad ideas. "You've asked us to add a feature that would require rewriting the authentication module. We think that's the wrong call for now — here's why." A vendor who always builds what you ask without raising concerns is not serving your interests. They are avoiding a difficult conversation. Good vendors have difficult conversations.
Admits uncertainty. "We don't know yet how long this will take. We'll know more in about 3 days. We didn't want to wait until next week to tell you." This is only possible when admitting uncertainty doesn't trigger a cost dispute.
Brings domain knowledge to the problem. "We've built similar systems and the approach you've outlined usually creates problems at this scale. Here's what we've seen work." This is the vendor who has understood your problem, not just your specification.
Good buyer behavior in practice
Makes decisions quickly. When the vendor asks for input, you respond within a day or two. Slow decisions don't just delay one item — they block everything that depends on it.
Names a real product owner. Not a coordinator. Someone who actually understands the problem the software is supposed to solve, has authority to make scope decisions, and is available to the development team during the week.
Gives honest demo feedback. Not "looks good" when it doesn't. The buyer who is polite at demos and then raises issues at UAT has deferred feedback to the most expensive moment.
Accepts that requirements will change, and says so. The buyer who has internalized that the spec is a hypothesis can say "I've been looking at what you built and I've changed my mind about how this should work" without shame. This is the job, not a failure.
The test at any point
At any stage of a well-run engagement, you should be able to answer four questions:
What was built last week? Is it working? What will be built next week? Is the project on track to be useful within the remaining budget?
If you can answer all four, information is flowing, decisions are being made in time, and the project has a steering mechanism. If you cannot — if you need to ask the vendor and wait for a report that interprets the project state for you — you don't have the visibility that shared risk requires.
The UK Government Digital Service model
The UK Government Digital Service, formed after a series of catastrophic large government IT failures, built the discovery-first, iterative-delivery model into public procurement. The GDS Service Standard requires services to pass through Discovery, Alpha, Beta, and Live phases before receiving full funding commitment. The explicit logic: "You can't know if a solution is right until you've tested it with real users."
Discovery produces understanding, not software. Alpha prototypes approaches and tests them with real users. Beta builds the service and opens it to actual users. Off-ramps exist at each phase: projects can and should be killed if the evidence doesn't support proceeding. This is shared risk built into procurement process at scale.
VII. The Experience
You sign the contract with a mix of excitement and relief. After weeks of proposals and negotiations, you've locked in a price and a timeline. The kickoff meeting is full of optimism. The first few weeks of development are uneventful — the early parts of most projects are the parts where the requirements were clearest.
Then the updates thin out. The fortnightly calls become monthly. You log in to the demo link and the prototype is incomplete — buttons that don't respond, data that doesn't load, a core workflow mocked with static images. You flag it politely. Radio silence for 48 hours, then: "Absolutely, prioritizing that now." The milestone date passes. You extend goodwill.
A particular sickness sets in around the fourth invoice. It is not the sharp panic of a missed deadline but a dull, metabolic ache. The meetings have developed a ritualistic quality. The project manager speaks in serene, bullet-pointed assurance. "We're tracking to plan." "A few complexities, but nothing out of scope." You hear the words. Your body hears something else — the slight pause before "complexities," the way "tracking to plan" has become a mantra, devoid of the early excitement about the plan itself.
The core of the asymmetry: they control the framework of knowing. You have full visibility into your own side — your business needs, your users, your ROI calculations. Their world is a black box. You request read-only access to the repository. "Security policy." You ask for sprint burndowns. "We'll summarize in the weekly report."
You are presented with symptoms, never the disease. The symptom is a two-week delay for "environment stabilization." The disease is a foundational architectural gamble that failed. The symptom is a request for "clarification" on a requirement you thought was crystalline. The disease is that the requirement, as written, is impossible to implement with the chosen technical approach — and admitting this would require a change order conversation. So the language softens. The solid ground of the specification turns to mud. Your insistence on clarity starts to feel, even to you, like nagging.
You double down on good faith. You fly in for an on-site. You clarify requirements again. You approve minor scope adds to "keep momentum." The contract doesn't bend back. The "milestone 3 delivery" arrives as a zip file. It crashes on the first user load. You log fifty issues. Their reply: "Out of scope — performance testing wasn't included."
The most profound loneliness is within your own organization. You must translate this dread into status updates for leadership. You polish the vendor's vagueness into something resembling progress. You lie to protect your own judgment, because you were the one who championed this firm, this contract. To raise a red flag is to indict your past self. So you double down on faith.
You thought partnership; they saw transaction. The betrayal stings not as malice, but as indifference.
The other kind
A different first conversation. Before any contract is signed.
The vendor asks questions that don't sound like software questions. What does a successful Monday morning look like for your team? If you could fix one thing about how information moves through your organization, what would have the highest impact? You find yourself describing things you know well but have never articulated: the friction points, the reports everyone knows are wrong but everyone uses anyway. The vendor is looking for the actual problem underneath the stated problem. This is only possible before anyone has committed to building anything.
In the shared-risk engagement, status reports are replaced by working sessions. You are not informed of a delay — you are shown the tangled code, the two possible paths forward. "We're stuck here. Option A is a quicker fix but might cause pain later. Option B is cleaner but adds three days. What's more important to you right now?"
The asymmetry evaporates. The problem is on the table for both parties to see. Your role shifts from auditor to co-solver. Your domain knowledge — the why of the business — is mined to help solve the how of the technology. You are useful.
The financial model codifies this psychology. Seeing a weekly invoice for time is nerve-wracking at first. You are paying for effort, not completion. But this transparency forces healthy prioritization. Every invoice asks a silent question: was our time this week well-spent, from your perspective? You find yourself protecting the team's focus, killing pet features, clarifying goals — because you viscerally understand that wasted time is shared loss.
When the pivot comes — when the first working slice shows your original hypothesis was wrong — it is not a crisis. It's the point. In a fixed-price engagement, this moment is catastrophic: a scope violation, a reckoning. Here, the conversation is: "The user data is telling us they need X. Do we want to change direction?" There is no blame, because there was no guaranteed endpoint to deviate from. There is only learning and a shared commitment to applying it.
When you launch, you feel a quiet, deep ownership. You did not accept delivery of a product; you participated in building a solution. The team celebrates with you, and the celebration is genuine, because their success was inextricably linked to yours from the beginning.
VIII. Questions to Ask Before You Sign
Most buyers at RFP stage ask: what will it cost, and when will it be done? These are the wrong questions to lead with when the scope is not fully defined.
Questions for any vendor
"Tell me about a project where the requirements changed significantly after you started. How did you handle it?"
"How do you price uncertainty? What happens if it takes longer than you estimated?"
"Can we do a paid discovery phase before committing to the full project?"
"How often will I see working software? What does a sprint demo look like?"
"What are the top three things that could make this project fail, and what would you do about each?"
"Can I speak with a client whose project went over budget or over time? What happened?"
Warning signs in a vendor's bid
Fixed-price bid on unclear or complex scope: signals naivety or concealed risk.
Long, detailed client responsibilities section: the contract is structured to make failure your fault.
Vague discovery phase with no concrete deliverable: charging to think without committing to a conclusion.
Bid significantly below all other competitive bids: underbid to win; change orders to recover.
Spec-based acceptance criteria only: the vendor's job ends when the software matches the spec, not when it works for you.
Warning signs in your own behavior
"We know exactly what we want, just build it" — closing off the feedback mechanism.
Handing over a requirements document and disengaging — removing the person who knows what success looks like.
Expecting a precise cost for uncertain work — creating pressure for vendors to fake confidence.
Refusing to iterate: "show me the finished product, not drafts" — structuring the project for maximum risk.
The vendor who offers a discovery phase and can describe a clear process for it is a different kind of vendor from the one who presents a confident SoW with a fixed number. The first is managing uncertainty. The second is hiding it.
References
| Source | Relevance |
|---|---|
| Standish Group CHAOS Reports (1994–present) | The foundational empirical record on custom software project outcomes. Directional findings robust; specific percentages vary by edition. |
| McConnell — Software Estimation (2006) | Cone of uncertainty; reference class forecasting; the planning fallacy in software. |
| DeMarco & Lister — Waltzing with Bears (2003) | Risk management in software projects; why negotiating estimates down doesn't reduce actual duration. |
| Brooks — The Mythical Man-Month (1975/1995) | "The hardest single part of building a software system is deciding precisely what to build." |
| Rittel & Webber — "Dilemmas in a General Theory of Planning" (1973) | Wicked problems: problems not fully understood until the solution is being built. |
| Flyvbjerg — Megaprojects and Risk (2003) | IT project overruns and strategic misrepresentation; optimism bias in planning. |
| Rasmusson — The Agile Samurai (2010) | The inception deck: ten exercises for creating shared understanding before development begins. |
| UK Government Digital Service — Service Standard (public) | Discovery → Alpha → Beta → Live phases with explicit off-ramps; the most documented large-scale buyer-side reform in software procurement. |
| Cohn — Agile Estimating and Planning (2005) | Iterative delivery, backlog management, and incremental commitment. |
| Kahneman — Thinking, Fast and Slow (2011) | Planning fallacy; optimism bias; the psychological mechanisms behind systematic underestimation. |