The only certainty is that nothing is certain. Pliny the Elder, Roman scholar, 23-79 CE
Our culture encodes a strong bias either to neglect or ignore variation. We tend to focus instead on measures of central tendency, and as a result we make some terrible mistakes, often with considerable practical import. Stephen Jay Gould, naturalist, 1941-2002
This post establishes the importance of sustainable estimating techniques. These techniques can be incorporated into risk management and business case estimating as well as scheduling within sustainable change delivery. This blog post is part of a series that provides the foundation for understanding sustainable change delivery.
The Risk of Usurping the Sponsor’s Authority
I frequently see situations where project teams provide the estimated duration, resources and cost to complete a project as a series of single numbers (e.g. six months, five resources and $500,000). In reality, each of these numbers represents a range of possible values, each with its own probability of being met.
In many industries and organizations, sponsors are not even advised how a change in the estimate would affect that probability… if indeed they know what the probability of success actually is. In short, sponsors are often not provided with the full range of choices in project estimates. As a result, project teams usurp the decision maker’s authority to decide how much risk to take on and what probability of success or failure to accept.
Giving decision makers a complete picture of estimate uncertainties is vital to effective planning. Telling your sponsor that the project will be done in six months, with five resources and $500,000, while omitting that there is, say, only a 40% chance of success, leaves out important information and usurps the sponsor’s right to manage risk. This post explains how to deal with this.
Understanding the Dangers of “The Flaw of Averages”
The Flaw of Averages states: plans based on average assumptions are wrong on average (Savage, 2012, p. 3). A classic example involves the statistician who drowned crossing a river that was, on average, three feet deep, as shown in Exhibit 1.
This is the classic problem with averages, expected values and other measures of central tendency. Here is another simple example.
“Imagine that you and your spouse have an invitation to a ritzy reception with a bunch of VIPs. You must leave home by 6 p.m. or risk being late. Although you work in different parts of town, each of your average commute times is 30 minutes.
So if you both depart work at 5:30, then you should have at least a 50/50 chance of leaving home together for the reception by 6 o’clock. Suppose there really is a 50/50 chance that each of you will make it home by 6:00. Then the trip is like a coin toss in which heads is equivalent to arriving by 6:00 and tails to arriving after 6:00. Four things can happen:
- Heads/tails: You are home by 6:00 but your spouse isn’t.
- Tails/heads: Your spouse is home by 6:00 but you aren’t.
- Tails/tails: Neither of you is home by 6:00.
- Heads/heads: Both of you are home by 6:00.
The only way you can leave by 6:00 is if you flip two heads, for which there is only one chance in four” (Savage, 2012, p. 2).
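The arithmetic above can be checked with a quick simulation. The sketch below is illustrative only: it assumes (an assumption not in the original) normally distributed commute times with a 30-minute median, so each commute independently has a 50/50 chance of taking 30 minutes or less.

```python
import random

random.seed(42)

TRIALS = 100_000
both_on_time = 0
for _ in range(TRIALS):
    # Each commute: 30-minute median (illustrative normal distribution)
    you = random.gauss(30, 10)
    spouse = random.gauss(30, 10)
    # Leaving work at 5:30, a commute of <= 30 minutes gets you home by 6:00
    if you <= 30 and spouse <= 30:
        both_on_time += 1

print(f"P(both home by 6:00) ≈ {both_on_time / TRIALS:.2f}")  # roughly 0.25, not 0.5
```

The average commute is 30 minutes, yet the chance of leaving together on time is only about one in four, exactly the two-coin-toss result.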
Before getting into the specifics, here are some useful perspectives on why estimates of tasks, impact and probability should be evaluated as ranges rather than point values, given their inherent uncertainty.
- Assumptions (point value):
- A best-case or ideal value (no uncertainty) provides a fixed number (e.g. exactly one day for a meeting).
- However, in reality:
- Most activities in project schedules are represented as exact values, even though exact values are almost never known!
- Or worse, they are given a single number which is perceived as an average!
Unfortunately, averages on their own very seldom provide value.
When average-based estimates are combined, the error compounds dramatically. If ten parallel tasks are each estimated at nine months on average, the project as a whole will run over nine months almost every time. “Imagine a project that requires 10 separate tasks to be developed in parallel. The time to complete each task is uncertain and independent, but known to average 9 months, with a 50 percent chance of being over or under. It is tempting to estimate the average completion time of the entire project as 9 months.
For the project to come in at 9 months or less, each of the 10 tasks must be completed at or below its average duration. The chance of this is the same as flipping 10 sequential heads with a fair coin, or less than one in a thousand!” (Savage, 2012).
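The same logic can be simulated. The sketch below assumes, purely for illustration, that each task's duration is uniform between 6 and 12 months (average 9, with a 50/50 chance of being over or under); because the tasks run in parallel, the project finishes only when the slowest one does.

```python
import random

random.seed(1)

TRIALS = 100_000
N_TASKS = 10
on_time = 0
for _ in range(TRIALS):
    # Each task averages 9 months; modeled as uniform 6-12 (illustrative)
    durations = [random.uniform(6, 12) for _ in range(N_TASKS)]
    # Parallel tasks: the project finishes when the LAST task does
    if max(durations) <= 9:
        on_time += 1

print(f"P(project ≤ 9 months) ≈ {on_time / TRIALS:.4f}")  # ≈ (1/2)**10, about 0.001
```

The "average" plan of 9 months is met less than one time in a thousand, even though every individual task is estimated perfectly on average.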
The best scenario for estimators (both for scheduling and risk) is to provide ranges.
- No assumptions (ranges):
- Most things we DO know are better represented by ranges and probabilities – we don’t have to assume anything we don’t really know.
- This is expressed as a “threshold confidence”.
What is required is a range, a probability of success and some form of distribution.
The first step is identifying the work or product breakdown structure, often organized by work packages. For each activity, a subject matter expert should provide an appropriately detailed description and then work on the estimated duration. Where appropriate, calibrated experts should provide ranges along with distributions, which can then be run through Monte Carlo or similar simulation software. The following is a modification of the Humphrey approach to earned value project management, from project initiation through to the integrated baseline plan. The oval entries represent our recommended modifications to provide sustainable estimations.
Estimator Calibration Training
To get appropriate activity duration estimates, though, people have to be trained to provide calibrated estimates. For all estimates, teach subject matter experts and risk owners to provide an upper and lower bound that they are 90% certain contains the correct answer. This training normally takes only a couple of hours. The challenge is that decades of studies show that most estimators are statistically “overconfident” when assessing their own uncertainty. Curiously, studies have shown that bookies were great at assessing odds subjectively, while doctors were terrible, as were young professionals just out of school.
Fortunately studies also have shown that measuring your own uncertainty about a quantity is a general skill that can be taught with a measurable improvement. Training can “calibrate” people so that of all the times they say they are 90% confident, they will be right 90% of the time.
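Scoring a calibration exercise is straightforward: ask a batch of 90% confidence-interval questions and count how often each person's stated interval contains the true value. A minimal sketch, using a hypothetical quiz (the questions and bounds below are invented for illustration):

```python
# Each row is (lower_bound, upper_bound, true_value) for one
# 90%-confidence-interval question on a practice quiz.
answers = [
    (1900, 1950, 1912),   # year the Titanic sank
    (5000, 9000, 8849),   # height of Mt. Everest in metres
    (30, 60, 55),         # hypothetical trivia value
    (100, 300, 340),      # a miss: true value falls outside the interval
]

# A calibrated estimator's 90% intervals should contain the truth ~90% of the time
hits = sum(lo <= true <= hi for lo, hi, true in answers)
hit_rate = hits / len(answers)
print(f"Claimed 90% confidence, actual hit rate: {hit_rate:.0%}")  # prints 75%
```

An uncalibrated (overconfident) estimator typically scores well below 90%; training consists of repeating such quizzes, with feedback, until the hit rate converges on the stated confidence.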
Douglas Hubbard’s book The Failure of Risk Management describes calibration training, and Hubbard also offers a course on the topic. The next two graphs come from his research on calibration. He discovered that most people, especially educated professionals, are significantly overconfident in their estimates:
However, as indicated in both the previous and following graph, people can be trained to provide better calibrated estimates to the 90% level:
Unfortunately, almost nobody uses these methods.
Calibrated Estimating – Calibration Aids
Here are some questions to assess your estimate:
- “The Equivalent Bet”: for 90% Confidence Interval questions, which would you rather have?
- A: Win $1,000 if your interval contains the correct answer
- B: A 90% chance to win $1,000
- Think of a couple of pros and cons for your range. How could it be wrong? Why do you think it is right?
- Being “right”. Are you focusing on being “right” instead of realistically representing your uncertainty?
- Are you hanging on to traditional expectations of “+/- 10%” or similar narrow ranges? Are you resisting wider ranges because you think they are “too wide”?
- Equivalent bet. Are you actually trying an equivalent bet on each activity?
- Try ways to avoid “anchoring”. Don’t think of one number then add and subtract an error. Instead, treat each bound as a separate binary question (e.g. are you 95% certain the value is less than the upper bound?)
Here are some other useful techniques for estimating workshops:
- Control the “Story Teller”: There is often a strong temptation to explain in detail complicating factors, exceptions, historical background, etc. Resist the temptation to explain in detail why you have uncertainty. It’s a given that you have it. Provide the range.
- Remind them to not assume wide ranges are useless: If it represents your uncertainty fairly, that’s the range we want. Whether that range needs to be narrowed is another step.
- Resist “Infinite Decomposition”: You can always compute a value based on other more detailed values but at some point you have to just provide a range.
- Avoid group-think: There is a tendency for individuals in a group to adopt the confidence of the people who talk the most. Prompt the quiet to speak. With some groups, individual estimates obtained by email are best.
- Remind them that they are calibrated: Their skill at assessing odds has been proven quantitatively.
- Redirect!: When the “explanations” seem to go on too long and they avoid the estimate, prompt them with a question – “Is it more than X?, Less than Y?”
Once the task has been identified and the range and probability worked out, the estimator needs to provide a distribution for the activity: an arrangement of the values in a duration range showing their theoretical frequency of occurrence. Take the low, most likely and high durations, and assign the distribution that makes the most sense. For example:
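A triangular distribution is one common, simple choice for a low / most likely / high estimate. A minimal sketch with illustrative values (the 4 / 6 / 12-day activity is invented, not from any real schedule):

```python
import random
import statistics

random.seed(7)

# Three-point estimate for one activity, in days (illustrative values)
low, likely, high = 4, 6, 12

# Sample the triangular distribution defined by the three-point estimate
samples = [random.triangular(low, high, likely) for _ in range(100_000)]

print(f"mean ≈ {statistics.mean(samples):.2f} days")  # (4 + 6 + 12) / 3 ≈ 7.33
print(f"P(done in ≤ 6 days) ≈ {sum(s <= 6 for s in samples) / len(samples):.2f}")
```

Note the right skew typical of duration estimates: the mean (about 7.3 days) sits well above the "most likely" 6 days, and there is only about a 25% chance of finishing in 6 days or less, which is exactly why quoting the single most likely number misleads the sponsor.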
Employ a Monte Carlo Simulation Tool
A Monte Carlo simulation is a computerized mathematical technique that allows project teams to account for risk in quantitative analysis and decision making. It furnishes the decision maker with a range of possible outcomes, and the probabilities with which they will occur, for any choice of action. It shows the extreme possibilities (the outcomes for the most conservative decision makers and for the riskiest) along with all possible consequences of middle-of-the-road decisions.
These can help guide project teams and sponsors on what options will provide the greatest chance of project success based on the scheduling model and actions input into the model.
Monte Carlo simulation performs risk analysis by building models of possible results by substituting a range of values—a probability distribution—for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values from the probability functions.
Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation could involve thousands or tens of thousands of recalculations before it is complete. Monte Carlo simulation produces distributions of possible outcome values.
By using probability distributions, variables can have different probabilities of different outcomes occurring. Probability distributions are a much more realistic way of describing uncertainty in variables of a risk analysis.
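The mechanics can be sketched in a few lines of Python. This toy model, three sequential activities with made-up three-point estimates, is an illustration of the technique, not any particular tool's algorithm:

```python
import random

random.seed(0)

# A tiny schedule model: three sequential activities, each with a
# (low, most likely, high) three-point estimate in days. Illustrative values.
activities = {
    "design": (10, 15, 30),
    "build":  (20, 30, 60),
    "test":   (5, 10, 25),
}

TRIALS = 20_000
totals = []
for _ in range(TRIALS):
    # One trial: draw a duration for each activity and sum (sequential schedule)
    total = sum(random.triangular(lo, hi, ml) for lo, ml, hi in activities.values())
    totals.append(total)

# Sorting the outcomes lets us read off completion dates at any confidence level
totals.sort()
for pct in (10, 50, 80, 90):
    print(f"P{pct}: {totals[int(TRIALS * pct / 100)]:.0f} days")
```

The output is exactly the kind of table the sponsor needs: not "the project takes X days" but "there is an 80% chance of finishing within Y days".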
The following is a very simplified series of images from a Monte Carlo tool called Risk+. Personally I prefer Barbecana’s Full Monte or Palisade’s @RISK.
The entry screen is normally fairly straightforward. One identifies the activity name, minimum effort, most likely effort, maximum effort, and the nature of the distribution curve:
Simplified examples of the distribution curve are listed below:
The option for the number of iterations helps provide a greater sample size and reduce sampling error. One must also choose which analysis option is desired; the options are described in the following images.
If the probability of achieving the schedule and costs does not align with the organization’s project constraints, such as the completion date or overall cost, are there activities with ranges that need to be removed, tightened, or analyzed further to reduce uncertainty and fit within the known constraints?
The following two images provide an example of potential completion dates and costs based on probability. The information for the sponsor is that the current plan has a 50% chance of meeting the February 20 20## date AND spending 598K or less AND delivering all of the project outputs… i.e. the probability of meeting or improving on that estimate.
The following is an example of potential completion costs based on probability:
The following is an example of a sensitivity analysis, which highlights the activity ranges that may require more research and refinement:
Sensitivity analysis reinforces the value of information, showing which activities require more research to improve understanding and estimates.
The project schedule is made up of estimated activities which define possible boundaries such as time and cost. However, projects normally have thresholds or constraints, such as completion by February 15 20## and spending less than 500K. If the project thresholds are NOT within the range of probable outcomes, there is a good chance that the project team and sponsor would make different decisions given better measurements. Giving decision makers a complete picture of estimate uncertainties (and the choices and impacts) is vital to effective planning.
GPM Global recommends conducting a P5 assessment, calibrating estimators, using ranges and distributions for the appropriate activities, and employing a Monte Carlo assessment tool to properly analyze the project schedule, costs and probabilities to make informed decisions about the project’s viability and success.
Alberts, C. J., Dorofee, A. J., Higuera, R., Murphy, R. L., Walker, J. A., & Williams, R. C. (1996). Continuous Risk Management Guidebook. http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=30856
APM Risk Management Specific Interest Group. (2010). Project Risk Analysis and Management Guide 2nd Edition. https://www.apm.org.uk/PRAMGuide
Association for Project Management. (2014). Project Risk Analysis and Management Guide, Second Edition. Retrieved September 21, 2015, from http://www.amazon.com/Project-Risk-Analysis-Management-Guide-ebook/dp/B00JJ0MSRK/ref=sr_1_1?s=books&ie=UTF8&qid=1442807872&sr=1-1&keywords=project+risk+analysis+and+management+guide
Bacon, R., & Hope, C. (2013). Conundrum: Why every government gets things wrong and what we can do about it by. Retrieved October 18, 2015, from http://www.amazon.com/gp/product/B00LLP1HK0?keywords=Conundrum%3A Why every government gets things wrong and what we can do about it&qid=1445198964&ref_=sr_1_2&sr=8-2
Baxter, Keith (2012). Risk Management: Fast Track to Success. Financial Times/ Prentice Hall Limited. http://www.amazon.com/Risk-Management-Fast-Track-Success-ebook/dp/B00A8N8I6C/ref=sr_1_1?s=books&ie=UTF8&qid=1442809841&sr=1-1&keywords=Risk+Management%3A+Fast+Track+to+Success
Jones, Capers (1994). Assessment and Control of Software Risks. Yourdon Press, Upper Saddle River, NJ, USA. http://www.amazon.com/Assessment-Control-Software-Risks-Capers/dp/0137414064/ref=sr_1_1?s=books&ie=UTF8&qid=1442808208&sr=1-1&keywords=Assessment+and+Control+of+Software+Risks
Sagan, Carl (1997). The Demon-Haunted World: Science as a Candle in the Dark. Ballantine Books. http://www.amazon.com/Demon-Haunted-World-Science-Candle-Dark/dp/0345409469/ref=sr_1_1?ie=UTF8&qid=1446422119&sr=8-1&keywords=The+Demon-Haunted+World%3A+Science+as+a+Candle+in+the+Dark
Connolly, T. & Arkes, H.R. & Hammond K.R. (1999). Judgment and Decision Making: An Interdisciplinary Reader (2nd ed.). Cambridge Series on Judgment and Decision Making. Cambridge University Press. http://www.amazon.com/Judgment-Decision-Making-Interdisciplinary-Cambridge/dp/0521626021/ref=sr_1_1?ie=UTF8&qid=1445222869&sr=8-1&keywords=Judgment+and+Decision+Making%3A+An+Interdisciplinary+Reader
Dallas, M. F. (2008). Value and Risk Management: A Guide to Best Practice. Retrieved September 21, 2015, from http://www.amazon.com/Value-Risk-Management-Guide-Practice-ebook/dp/B0014TS2IS/ref=sr_1_1?s=books&ie=UTF8&qid=1442809546&sr=1-1&keywords=value+%26+risk+management+a+guide+to+best+practice
Down, A., Coleman, M., & Absolon, P. (1994). Risk Management for Software Projects. Retrieved September 21, 2015, from http://www.amazon.com/Management-Software-Projects-Mcgraw-Hill-Hardcover/dp/B011YTH38W/ref=sr_1_2?s=books&ie=UTF8&qid=1442808709&sr=1-2&keywords=%22risk+management+for+software+projects%22
Hillson, D. (2007). Understanding and Managing Risk Attitude, Second Edition. Retrieved September 21, 2015, from http://www.amazon.com/Understanding-Managing-Risk-Attitude-Paperback/dp/B010EW7C7W/ref=sr_1_4?s=books&ie=UTF8&qid=1442810145&sr=1-4&keywords=understanding+and+managing+risk+attitude
Hillson, D., & Simon, P. (2012). Practical Risk Management: The ATOM Methodology, Second Edition. Retrieved September 21, 2015, from http://www.amazon.com/Practical-Risk-Management-Methodology-Second/dp/1567263666/ref=sr_1_1?s=books&ie=UTF8&qid=1442809066&sr=1-1&keywords=practical+project+risk+management+the+atom+methodology+2nd+edition
Hooker, Worthington (1849). Physician and Patient; Or, A Practical View of the Mutual Duties, Relations and Interests of the Medical Profession and the Community. Baker and Scribner. http://www.amazon.com/Physician-Patient-Practical-Relations-Profession/dp/B008UFAA6G/ref=sr_1_1?ie=UTF8&qid=1446421407&sr=8-1&keywords=Physician+and+Patient%3B+Or%2C+A+Practical+View+of+the+Mutual+Duties%2C+Relations+and+Interests+of+the+Medical+Profession+and+the+Community.
Hubbard, Douglas W. (2009). The Failure of Risk Management: Why It’s Broken and How to Fix It. Wiley. http://www.amazon.com/Failure-Risk-Management-Why-Broken/dp/0470387955/ref=sr_1_1?s=books&ie=UTF8&qid=1442807076&sr=1-1&keywords=the+failure+of+risk+management
Hubbard, Douglas W. (2014). How to Measure Anything: Finding the Value of Intangibles in Business, Third Edition. Wiley. http://www.amazon.com/How-Measure-Anything-Intangibles-Business/dp/1118539273/ref=sr_1_1?ie=UTF8&qid=1442806937&sr=8-1&keywords=How+to+Measure+Anything%3A+Finding+the+Value+of+Intangibles+in+Business
Kahneman, Daniel (2011). Thinking, Fast and Slow. Random House, Inc.. http://www.amazon.com/gp/product/0374533555?keywords=Thinking%2C%20Fast%20and%20Slow&qid=1445222667&ref_=sr_1_1&sr=8-1
Kendrick, T. (2015). Identifying and Managing Project Risk: Essential Tools for Failure-Proofing Your Project, Third Edition. Retrieved September 21, 2015, from http://www.amazon.com/Identifying-Managing-Project-Risk-Failure-Proofing/dp/0814436080/ref=sr_1_1?s=books&ie=UTF8&qid=1442809235&sr=1-1&keywords=%22identifying+and+managing+project+risk%22
Koletar, J. W. (2010). Rethinking Risk: How Companies Sabotage Themselves and What They Must Do Differently. Retrieved September 21, 2015, from http://www.amazon.com/Rethinking-Risk-Companies-Themselves-Differently/dp/B005B1LVF6
Markowitz, H. M. (1957, 1991, 1997). Portfolio Selection: Efficient Diversification of Investments, 2nd Edition. Blackwell Publishers, Inc., Malden, Mass. http://www.amazon.com/Portfolio-Selection-Efficient-Diversification-Investments/dp/1557861080/ref=sr_1_1?ie=UTF8&qid=1447198569&sr=8-1&keywords=Portfolio+Selection%3A+Efficient+Diversification+of+Investments
OGC – Office of Government Commerce (2012). Management of Risk: Guidance for Practitioners 2010 Edition, Third Edition. The Stationery Office (TSO). http://www.amazon.com/Management-Risk-Guidance-Practitioners-3rd/dp/0113312741/ref=sr_1_1?ie=UTF8&qid=1442806817&sr=8-1&keywords=management+of+risk
OGC – Office of Government Commerce (2009). Managing Successful Projects with PRINCE2™ 2009 Edition. http://www.amazon.com/gp/product/0113310595?keywords=prince2%202009&qid=1445050419&ref_=sr_1_1&sr=8-1
Reuvid, J. (2014). Managing Business Risk: A Practical Guide to Protecting Your Business, Tenth Edition. Retrieved September 21, 2015, from http://www.amazon.com/Managing-Business-Risk-Practical-Protecting/dp/0749470437/ref=sr_1_1?s=books&ie=UTF8&qid=1442809984&sr=1-1&keywords=managing+business+risk+a+practical+guide+to+protecting+your+business
Savage, Sam L. (2012). The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty. Wiley. http://www.amazon.com/Flaw-Averages-Underestimate-Risk-Uncertainty/dp/1118073754/ref=sr_1_1?ie=UTF8&qid=1442807005&sr=8-1&keywords=The+Flaw+of+Averages
Savage, Sam L. (2012). The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty. Wiley. http://flawofaverages.com/
Savage, Sam L., “The Flaw of Averages,” Harvard Business Review, November 2002. http://www.ideafinder.com/history/inventions/story074.htm
Sharpe, W. F. (1964). Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. The Journal of Finance, 19(3), pages 425–442. http://www.jstor.org/stable/2977928
Taleb, Nassim Nicholas (2010). The Black Swan: The Impact of the Highly Improbable, Second Edition (Incerto). Random House Publishing Group. http://www.amazon.com/Black-Swan-Improbable-Robustness-Fragility/dp/081297381X/ref=sr_1_2?ie=UTF8&qid=1445222701&sr=8-2&keywords=black+swan
Thibault, J. M. (2013). Calculating Uncertainty – Calculating Uncertainty. http://www.smpro.ca/sipmath/Calculating Uncertainty John Marc Thibault.pdf
Thibault, J. M. (2012). The Art of the Plan: Requirements, Models, and Probability Management. https://www.createspace.com/3966560
Westerman, G., & Hunter, R. (2007). IT Risk: Turning Business Threats into Competitive Advantage. http://www.amazon.com/Risk-Turning-Business-Competitive-Advantage/dp/1422106667/ref=sr_1_1?s=books&ie=UTF8&qid=1442808278&sr=1-1&keywords=it+risk+turning+business+threats+into+competitive+advantage