
“Forecasting the Future” is this month’s artistic interpretation by Paul Frantz of the Ask the Experts discussion. How accurate are you at predicting the course of your own life? Looking back, the uncertainties of your future would have been better described by ranges than by point estimates. The future is, at best, a wide interstate highway, not a narrow path.


 

This month’s Ask the Experts is a one-on-one lightning round with Gustavo Vinueza. Gustavo has extensive, international experience in forecasting, uncertainty analysis, and simulations. As a longtime portfolio manager, custom solutions leader, and technical support specialist for Palisade Corporation, makers of the DecisionTools Suite and @Risk, Gustavo has worked for many years with a wide variety of companies and their executives in the decision-making space, helping to anticipate trends and borderline situations. He is a frequent speaker in forums on several continents related to uncertainty and simulation. Gustavo has an undergraduate degree in systems engineering and master’s degrees in both business administration and finance.

We were pleased to have Seth Robertson moderate the conversation. Seth is an accomplished engineer, project manager, and program administrator who currently leads the finance & asset management practice for a Raleigh-based consulting firm. As the former state section chief of the entity that provides infrastructure grants and loans to units of local government, Seth has experienced the good and the bad of forecasts developed by owners and their consultants.

Forecasting is an expansive topic that is essential to solving important and unusual problems. Several important themes emerged from the session and follow-up discussions. First, there is more than one method for developing forecasts, and we often spend too much time arguing about the best one. Second, Monte Carlo analysis is a cheap, fast, and simple method for gaining important insights. Third, uncertainty should be embraced because it is reality. And fourth, we do not do enough back-testing and post-mortems of the forecasts we make.

How important is understanding “cause and effect” prior to building a model to forecast the future? Can we simply look for trends in a past data set?

JD: All good models start from understanding the handful of primary causes that drive an answer.

GV: Cause and effect is fundamental. A quick victory is to start with a detailed examination of your data to see if it makes sense.

JD: From the school of hard knocks, I have had my share of experience in diving into the model and the forecast too quickly under the assumption that the data was valid. Later I found out that the data was not valid or that we did not have the data that supported cause and effect.

GV: A practical tip is to examine the data with a correlation analysis. You will gain some insights as to whether the data trends in a manner consistent with your understanding of cause and effect.

What is the toughest part of forecasting an uncertain future?

GV: Explaining the results to the customer.

JD: I expected you to say something about the data or technical analysis. That answer really surprises me.

GV: Validating the models would be my second toughest part, but it is a distant second. The first one is the one JD has mastered – explaining to the customer.

JD: For me, the toughest part is validation because we often do not have the money on the back end to validate results or to validate that the inputs we modeled are truly causal. We already mentioned the next aspect I find toughest, which is understanding cause and effect before we build our model.

Is using point estimates for input parameters the best way to do forecasting?

JD: No, we tend to miss widely when we use single-point estimates to forecast.

GV: To me, the point estimate is best for what we call version zero. Maybe it is a start, but you must improve it.

JD: With that said, I do not think Monte Carlo analysis is relevant for every forecast. It really depends on the context.

GV: For some types of problems, you may be in the ballpark with high, medium, and low estimates of input parameters. Forecasts based on a single point estimate for each parameter are science fiction.

What is the biggest advantage of using probabilistic analysis, such as Monte Carlo analysis, for forecasting?

GV: In a simple way it allows you to explore uncertainty. It forces you to start thinking in ranges.

JD: Thinking in ranges and explaining in ranges. In other words, you can shift the conversation from “can you make this budget” to “if you give me that much budget, this is our chance of success.”

GV: It’s a little bit of an argument with people who are very mathematical and like big equations, but Monte Carlo gives us simple models in minutes rather than weeks or months.

JD: That may be good for the modeler but what about the decision-maker?

GV: It is a tricky question. If we simplify too much, then we underfit and our forecasts are bad. We have to have some standards established up front as to what is acceptable and what is not. And that is true for any type of model.
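JD’s “chance of success for a given budget” framing can be illustrated with a very small simulation. This is a hedged sketch, not anyone’s production model: the line items, ranges, and budget below are invented for illustration.

```python
# Monte Carlo in minutes: simulate total project cost from three
# uncertain line items and report the chance it fits the budget.
import random

random.seed(42)
N = 100_000
budget = 12.0  # $M, hypothetical

successes = 0
for _ in range(N):
    design = random.triangular(1.0, 3.0, 1.5)        # low, high, mode ($M)
    construction = random.triangular(6.0, 11.0, 8.0)
    contingency = random.uniform(0.5, 1.5)
    if design + construction + contingency <= budget:
        successes += 1

print(f"P(total cost <= ${budget}M) = {successes / N:.0%}")
```

The output is a probability, not a single number, which is exactly the shift in conversation described above.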

What is the biggest disadvantage of Monte Carlo analysis?

JD: Mathematically and statistically, people get wrapped around the axle and worry too much about where and how the distributions came from. It is important to get started and make sure you add more examination as you gain initial insights.

GV: Not paying attention to the distributions is a recipe for disaster. It is so easy to be wrong and the intuitions will look good enough. People can go for years and not realize they are wrong.

JD: My second disadvantage of Monte Carlo is continually having to get people to understand that it is better, cheaper, and faster than it used to be. There should not be a cloak of mystery on it anymore.

GV: Monte Carlo can be like having a hammer. The analyst must apply the controls. Monte Carlo does not have a built-in control.

How important is it to select the correct distribution for input parameters?

GV: It is only one part of the model. Get the model right first and then you can fine tune the model. The distribution should not drive the model.

JD: I agree with starting simple and building complexity. At the same time, you can’t “forget” to re-examine or improve the initial distribution.

GV: If you have some past data, use the distribution fitting that is in the software. There are few cases where you have all of the data you want or think you need. It is important to get started.

JD: Yes, get started and gain insights.
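The interview refers to the distribution-fitting tools built into the software; as a plain-Python stand-in, here is the simplest possible “fit”: estimate a normal distribution’s parameters from past data and sanity check the result. The sample data is made up for illustration.

```python
# Fit a normal distribution to past data by matching mean and standard
# deviation, then do a quick sanity check on the fit.
from statistics import NormalDist, fmean, pstdev

past_demand = [102, 97, 110, 105, 99, 108, 101, 95, 112, 104]

fitted = NormalDist(mu=fmean(past_demand), sigma=pstdev(past_demand))
print(f"fitted mean = {fitted.mean:.1f}, sd = {fitted.stdev:.1f}")

# Roughly half the data should fall below the fitted mean; a large
# deviation suggests the data is skewed and another shape fits better.
below = sum(x < fitted.mean for x in past_demand) / len(past_demand)
print(f"fraction of data below fitted mean: {below:.0%}")
```

Dedicated fitting tools compare many candidate distributions and rank them by goodness of fit, but the point stands either way: use the data you have and get started.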

What is the most overlooked value of forecasting using Monte Carlo analysis?

JD: You do not have to have perfect data. As we were just discussing, I can use the data I have and some expert elicitation to get a forecast and gain some insights. When we use single-point estimates, we feel like we have to have perfect data, a lot of it, and it takes a long time to get started.

GV: Mine is the value of correlations. Correlation creates the right tails in the model.

JD: Agree and getting the tails right is something other approaches can’t easily provide. It reminds me that Monte Carlo is a good complement to other methods. Although I prefer Monte Carlo as a primary tool, I often do not argue about it and simply use it as a triangulation tool for other forecast approaches.

GV: Monte Carlo is a good tool that provides good insights. One of its overlooked values is that it should help us think in ranges and help us to appreciate other approaches.
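Gustavo’s point about correlation and tails can be demonstrated directly. In this sketch, two cost items are simulated once as independent and once as perfectly correlated; the totals share the same mean, but correlation fattens the upper tail. All numbers are illustrative.

```python
# Same mean, different tails: independent vs. correlated cost items.
import random

random.seed(7)
N = 50_000

independent, correlated = [], []
for _ in range(N):
    u1, u2 = random.gauss(0, 1), random.gauss(0, 1)
    # Two items, each roughly Normal(10, 2) in $M
    independent.append((10 + 2 * u1) + (10 + 2 * u2))
    correlated.append((10 + 2 * u1) + (10 + 2 * u1))  # rho = 1

def p95(xs):
    """95th percentile by simple order statistic."""
    return sorted(xs)[int(0.95 * len(xs))]

print(f"P95 independent: {p95(independent):.1f}")
print(f"P95 correlated:  {p95(correlated):.1f}")
```

Ignoring correlation understates the bad-case outcomes, which is often exactly the part of the forecast the decision-maker cares about.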

Both of you have worked in many different fields. What has surprised you most when it comes to forecasting?

GV: Many times simpler is better. Sometimes simpler is as much as you need. Even simpler than Monte Carlo.

JD: Most organizations do not use all of the data they have. They either don’t look for it or they think it’s not worth the effort.

GV: I have seen that in some cases people don’t have the time. Regretfully, sometimes they are just lazy.

JD: The other thing that has surprised me is the number of organizations that think they can just buy some software and good forecasts will just follow. This has generally gotten worse over the course of my career.

What is the best single tip that you would give someone who is preparing to lead a forecasting effort?

GV: Explore and understand the variables. Print them, graph them, and visualize them.

JD: I agree. Another key is to get enough different perspectives involved - finance, engineering, operations, maintenance, safety & security, and customer service. It helps us understand cause and effect as well as the different sources of the data that are available.

GV: One of the worst mistakes we have made is not getting the right people involved early. They got involved late, and we ended up being questioned on why we used some data and not others.

JD: I have had the same experience. Not just with data. Also, with methodology.

What else bothers you about forecasting in practice?

GV: So many people don’t do back-testing of the forecasts. It is really important to look back after a year and see how good the model is.

JD: I wholeheartedly agree. We don’t do back-testing and post-mortems enough. Sometimes we do not have the time or the funds to do so. Sometimes we just do not want to know.

GV: That’s right. We can’t get better if we don’t know if our forecasts were good or bad.

JD: We do not learn either. Or make better decisions.
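The back-testing both experts call for does not need to be elaborate. A minimal sketch: a year later, compare what the model forecast against what actually happened, using an error standard agreed up front. The numbers below are invented.

```python
# One-year back-test: compare forecast to observed and score the model.
forecast = [120, 135, 150, 160]   # quarterly forecast made last year
actual   = [118, 142, 139, 171]   # what was actually observed

# MAPE: mean absolute percentage error across the periods.
errors = [abs(f - a) / a for f, a in zip(forecast, actual)]
mape = sum(errors) / len(errors)
print(f"MAPE = {mape:.1%}")

# An up-front standard (say, MAPE under 10%) turns "was the forecast
# good?" into a yes/no answer the whole team can act on.
print("within tolerance" if mape < 0.10 else "revisit the model")
```

Even this simple score, recorded forecast after forecast, is enough to learn whether the models are getting better or worse.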

 

Seth’s Sidebars

1. The biggest issue I have experienced is the ongoing confusion of correlation and causation.

2. The tendency is to throw out the forecasting effort as meaningless when the results deviate from the prediction.

3. The value of Monte Carlo analysis is getting away from THE result and focusing on the range of results.

4. For better or worse, most people probably know of Monte Carlo simulations through the popular writing of Nassim Taleb and their use in finance. Monte Carlo is bigger than that.

  5. If you wait until you have all the information, it is too late. Better to get started and not hesitate to make changes as more data is obtained.




Good milestone reviews analyze the parts of a project but should not make team members feel like they were the subjects of an autopsy.

Project managers and their organizations are challenged by the necessary milestone reviews that are part of every program or large project. Milestone reviews analyze the pieces of the whole to determine what has happened, but a milestone review should not make team members feel like they were the ones on the slab.


Anatomy of an Autopsy

An autopsy is the examination of a body after life is over. The aim of a post-mortem is to determine the cause of death. It starts with a large, deep, Y-shaped incision that is made from shoulder to shoulder meeting at the breastbone and extending all the way down to the pubic bone.


The next step is to peel back the skin, muscles, and soft tissue. The chest flap is pulled up over the face, exposing the rib cage and neck muscles. Two cuts are made on each side of the rib cage. The rib cage is pulled from the skeleton after dissecting the tissue behind it. The organs are either put back into the body or incinerated. The chest flaps are closed and sewn back together. The skull cap is put back in place and held there by closing and sewing the scalp.


The funeral home is then contacted to pick up the deceased.


Your Most Recent Milestone Review

Does an autopsy sound like your most recent project milestone review? Did someone feel like they needed to go to the funeral home after it was over?


It is no wonder that we often do them poorly or not at all. Like the autopsy, they are not pleasant. Nevertheless, they are necessary.


Basics for a Better Milestone Review

Four things can make your project milestone reviews better.

  1. Do not do a milestone review on every task, just as we do not do an autopsy on every deceased person. Reserve milestone reviews for the time periods or collections of tasks that matter most.

  2. Approach the milestone review with a positive attitude. Just like an autopsy seeks the truth, view the milestone review as an opportunity to learn and improve.

  3. Seek an experienced facilitator. Autopsies are carried out by doctors who specialize in the nature and causes of disease (pathologists). Not everyone can or should do an autopsy. Like an autopsy, the quality and relevance of the milestone review will be directly related to who leads it.

  4. Allow ample time. An average autopsy case takes about four hours including the paperwork. That is about right for a milestone review too. Obviously, some reviews are more complicated and take all day.

 

JD Solomon Inc provides solutions for program development, asset management, and facilitation at the nexus of facilities, infrastructure, and the environment.




Spreadsheets are the most powerful and most used forecasting tool; however, 90% of all spreadsheets contain errors.

Proofread your spreadsheets. Time and again, project after project, we see this mistake when we do third-party reviews, and it invalidates the fancy theories and the hard data collection work. The most common mistake in forecasting is not adequately proofreading the spreadsheet.


Here are the three basic items for a proofreading checklist to help you do a better job.


Develop an influence diagram. An influence diagram is a visual display of a problem that depicts objectives, key elements, and dependencies. Influences among the different aspects are shown by connecting arrows. Yes, there are other preliminaries, such as a problem statement, boundary conditions, data verification, and choosing the right tool. Let the influence diagram keep it simple, concise, and visual.


Establish activities to validate the three levels of quality. More specifically, the model calculates results without crashing; the model calculations do what is intended (logic errors); and, the model is well constructed and easily transferable to someone else. There are many ways to address each of these categories. The most important issue is that the review activity includes at least one activity in each category.
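The three quality levels apply whether the model lives in a spreadsheet or in code. As a hedged illustration, here they are expressed as automated checks on a hypothetical one-line model (the function, inputs, and values are invented for this sketch).

```python
# Three levels of model quality, expressed as checks on a toy model.

def renewal_cost(asset_value, rate, years):
    """Projected renewal cost: asset value grown at a flat annual rate."""
    return asset_value * (1 + rate) ** years

# Level 1: the model calculates results without crashing.
result = renewal_cost(1_000_000, 0.03, 10)

# Level 2: the calculations do what is intended -- spot-check against
# a hand calculation and a boundary case (this is what catches logic errors).
assert renewal_cost(100, 0.0, 5) == 100            # zero growth: unchanged
assert round(renewal_cost(100, 0.10, 1), 2) == 110.0

# Level 3: well constructed and transferable -- named inputs, a
# docstring, and no magic numbers buried inside the formula.
print(f"10-year renewal cost: ${result:,.0f}")
```

In a spreadsheet, the same three levels translate to: the workbook recalculates without errors, key cells match hand calculations, and the layout is clean enough to hand to someone else.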


Specify good practices. The third aspect of the proofreading checklist is to include 5 to 10 items that an organization or analytical team agrees are most important. Eight potential candidates for good practices include:


  1. Include an input tab and a results tab

  2. Color code cells differently for inputs, outputs, and calculations

  3. In equations, use input names rather than cell references (Battleship language)

  4. Keep input names to one or two words

  5. Use lowercase letters in all equations

  6. Use a working model, and label the working model different from the master version

  7. Specify code review tools and approaches that are used for quality assurance

  8. Identify if (and where) legacy code was imported and utilized

 

JD Solomon, Inc provides services at the nexus of facilities, infrastructure, and the environment. Contact us for more information on forecasting funding demands for the future renewal & replacement of assets.


