To size our stocks and replenish our products, we need to make assumptions about demand. In a Demand Driven approach, supply chain scheduling is based on real consumption. Yet consumption can only happen for products that are actually available in the market, and those products were initially supplied according to some sort of demand hypothesis.
DDMRP buffers are sized and adapted continuously according to the average daily usage of an item. How is this average consumption rate calculated? According to rules that are defined during the model’s design and adjusted as needed.
Of course, average consumption per day is an estimate. The real consumption that happens in days to come will differ from this estimate. The objective is thus to establish a rule for calculating the average daily usage that is reasonable enough that the buffers will respond correctly to the range of real demands to which we may be exposed.
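To make this concrete, here is a minimal sketch in Python of how an average daily usage (ADU) estimate typically drives DDMRP buffer zone sizing. The lead-time and variability factors, the MOQ, and the order cycle shown are illustrative placeholders, not recommended values.

```python
from dataclasses import dataclass

@dataclass
class BufferZones:
    red: float
    yellow: float
    green: float

def size_buffer(adu: float, dlt_days: float,
                lead_time_factor: float = 0.5,
                variability_factor: float = 0.5,
                moq: float = 0.0, order_cycle_days: float = 0.0) -> BufferZones:
    """Classic DDMRP zone sizing driven by the average daily usage (ADU).

    adu: average daily usage estimate
    dlt_days: decoupled lead time in days
    lead_time_factor / variability_factor: illustrative placeholders,
        normally chosen per buffer profile.
    """
    yellow = adu * dlt_days                       # demand covered over the decoupled lead time
    red_base = yellow * lead_time_factor
    red_safety = red_base * variability_factor
    red = red_base + red_safety
    green = max(moq, adu * order_cycle_days, red_base)  # largest of the three drivers
    return BufferZones(red=red, yellow=yellow, green=green)

# Example: an item consuming ~40 units/day with a 10-day decoupled lead time
print(size_buffer(adu=40, dlt_days=10, moq=200, order_cycle_days=7))
```

Whatever rule produces the `adu` input, all three zones scale with it, which is why the calculation rule matters so much.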
Past Performance is No Guarantee
We are caught in a dilemma. What data should we use to evaluate average daily usage? We have historical consumption data. We also have indications of future events that could influence this consumption. Or not.
We also know that our environment is increasingly volatile, uncertain, complex, and ambiguous, which means that the future cannot simply be derived from the past. Here we are.
When we designed our Demand Driven model, we took care to position decoupling points so that the lead time on each stocked position is reduced, allowing us to adapt our supply chain step by step. This means that if actual consumption differs significantly from the average daily consumption used to size our buffers, we will adapt faster.
The False Debate About Forecasts
You may have noticed that since the beginning of this article we have been talking about “demand assumptions” and other “estimates.” I have been careful not to use the term “forecast.” I did so deliberately so as not to start a debate.
Over the decades, forecasting has become a discipline in its own right in many companies, with dedicated teams, “demand reviews,” sophisticated AI and predictive software, probabilistic approaches, etc. However, forecast accuracy is stagnating or even deteriorating, simply because the environment is less and less predictable. Many companies spend a lot of energy measuring the reliability of their forecasts and arguing about the best way to measure it: bias, MAPE, WMAPE, forecast value added, and so on.
Several studies show that the consensus resulting from the collaborative forecasting process is often not statistically much better than a “naive” forecast based on, for example, simple exponential smoothing. In short, a lot of resources are spent for a generally disappointing result.
“Aha!” says the Demand Driven convert. “I told you: nothing beats real demand, THE average daily consumption.”
But we need to establish a rule of thumb to calculate it. If we base the calculation on history, we need to determine a time horizon and clean the history of outliers. We also need to account for growth and seasonality.
In short, we need to apply the typical techniques of… forecasting processes.
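As an illustration only, here is a minimal sketch of such a rule of thumb: an ADU computed over a chosen horizon, with simple outlier trimming and a growth factor. The horizon, the trimming threshold, and the growth factor are assumptions to be tuned to your environment.

```python
import statistics

def average_daily_usage(daily_history: list[float],
                        horizon_days: int = 90,
                        outlier_sigma: float = 3.0,
                        growth_factor: float = 1.0) -> float:
    """ADU over a rolling horizon, with simple outlier trimming and a growth factor.

    All parameters are illustrative assumptions, not fixed rules.
    """
    window = daily_history[-horizon_days:]        # keep only the chosen horizon
    mean = statistics.fmean(window)
    stdev = statistics.pstdev(window)
    # Replace extreme days (e.g. a one-off spot deal) with the mean
    cleaned = [d if abs(d - mean) <= outlier_sigma * stdev else mean for d in window]
    return statistics.fmean(cleaned) * growth_factor

# Example: last 90 days of consumption, with 5% expected growth
# adu = average_daily_usage(history, horizon_days=90, growth_factor=1.05)
```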
History, Forecasts, or Both?
Here are some recommendations drawn from Demand Driven transformation implementations:
If you do not have a forecast available:
- Calculate average daily consumption from historical data and apply growth or seasonality factors.
- You can also create a pseudo-forecast that projects these historical assumptions into the S&OP module (for example, last year’s demand adjusted by correction coefficients) and evaluate different scenarios, as sketched below.
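For example, a minimal sketch of such a pseudo-forecast, assuming you hold last year’s consumption by month; the correction coefficients are hypothetical planner inputs.

```python
def pseudo_forecast(last_year_by_month: dict[str, float],
                    correction: dict[str, float]) -> dict[str, float]:
    """Project last year's consumption forward, month by month, applying a
    correction coefficient per month (growth, promotions, phase-outs...).
    Both inputs are assumptions supplied by the planner."""
    return {month: qty * correction.get(month, 1.0)
            for month, qty in last_year_by_month.items()}

# Example: +10% expected in March due to a planned promotion
projected = pseudo_forecast(
    {"2024-01": 1200, "2024-02": 1100, "2024-03": 1300},
    {"2024-03": 1.10},
)
```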
If you have established forecasts:
- Calculate, for each item, the historical average daily consumption as well as the average daily forecast. Note that both measures are averages over defined horizons, to smooth out the noise of demand variability.
- Compare these two measures and determine, for each item, which one makes the most sense: historical, forecast, or a mix of the two. This determination can be made item by item based on the planner’s knowledge, or in bulk according to rules that consider, for example, the item’s life cycle and promotional events (a simple sketch of such rules follows the examples below).
A few examples:
- For new products: use the forecast, or mimic the history of the product(s) being replaced.
- For promotional products: switch to the forecast, at least at the beginning and end of the promotion.
- For mature products: history or a mix generally gives better results.
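A minimal sketch of such selection rules, assuming each item carries a life-cycle status and a promotion flag; the labels and the 50/50 blend for mature items are illustrative assumptions, not fixed recommendations.

```python
def adu_source(life_cycle: str, in_promotion: bool,
               adu_history: float, adu_forecast: float) -> float:
    """Pick (or blend) the ADU source per item according to simple rules.
    The life-cycle labels and the 50/50 blend are illustrative assumptions."""
    if life_cycle == "new" or in_promotion:
        return adu_forecast                      # no usable history, or a planned event
    if life_cycle == "end_of_life":
        return adu_forecast                      # history would overstate future demand
    if life_cycle == "mature":
        return 0.5 * adu_history + 0.5 * adu_forecast   # simple mix
    return adu_history                           # default: history
```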
Establishing the Daily Consumption of Components
The above considerations apply directly to finished products and other items exposed to independent demand.
For components at lower BOM levels, or for items upstream in the distribution network, we recommend activating the child forecast explosion whenever parent forecasts are available.
This explosion can be done in one of two ways:
- As a direct declination of the daily sales projected on the parents, using the BOM link coefficients and lead-time offsets; in other words, simply exploding the pace of expected sales (sketched below).
- As a consumption rate on child items derived from the projected stock fluctuations at the parent levels. This second method allows the child buffers to be aligned with the parents’ inventory prebuild, or with limitations linked to capacity constraints. It is preferable, but requires a good level of DDS&OP maturity.
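Here is a minimal sketch of the first method, assuming a simple BOM structure of (child, quantity per parent, lead-time offset) triples; the data shapes and item names are hypothetical.

```python
from collections import defaultdict

# Hypothetical BOM: parent -> list of (child, quantity per parent, offset in days)
BOM = {
    "FINISHED_A": [("COMP_X", 2, 3), ("COMP_Y", 1, 5)],
}

def explode_forecast(parent_daily_forecast: dict[str, dict[int, float]]):
    """Method 1: decline the parents' projected daily sales onto child items,
    applying BOM coefficients and lead-time offsets.

    parent_daily_forecast: {parent: {day_index: quantity}}
    Returns {child: {day_index: quantity}} with demand shifted earlier by the offset.
    """
    child_demand: dict[str, dict[int, float]] = defaultdict(lambda: defaultdict(float))
    for parent, forecast in parent_daily_forecast.items():
        for child, qty_per, offset_days in BOM.get(parent, []):
            for day, qty in forecast.items():
                child_demand[child][day - offset_days] += qty * qty_per
    return {child: dict(days) for child, days in child_demand.items()}

# Example: 100 units/day of FINISHED_A expected on days 10-12
print(explode_forecast({"FINISHED_A": {10: 100, 11: 100, 12: 100}}))
```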
History by Default, Forecast by Exception? Or the Other Way Around?
Roughly right is always better than precisely wrong… The average daily usage used to size your replenishment loops is still an estimate, so it is more important to be pragmatic than scientifically precise.
At Intuiflow, we have clients who use historical data by default — because it is less biased for them — except for promotional periods when they switch to forecasts.
We also have clients who by default size their buffers on forecasts, especially for finished products, because they consider their forecasting discipline to be reasonably reliable. They switch to historical data by exception. Sometimes this initial choice was made for reasons of acceptability: we came from a logic strongly centered on forecasts, so let’s not jump straight into unknown territory…
In any case, keep an eye on the behavior of your buffers: if they drop too low or climb too high and you can trace this back to an inadequate calculation mode (it would have been better to calculate on forecast, history, or a mix), adapt.
Adapt the calculation method based on the observed behavior of your stocks and your lead times, rather than vainly trying to improve the accuracy of your forecasts… what matters is the outcome for your clients!
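As a closing illustration, here is a minimal sketch of such monitoring: flag items whose on-hand position sits persistently below the red zone or above the top of green over a review period, as a prompt to revisit the ADU calculation mode. The threshold and the zone inputs are illustrative assumptions.

```python
def flag_adu_review(on_hand_history: list[float], red: float, top_of_green: float,
                    tolerance: float = 0.8) -> str | None:
    """Flag items whose stock behavior suggests the ADU calculation mode is off.

    on_hand_history: daily on-hand positions over a review period.
    tolerance: share of days outside the healthy band that triggers a flag
               (an illustrative threshold, not a standard value).
    """
    n = len(on_hand_history)
    too_low = sum(1 for oh in on_hand_history if oh < red)
    too_high = sum(1 for oh in on_hand_history if oh > top_of_green)
    if n and too_low / n >= tolerance:
        return "chronically low: ADU likely underestimated, review the history/forecast mix"
    if n and too_high / n >= tolerance:
        return "chronically high: ADU likely overestimated, review the history/forecast mix"
    return None
```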