To size our stocks and replenish our products, we need to make assumptions about demand. In a Demand Driven approach, supply chain scheduling is based on real consumption. Yet consumption depends on the products available in the market, and those products were initially supplied according to some sort of demand hypothesis.
The DDMRP buffers are sized and adapted continuously, according to the average daily usage (ADU) of an item. How is this average consumption rate calculated? According to rules that are determined during the model’s design and adjusted if necessary.
Of course, average consumption per day is an estimate. The real consumption in the days to come will differ from it. The objective is thus to establish a rule for calculating the average daily usage that is robust enough for the buffers to respond correctly to the range of real demand we may face.
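As a minimal sketch of such a rule (the 90-day trailing horizon is an illustrative assumption, not a DDMRP prescription), an average daily usage calculated from history might look like:

```python
from statistics import mean

def adu_from_history(daily_usage, horizon_days=90):
    """Average daily usage over a trailing horizon.

    daily_usage: list of units consumed per day, oldest first.
    horizon_days: length of the trailing window (assumed, tune per item).
    """
    window = daily_usage[-horizon_days:]
    return mean(window) if window else 0.0

# Illustrative consumption history, units per day
history = [12, 0, 8, 15, 9, 11, 4]
print(adu_from_history(history, horizon_days=7))  # ≈ 8.43
```

In practice the horizon is a design decision: too short and the ADU chases noise, too long and it lags real shifts in demand.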
We are caught in a dilemma. What data should we use to evaluate average daily usage? We have historical consumption data. We also have indications of future events that could influence consumption. Or not.
We also know that our environment is increasingly volatile, uncertain, complex, and ambiguous, which means the future cannot simply be derived from the past. Here we are.
When we designed our Demand Driven model, we took care to position decoupling points so that the lead time at each stocked position is reduced, allowing us to adapt our supply chain step by step. This means that if actual consumption differs significantly from the average daily consumption we used to size our buffers, we will adapt faster.
You may have noticed that since the beginning of this article we have been talking about “demand assumptions” and other “estimates.” I have deliberately avoided the term “forecast,” so as not to start a debate.
Over the decades, forecasting has become a discipline in its own right in many companies, with dedicated teams, “demand reviews,” sophisticated AI and predictive software, probabilistic approaches, etc. However, forecast accuracy is stagnating or even deteriorating, simply because the environment is less and less predictable. Many companies spend a lot of energy measuring the reliability of their forecasts and arguing about the best way to measure it: bias, MAPE, WMAPE, Forecast Value Added, and so on.
Several studies show that the consensus resulting from a collaborative forecasting process often performs little better, statistically, than a “naive” forecast based on simple exponential smoothing, for example. In short, a lot of resources are spent for a generally disappointing result.
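For reference, the “naive” benchmark mentioned above is genuinely simple. A one-step-ahead forecast by simple exponential smoothing fits in a few lines (the smoothing factor alpha is an assumption to be tuned):

```python
def exp_smoothing_forecast(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    series: observed demand values, oldest first.
    alpha: smoothing factor in (0, 1]; higher = more reactive (assumed value).
    """
    level = series[0]
    for x in series[1:]:
        # New level is a weighted mix of the latest observation and the old level
        level = alpha * x + (1 - alpha) * level
    return level

print(exp_smoothing_forecast([10, 12, 11], alpha=0.5))  # → 11.0
```

If an elaborate consensus process cannot reliably beat this, the effort it consumes deserves scrutiny.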
“Aha!” replies the Demand Driven convert. “I told you: nothing beats real demand, THE average daily consumption.”
But we need to establish a rule of thumb to calculate it. If we base the calculation on history, we need to determine a time horizon and clean the history of outliers. We also need to account for growth and seasonality.
In short, we need to apply typical techniques … of forecasting processes.
Some recommendations drawn from Demand Driven transformation implementations:
If you do not have a forecast available:
If you have forecasts established:
A few examples:
The above considerations apply directly to finished products / items exposed to independent demand.
For components at lower BOM levels, or upstream of the distribution network, when parent forecasts are available, we recommend activating the child forecast explosion.
This decomposition can be done either as:
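A minimal sketch of such a forecast explosion, assuming a single-level bill of materials expressed as parent-to-component quantities (the data structure and item codes are illustrative):

```python
def explode_forecast(parent_forecast, bom):
    """Explode parent-item forecasts into component demand through the BOM.

    parent_forecast: {parent item: forecast quantity}
    bom: {parent item: {component: quantity per parent unit}}
    Single-level only; a multi-level BOM would apply this recursively.
    """
    component_demand = {}
    for parent, qty in parent_forecast.items():
        for component, per_unit in bom.get(parent, {}).items():
            component_demand[component] = component_demand.get(component, 0) + qty * per_unit
    return component_demand

# Illustrative data: two finished goods sharing component C1
bom = {"FG-A": {"C1": 2, "C2": 1}, "FG-B": {"C1": 1}}
forecast = {"FG-A": 100, "FG-B": 50}
print(explode_forecast(forecast, bom))  # C1: 250, C2: 100
```

The resulting component-level figures can then feed the ADU of the child buffers, in place of (or blended with) their own consumption history.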
Roughly right is always better than precisely wrong… The average daily usage used to size your replenishment loops is still an estimate — so it is more important to be pragmatic than scientifically accurate.
At Intuiflow, we have clients who use historical data by default — because it is less biased for them — except for promotional periods when they switch to forecasts.
We also have clients who by default size their buffers on forecasts, especially for finished products, because they consider their forecasting discipline reasonable. They switch to historical data by exception. Sometimes this initial choice was made for reasons of acceptability: “we come from a culture strongly centered on forecasting; let’s not jump into unknown territory.”
In any case, keep an eye on the behavior of your buffers: if they drift too low or too high and you can trace this to an inadequate calculation mode (it would have been better to calculate on forecast, on history, or on a mix), adapt.
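The forecast/history mix mentioned above can be as simple as a weighted blend; the 50/50 default weight here is an assumption to be revisited per item family:

```python
def blended_adu(historical_adu, forecast_adu, forecast_weight=0.5):
    """Blend history-based and forecast-based ADU.

    forecast_weight: share given to the forecast, in [0, 1] (assumed default 0.5).
    """
    w = forecast_weight
    return w * forecast_adu + (1 - w) * historical_adu

# Lean toward history (75%) outside promotional periods, for example
print(blended_adu(historical_adu=8.0, forecast_adu=12.0, forecast_weight=0.25))  # → 9.0
```

The weight itself then becomes the thing you adjust when buffer behavior tells you the current mode is off.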
Adapt the calculation method based on the observed behavior of your stocks and lead times, rather than vainly trying to improve the accuracy of your forecasts… what matters is the outcome for your clients!