In one of my first professional experiences, at the end of the 1980s, I was in charge of the methods team at an industrial site. Our team included time-study specialists who applied MTM (Methods-Time Measurement) techniques to evaluate routings as accurately as possible. These routings were used, in particular, to define productivity targets.
We tried to use these routing data to project workloads and staffing requirements, but to no avail. Adding up all these detailed, theoretical times, even though they were based on scientific work-analysis methods, produced unrealistic results.
To arrive at a useful approach, we took a step back and reasoned in macro terms. For example, in the assembly sector, we measured the number of parts manufactured per assembly line, by period, as a function of the number of operators per line.
It was a rough approximation: we had both simple and more complicated products, and the product mix could change over time.
Yet this approach enabled us to set up effective load/capacity management, build consensus with production teams, and fuel improvement initiatives – far better than detailed routings had ever done.
I’m talking about a time when we were just starting to use PCs, with an Intel 8086 or at best an 80286 inside… Part of our need for simplification stemmed from the technical limitations of the era.
Today, we have much more powerful computing resources, and our MES (Manufacturing Execution System) can collect highly detailed data, but adopting a macro approach based on demonstrated reality remains the recipe for effective load/capacity management.
Cost and target bias
In most of our cost-centric industrial companies, routings are primarily designed to establish production costs. As a rule, they already embed targets when they are drawn up: “the cost must be less than X.”
These routings are then used to measure productivity. Their use will therefore be influenced by the objectives of each stakeholder.
A production manager may encourage you to establish conservative routings in order to show good performance. As an anecdote, I once knew a chemical company whose production manager was proud to display OEEs in excess of 100%…
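To see how a padded routing produces that paradox, recall the standard OEE formula: availability × performance × quality, where performance compares actual output to the ideal cycle time taken from the routing. A minimal sketch in Python, with purely illustrative numbers:

```python
# Standard OEE calculation. An OEE above 100% usually means the "ideal"
# cycle time taken from the routing is overstated (a conservative routing).

def oee(planned_time_h, run_time_h, ideal_cycle_time_h, total_parts, good_parts):
    availability = run_time_h / planned_time_h
    performance = (ideal_cycle_time_h * total_parts) / run_time_h
    quality = good_parts / total_parts
    return availability * performance * quality

# With an honest standard of 0.009 h/part, OEE stays credible:
print(oee(24, 20, 0.009, 2100, 2000))  # ~0.75

# With a padded standard of 0.013 h/part, the same run "exceeds" 100%:
print(oee(24, 20, 0.013, 2100, 2000))  # ~1.08
```

The same production run, measured against an inflated standard, looks better than perfect.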
Conversely, I’ve also known some very proactive operations managers who were planning to increase capacity significantly in the coming weeks and months, thanks to ongoing improvement projects – or simply because they were optimistic by nature: “We’re going to be able to do 10% better, so there’s no need to recruit new operators.” In general, this translates into an accumulation of delays…
Measuring demonstrated capacity
The right approach, of course, is to use demonstrated capacity to plan the load for the coming weeks: “We have repeatedly demonstrated that we can make an average of 1,200 good parts per hour, so we plan for 1,200 parts per hour.” It’s common sense.
But it’s not always that simple.
If you have flow manufacturing lines, this approach works well.
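For a flow line, the calculation can be as simple as averaging good parts per hour over recent history. A minimal sketch, assuming a hypothetical MES extract of good parts and run hours per shift (figures are illustrative):

```python
# Demonstrated capacity of a flow line: average good-parts-per-hour
# over recent periods, taken from (hypothetical) MES records.

from statistics import mean

# One record per shift: (good_parts, run_hours)
history = [
    (9_600, 8.0),
    (9_100, 8.0),
    (10_050, 8.0),
    (9_400, 8.0),
]

demonstrated_rate = mean(good / hours for good, hours in history)
print(f"Plan with {demonstrated_rate:,.0f} good parts per hour")  # ~1,192
```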
If you’re in a “job shop” environment, with shared resources, products with disparate routings, and a mix of operations paced by machine time and others by labor time, establishing demonstrated capacity is a different kettle of fish.
In this environment, we recommend first identifying the constrained resources, and then measuring the number of hours of theoretical routing time actually delivered by each constraint over time. If, for example, a constrained machine is open 24 hours a day, and over the last 8 weeks it has delivered the equivalent of 17 hours of theoretical routing time per day on average, we will plan 17 hours of load per day for the coming weeks (or even months, for Rough-Cut Capacity Planning, RCCP). A sketch of the calculation follows.
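A minimal sketch of that calculation and of the resulting load check, assuming a weekly MES extract of the standard (routing) hours earned on the constraint; figures and field names are illustrative:

```python
# Demonstrated capacity of a constrained machine, expressed in standard
# (routing) hours per day, then used for a rough-cut load/capacity check.

# Earned standard hours on the constraint, one figure per week (8 weeks):
weekly_standard_hours = [121, 118, 115, 123, 116, 120, 119, 120]
days_per_week = 7  # the machine is open 24/7

demonstrated_hours_per_day = sum(weekly_standard_hours) / (
    len(weekly_standard_hours) * days_per_week
)
print(f"Demonstrated: {demonstrated_hours_per_day:.1f} std hours/day")  # 17.0, not 24

# Rough-Cut Capacity Planning: compare projected weekly load to this figure.
projected_load = {"W+1": 120, "W+2": 130, "W+3": 110}  # std hours per week
weekly_capacity = demonstrated_hours_per_day * days_per_week  # 119 h
for week, load in projected_load.items():
    print(f"{week}: load {load} h vs capacity {weekly_capacity:.0f} h "
          f"({load / weekly_capacity:.0%})")
```

Note that the plan uses the 17 demonstrated hours, not the 24 theoretical ones – which is exactly the point.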
What we’ve just said requires appropriate scheduling and execution tools – we’ll be happy to talk about them if you like 😉
Adjusting job releases with pull flow
But, you may say, if we plan based on demonstrated historical capacity, we’ll never improve! Production needs targets, and we need to release a significant load so that everyone can see that things are growing and that we’re giving it our best shot!…
We know what happens when production releases are too high: backlogs build up, priorities are confused (everything becomes urgent), etc.
On the other hand, if the key manufacturing stages – the constraints – are loaded based on their demonstrated capacity and start getting ahead of schedule, the rate of releases must be synced to the constraint’s actual throughput. That way, the constraint continues to be fed without interruption, and the demonstrated capacity increases! A minimal sketch of such a release rule is shown below.
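One simple way to implement this is a rope-style rule: hold a fixed buffer of released-but-unprocessed work in front of the constraint, and release new jobs only as the constraint actually consumes hours. The function name and buffer size below are assumptions for illustration:

```python
# Pull release rule (rope-style): keep the work released but not yet
# processed by the constraint at a fixed buffer level, so releases
# automatically track the constraint's real throughput.

BUFFER_STD_HOURS = 34.0  # e.g., two days of demonstrated constraint capacity

def hours_to_release(released_std_hours, consumed_std_hours):
    """Release just enough standard hours to refill the constraint buffer."""
    in_buffer = released_std_hours - consumed_std_hours
    return max(0.0, BUFFER_STD_HOURS - in_buffer)

# Each day, read cumulative figures from the MES and top up the buffer:
print(hours_to_release(released_std_hours=100.0, consumed_std_hours=70.0))  # 4.0
print(hours_to_release(released_std_hours=104.0, consumed_std_hours=90.0))  # 20.0
```

If the constraint speeds up, it consumes more hours, the buffer drains faster, and releases accelerate on their own: the pull loop raises throughput without flooding the shop.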