Manufacturing Planning
Theory of Constraints
Davood Golmohammadi,1 S. Afshin Mansouri2
1 University of Massachusetts Boston, Management Science and Information Systems, Boston, Massachusetts
2 Brunel University London, Brunel Business School, Uxbridge, Middlesex, UB8 3PH, United Kingdom
Abstract: The literature on the product mix decision (or master production scheduling) under the Theory of Constraints (TOC), which was developed in the past two decades, has addressed this problem as a static operational decision. Consequently, the developed solution techniques do not consider the system's dynamism and the associated challenges arising from the complexity of operations during the implementation of master production schedules. This paper aims to address this gap by developing a new heuristic approach for master production scheduling under the TOC philosophy that considers the main operational factors that influence actual throughput after implementation of the detailed schedule. We examine the validity of the proposed heuristic by comparison to Integer Linear Programming and two heuristics in a wide range of scenarios using simulation modelling. Statistical analyses indicate that the new algorithm leads to significantly enhanced performance during implementation for problems with setup times. The findings show that the bottleneck identification approach in current methods in the TOC literature is not effective and accurate for complex operations in real-world job shop systems. This study contributes to the literature on master production scheduling and product mix decisions by enhancing the likelihood of achieving anticipated throughput during the implementation of the detailed schedule. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 357–369, 2015
Keywords: job-shop operations; theory of constraints; product mix decisions; master production schedule
A substantial body of work has addressed the product mix problem [1, 5, 6, 9, 10, 12, 14, 17, 20, 22, 27, 31–33, 35, 36].

TOC was first applied in production planning and scheduling, and several algorithms to determine an optimized MPS have been developed based on TOC. However, most of these algorithms are validated based on simple examples and may not be particularly effective for large-scale and real-world operations in a job-shop system. Linhares [19] has criticized the current algorithms and claimed that an effective and optimum heuristic is simply impossible. Existing algorithms ignore the significant impact of inherent randomness in process times, which contributes to delays, inventory accumulation, idle time, and underutilization. To the best of our knowledge, no prior study in the TOC literature has addressed product mix decisions in the presence of constraints or the impact of complex and dynamic operations on actual throughput.

Moreover, many instances in the literature benchmark the performance of solution techniques based on the throughputs of small problems involving simple production flow. Such simplified cases do not capture the complexity of real-world operations such as the sequence of operations [15]. Therefore, the basic question for production planners is: which master production schedule is most effective? In summary, our main research questions in the context of TOC concern whether the available MPS methods differ significantly in the throughput they deliver, both in a static mode and during dynamic implementation.

We investigate the effectiveness and drawbacks of the most common algorithms, and demonstrate some of the fundamental factors that they do not take into account. We argue that the common method of identifying constraints, which is based on the difference between available and required capacities in a static situation, is not effective for dynamic operations.

We develop a novel algorithm called COLOMAPS: COmplexity and LOad driven MAster Production Scheduling. In this algorithm, we define two operational factors: complexity of operations and capacity shortage. Complexity is a function of the number of products using a machine and the number of times a machine is needed by all products. Capacity shortage is the ratio of available capacity to required capacity. This research defines the complexity level of operations. We consider capacity shortage and complexity as the two main factors for the generation of data sets. Experimental design, simulation modeling, and statistical techniques are used for performance evaluation. We evaluate the algorithm's performance by implementing it in complex case study operations and by comparing it to other methods in the literature.

The article is organized as follows. Section 2 reviews the literature, Section 3 describes the research methodology, Section 4 characterizes the algorithm, Section 5 presents the method of implementation and experimental design, Section 6 presents statistical analysis and discussion, and Section 7 reviews contributions and concludes. A summary of research steps is shown in Figure 1.
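To make the two operational factors defined above concrete, a minimal sketch is given below. It counts, for each machine, how many products visit it and how many times it is visited in total, and computes the ratio of available to required capacity. The toy data, variable names, and the simple additive way the two counts are combined are our own illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative computation of the two operational factors described above:
# how many products use each machine and how often it is visited (complexity),
# and the ratio of available to required capacity (capacity shortage).
# Toy data and the additive proxy are assumptions, not the paper's formulas.
from collections import defaultdict

routings = {"A": [1, 2, 1, 3], "B": [2, 3, 3], "C": [1, 3]}   # machine codes per product
demand = {"A": 100, "B": 150, "C": 80}                        # units in the planning horizon
unit_time = {("A", 1): 2.0, ("A", 2): 1.0, ("A", 3): 1.5,     # total minutes per unit on a machine
             ("B", 2): 1.2, ("B", 3): 0.8,
             ("C", 1): 0.5, ("C", 3): 2.5}
available = {1: 300.0, 2: 250.0, 3: 500.0}                    # minutes available per machine

products_using = defaultdict(set)   # products that visit each machine
visits = defaultdict(int)           # total number of visits to each machine
required = defaultdict(float)       # required minutes on each machine

for product, routing in routings.items():
    for machine in routing:
        products_using[machine].add(product)
        visits[machine] += 1
for (product, machine), minutes in unit_time.items():
    required[machine] += demand[product] * minutes

for machine in sorted(available):
    complexity = len(products_using[machine]) + visits[machine]   # crude proxy combining both counts
    ratio = available[machine] / required[machine]                # available / required capacity
    print(f"machine {machine}: complexity={complexity}, available/required={ratio:.2f}")
```

A ratio below 1 signals a capacity shortage in the static sense; COLOMAPS additionally weighs how entangled each machine is in the product routings.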
on detailed scheduling during implementation in the TOC approach. MPS planning under TOC is developed based on its first two principles: (1) to identify the system's constraint(s) and (2) to determine how to exploit them. These principles are well illustrated by several studies [9, 11, 27]. Several algorithms have been developed based on TOC to determine an optimized MPS. Some of the algorithms [11, 17, 20, 27] are proven to find the optimal solution only in examples with a single constraint. In some cases, the results are inefficient and produce nonoptimal or infeasible solutions when certain new product alternatives are available. Posnack [29] and Maday [21] argue that the TOC approach should be properly used, and for noninteger solutions, partial products should be allowed to be manufactured in the next planning horizon. Others have noted that problems arise when there are multiple constraints, as the TOC approach might generate a nonoptimal solution [17, 20] or an infeasible solution [4, 28].

Other scholars have developed improved algorithms [1, 3, 9, 12, 14, 22, 31, 35, 36], but drawbacks still exist in their methods. Primarily, most of these algorithms are validated based on simple examples and may not effectively scale up to large-scale or real-world operations in a job-shop system with complex operations scheduling. Moreover, the bottleneck identification steps used by existing algorithms do not incorporate other main factors influencing operations, such as the sequence of processes and the hidden role of queues. Finally, the performance and accuracy of existing methods have been evaluated against linear optimization results in a static environment. Dynamism of operations can affect all of the developed plans and add to the challenge of planning. Linhares [19] has criticized the current algorithms and provided a good overview of them.

Other scholars have used intelligent search algorithms, such as Tabu search (TS) [23], genetic algorithms (GA) [24, 25], and a hybrid Tabu-simulated annealing approach [22], to address large-scale problems as one of the shortcomings of common methods. However, these intelligent search approaches have overlooked the explicit process of the TOC philosophy and thus may converge inefficiently to local optima or, in some instances, provide infeasible solutions [35]. Sobreiro and Nagano [32] provide a review and evaluation of constructive heuristics to optimize product mix decisions based on TOC and emphasize the lack of applicability of common algorithms in practice.

Two of the most commonly applied heuristic algorithms ([9] and the recent algorithm of Sobreiro and Nagano [32]) are used as benchmarks of our proposed algorithm. Both Fredendall and Lea [9] and Sobreiro and Nagano [32] address the problem as a static product mix decision model and try to identify the real bottleneck where it is not easily identified. Fredendall and Lea [9] propose an approach to identifying system constraints in the first step. In the next step they provide a detailed heuristic approach to exploit these system constraints. Sobreiro and Nagano [32] discuss shortcomings of prior solution techniques in failing to handle situations in which multiple constraints exist in the system. To address this gap, they propose a heuristic that first identifies the dominant bottleneck and then develops an initial solution. The solution is improved in the next step by a greedy neighbourhood search. However, the current algorithms in the TOC approach, including Fredendall and Lea [9] and Sobreiro and Nagano [32], miss the factors which impact the identification of the real bottleneck during operations, especially complex operations.

We address these drawbacks in the proposed algorithm (COLOMAPS) and discuss the impact of key drivers for an effective MPS.

3. RESEARCH METHODOLOGY

Identifying constraints, which requires meticulous analysis, may not be easy in real-world job-shop operations. We develop a novel algorithm called COLOMAPS to generate an MPS under the TOC approach. The algorithm has three phases: identification of the constraints, initial MPS development, and improvement of the initial MPS. We propose a novel approach for the identification of constraints and define two main operational factors: complexity of operations and capacity shortage.

To conduct a comprehensive study, we applied the COLOMAPS algorithm to a production line inspired by a real case in the auto industry. This is a compelling case because it is similar to most job-shop systems in real-world operations and it contains a high level of complexity. Additionally, we investigated two of the most complex instances of job-shop operations in the literature [2, 12].

To determine the performance of the COLOMAPS algorithm, we created MPSs for these job-shop operations using our new algorithm, and using one of the most commonly applied heuristic algorithms [9], the recent algorithm of Sobreiro and Nagano [32], and Integer Linear Programming (ILP) as benchmarks.

A set of experiments was designed to verify the effectiveness of the COLOMAPS algorithm in comparison with the aforementioned benchmark algorithms in a wide range of problem sets in static and dynamic operations. Then, the results of the MPS development based on all four methods were evaluated through simulation modeling. Ultimately, statistical analysis was conducted to determine whether there are significant differences between the performance of the COLOMAPS algorithm and the benchmark approaches.
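For reference, the static product mix model that the ILP benchmark solves can be stated as a small integer program: maximize total contribution margin subject to machine capacities and market demands. The sketch below is a minimal illustration using the open-source PuLP modeller with made-up data; the paper does not specify which solver or data were used, so the tooling and numbers are assumptions.

```python
# Minimal sketch of the static product-mix ILP used as a benchmark.
# Data are illustrative placeholders, not the paper's case-study values.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus

products = ["A", "B"]
machines = ["M1", "M2"]
cm = {"A": 60, "B": 45}                      # contribution margin per unit
demand = {"A": 100, "B": 120}                # market demand (upper bound)
capacity = {"M1": 2400, "M2": 2400}          # available minutes per machine
t = {("A", "M1"): 15, ("A", "M2"): 10,       # processing minutes per unit
     ("B", "M1"): 10, ("B", "M2"): 12}

prob = LpProblem("static_product_mix", LpMaximize)
q = {p: LpVariable(f"q_{p}", lowBound=0, upBound=demand[p], cat="Integer")
     for p in products}

# Objective: total throughput (sum of contribution margins).
prob += lpSum(cm[p] * q[p] for p in products)

# Capacity constraint on every machine (the static view of a constraint).
for m in machines:
    prob += lpSum(t[p, m] * q[p] for p in products) <= capacity[m]

prob.solve()
print(LpStatus[prob.status], {p: q[p].value() for p in products})
```

Solving this model yields the static benchmark throughput; it deliberately ignores setup, queuing, and randomness, which is exactly the limitation investigated in this paper.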
In which the value represents the normalized value and is calculated using Eq. (2).

d. Calculate the degree of criticality (γ_j) for all machines as follows:

   γ_j = w_1·α_j + w_2·β_j   (5)

e. Sort the machines in descending order of γ_j. Consider the first machine the CL along with nondominated bottlenecks for further steps. A bottleneck machine k is considered dominated by another bottleneck machine l (represented M_k ≺ M_l) if

   C_k ≥ C_l, and   (7)
   t_{v,k} ≤ t_{v,l} ∀ v ∈ {1, ..., n} ∧ ∃ u ∈ {1, ..., n} | t_{u,k} < t_{u,l}.   (8)

Bottleneck machines that are not dominated by others constitute the set of nondominated bottlenecks. These are active constraints whose loads need to be carefully monitored to ensure the feasibility of the MPS at any given stage. Adapted from the multiobjective optimization literature, the following lemma supports the above by reducing the number of bottleneck machines against which the feasibility of solutions needs to be checked.

LEMMA 1: If a product mix vector Q is feasible for a nondominated machine M_l under the demand vector D, then it is feasible for all machines that are dominated by M_l.

PROOF: The proof can be derived from the definition of dominated machines in Eqs. (7) and (8).

4.3. Phase 2: Initial MPS Design

An initial MPS is developed based on the CL and bottlenecks in the following steps:

f. Consider the first machine as the CL; develop a feasible MPS through the following procedure, which is partly inspired by the algorithm of Fredendall and Lea [9] (referred to as FL97):
   f.1. Sequence products in nonincreasing order of the R_i ratio, where R_i = CM_i/t_{i,CL}. In the event of a tie, give priority to the product with the higher CM_i.
   f.2. Develop the initial feasible MPS by allocating the market demand of products (P_i) in the above order. Make sure the MPS is feasible at all stages, that is, there is always enough capacity on all the critical machines identified in step (a).
   f.3. Calculate the total throughput (TP) of the initial MPS: TP = Σ_{i=1}^{n} P_i × CM_i.
g. Examine whether the total throughput can be improved by trading off production quantities (P_i) of products ranked higher with those ranked lower with respect to the CL. This idea is proposed intuitively, based on the assumption that higher-ranked (i.e., more profitable) products cause more complexity during implementation. Therefore, trading off production quantities of higher-ranked with lower-ranked products whilst keeping the same throughput is expected to lead to a more robust solution. In other words, we conjecture that for a given throughput, the robustness of the product mix vector Q, in which products are ranked according to their R ratio, will increase by trading production quantities from higher-ranked to lower-ranked products. To improve robustness, examine potential trade-offs between P_[u] and P_[v], where [·] represents the order of products in the sequence, for u = 1, ..., n−1 and v = u+1, ..., n. Accept the tradeoffs that improve throughput. To identify advantageous tradeoffs, compare the contribution margins of the two products on the primary bottleneck BN_1 (i.e., the machine whose capacity runs out first). Note that in such comparisons it is important to keep the order of products developed in step f. Then take the following steps:
   g.1. Calculate PR_{i,BN1} ratios for all products, where PR_{i,BN1} = CM_i/t_{i,BN1}. This ratio represents the priority index of products based on the primary bottleneck that runs out of capacity first (and in many cases could be different from the CL).
   g.2. Consider the products whose demands are not fully satisfied as potential receivers. Let t′_{BN1} denote the remaining time on the primary bottleneck (BN_1), that is, the machine whose capacity is exhausted or insufficient to produce further units of any product. In the case of a tie, consider the machine with a higher degree of criticality (γ) the primary bottleneck.
Determine whether there are other products higher on the list (i.e., with higher priority) and consider them 'givers'. A tradeoff can be justified between products g (giver) and r (receiver) if it has a positive impact on throughput, that is, δ_{g,r} > 1:

   δ_{g,r} = PR_{r,BN1} · (t′_{BN1} + t_{g,BN1}) / CM_g > 1.   (9)

LEMMA 2: Trading production quantities of product g to product r (in the feasible region) has a positive (or negative) impact on throughput if δ_{g,r} > 1 (δ_{g,r} < 1). Tradeoffs with δ_{g,r} = 1 have no effect on throughput.

PROOF: The proof can be derived by expanding Eq. (9).

   g.3. Identify the most rewarding pair of products g and r for tradeoff.
   g.4. Complete the tradeoff between products g and r step-by-step by decreasing the production quantity of g one by one and exploiting the released capacity to increase the quantity of r as long as the resultant MPS remains feasible. Stop when no further exchange can be made.
   g.5. Identify the new primary bottleneck (BN_1) and go back to step g.1 until no further rewarding tradeoff can be found.
h. Report the resultant MPS as the final solution.

Table 1. Operations routings of the products.

Product   Operations routing (machine codes)
A         8, 7, 7, 3, 5, 3, 5, 4, 4, 5, 2, 2, 7, 7
B1        1, 5, 6, 5, 5, 4, 6, 5, 4, 3, 1, 10, 11
B2        9, 12, 4, 11
C         9, 12, 4, 6, 6, 12, 16, 6, 12, 13, 13, 14, 15, 16
D         1, 2, 4, 5, 4, 5, 7, 8, 13
E         1, 4, 5, 4, 5, 4, 6, 7

The case operation produces automotive parts based on a job-shop system. We selected the operations of five products: A, B, C, D, and E, with 2,000, 5,000, 2,000, 3,000, and 3,000 units in demand, respectively, using a one-month planning horizon. Product B consisted of two raw materials, B1 and B2. There were 16 different machines, with just one of each type available for operations. Table 1 shows the operations routings of the products. The processing times and capacities are presented in the Appendix (Tables 7–10). Table 10 presents the demand and available capacity of each machine and the marginal profit for each product.

We selected this case study because it has a high degree of complexity and is similar to most real-world job-shop systems in several ways, including realistic setup times and different processing times. The complexities of the case can be summarized as follows:

• The difference between the available and required capacity for some of the resources (machines) was not significant (this may shift the bottlenecks);
• Different parts of one product need to use the constraint;
• One constraint feeds another constraint;
• The sequence of operations showed that most of the machines (constraint and nonconstraint) were used by most of the products; and
• Processing and setup times were different for the same operation of different products.
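A compact sketch of the Phase 2 logic described in steps f and g above is shown below. It ranks products by CM_i/t_{i,CL}, allocates demand while the monitored capacities remain sufficient, and then scores one candidate giver/receiver trade with Eq. (9). The data, the single-pass allocation, and the helper names are illustrative assumptions; the full algorithm iterates the trade step and tracks all nondominated bottlenecks.

```python
# Sketch of Phase 2, steps f and g: initial MPS by CM/t ranking, then a
# tradeoff score following Eq. (9). Data are illustrative, not the case study.

cm = {"A": 60.0, "B": 45.0, "C": 30.0}                 # contribution margins
demand = {"A": 100, "B": 120, "C": 90}
t = {("A", "CL"): 15.0, ("B", "CL"): 10.0, ("C", "CL"): 5.0,   # minutes per unit
     ("A", "M2"): 10.0, ("B", "M2"): 12.0, ("C", "M2"): 6.0}
capacity = {"CL": 2400.0, "M2": 2400.0}                # minutes on monitored machines

def max_feasible_units(product, remaining):
    """Largest quantity of `product` that still fits into the remaining capacities."""
    limits = [remaining[m] // t[product, m] for m in remaining if (product, m) in t]
    return int(min(limits)) if limits else 0

# f.1-f.2: allocate market demand in nonincreasing order of CM_i / t_{i,CL}
order = sorted(cm, key=lambda p: cm[p] / t[p, "CL"], reverse=True)
remaining = dict(capacity)
mps = {}
for p in order:
    qty = min(demand[p], max_feasible_units(p, remaining))
    mps[p] = qty
    for m in remaining:
        if (p, m) in t:
            remaining[m] -= qty * t[p, m]

# f.3: total throughput of the initial MPS
tp = sum(cm[p] * mps[p] for p in mps)
print("initial MPS:", mps, "TP =", tp)

# g: score a trade from a higher-ranked giver to a receiver with unmet demand,
# following Eq. (9): delta = PR_{r,BN1} * (t'_{BN1} + t_{g,BN1}) / CM_g.
bn1 = min(remaining, key=lambda m: remaining[m])  # machine whose capacity runs out first

def delta(giver, receiver):
    pr_receiver = cm[receiver] / t[receiver, bn1]
    return pr_receiver * (remaining[bn1] + t[giver, bn1]) / cm[giver]

print("BN1 =", bn1, " delta(C -> A) =", round(delta("C", "A"), 2), "(> 1 favours the trade)")
```

In the full procedure, trades are applied unit by unit (step g.4) and the primary bottleneck is re-identified (step g.5) until no rewarding trade remains.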
Table 2. Complexity assignment based on operations flow (P_a).
Table 4. Distribution of test problems (with and without setup).
stopped. Following Kelton et al. [13], there was no need to consider a warm-up period to reach a steady state in this case. This means that the designed models start out empty of parts and all resources are idle. This is a terminating-system situation, and no warm-up is needed to remove the effect of initial conditions.

The drum-buffer-rope technique was used for detailed scheduling in all the models. Moreover, the priority rules for operations on a machine were defined in a flexible manner to maximize throughput. Products had priorities based on their marginal contributions, but if there was a product at the final stage of operations, the original priority rule was overridden to minimize the work in process and maximize the throughputs. All these rules were evaluated through simulation modelling tests for verification purposes.

The main challenge was to determine the best values for the scheduling variables. To determine the detailed schedule for the best performance, we used the OptQuest optimization software. OptQuest automatically searches for optimal solutions within ARENA simulation models. In other words, for each MPS implementation, input variables for detailed scheduling, such as interarrival times between batches, release times for raw materials, and arrival batch sizes, were manipulated by OptQuest to find the best values to enhance the throughput. OptQuest facilitated the challenge of finding an optimal or very satisfactory solution based on the best set of input variables. The models of the three job-shop systems were simulated for 1,000 runs with 30 iterations in each run.

6. RESULTS, ANALYSIS, AND DISCUSSION

Extensive statistical analyses were carried out to answer the following two questions:

1. Are there significant differences between the performance of the COLOMAPS algorithm and the benchmark approaches in a static mode in terms of throughput?
2. Are there significant differences between the performance of the COLOMAPS algorithm and the benchmark approaches in a dynamic mode in terms of throughput?

These questions aim to test the merits of the COLOMAPS algorithm as compared to the benchmark algorithms. They are examined in the following subsections.

6.1. Static Comparisons

The following hypotheses were defined to assess the difference in performance of the algorithms in static mode with respect to the static throughput:

• H1a: There is no difference between COLOMAPS and ILP.
• H1b: There is no difference between COLOMAPS and FL97.
• H1c: There is no difference between COLOMAPS and SN12.
• H1d: There is no difference between ILP and FL97.
• H1e: There is no difference between ILP and SN12.
• H1f: There is no difference between FL97 and SN12.

Paired-sample t-tests [34] were conducted using SPSS [26] to evaluate these hypotheses in terms of resultant throughput (TP) in static mode. As mentioned before, setup requirements did not affect the static performance of the algorithms (none of the algorithms consider setup as an input), so we confined the study to 32 paired comparisons between the base test problems. Table 5 displays the results of static comparisons on these problem sets. It should be noted that the fairly large magnitude of the standard deviations is due to the difference between the case study and the other two test problems. Throughputs in the case study are in the scale of millions, whilst the others are in thousands.

The results indicate that in the static mode, both ILP and FL97 outperform COLOMAPS (with P = 0.024 and 0.073, respectively), whilst there is no significant difference between COLOMAPS and SN12. Moreover, ILP outperforms FL97 (P = 0.092) and SN12 (P = 0.048), but there is no significant difference between FL97 and SN12. As a result, the null hypotheses H1a, H1b, H1d, and H1e are rejected, whilst H1c and H1f are accepted.

Such results are predictable, as the COLOMAPS algorithm, unlike the benchmark algorithms, does not seek to optimize static throughput by solving a simplified problem scenario without considering the complexities that might affect performance during implementation. Considering the heuristic nature of FL97 and SN12, the superiority of ILP is justifiable.

6.2. Dynamic Comparisons

To test the performance of the resultant MPSs during implementation in dynamic conditions, the following hypotheses are proposed with regard to the actual throughputs:

• H2a: There is no difference between COLOMAPS and ILP.
• H2b: There is no difference between COLOMAPS and FL97.
• H2c: There is no difference between COLOMAPS and SN12.
• H2d: There is no difference between ILP and FL97.
• H2e: There is no difference between ILP and SN12.
• H2f: There is no difference between FL97 and SN12.
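In outline, the paired comparisons of Sections 6.1 and 6.2 pair each method's throughput on the same test problems and test the difference with a paired t-test. The sketch below uses SciPy in place of SPSS and random placeholder throughputs instead of the paper's simulation output; both substitutions are assumptions made for illustration only.

```python
# Outline of the paired comparisons: for each pair of methods, throughputs on
# the same 32 test problems are compared with a paired t-test. SciPy stands in
# for SPSS, and the throughput arrays are placeholders, not the paper's results.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_problems = 32  # paired comparisons over the 32 base test problems

throughput = {
    "COLOMAPS": rng.normal(1000, 50, n_problems),
    "ILP": rng.normal(1010, 50, n_problems),
    "FL97": rng.normal(995, 50, n_problems),
    "SN12": rng.normal(990, 50, n_problems),
}

for a, b in combinations(throughput, 2):
    t_stat, p_val = stats.ttest_rel(throughput[a], throughput[b])
    # the text appears to treat P-values below roughly 0.10 as significant
    verdict = "reject H0" if p_val < 0.10 else "fail to reject H0"
    print(f"{a} vs {b}: t={t_stat:.2f}, p={p_val:.3f} -> {verdict}")
```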
Unlike in the static mode, it was anticipated that setup would affect actual throughputs during implementation. As a result, paired comparisons were carried out on the 32 test problems in two categories: without setup and with setup. Table 6 presents the results of the paired t-tests in the dynamic mode using SPSS.

As shown in Table 6, in the dynamic mode and in problems without setup, COLOMAPS outperforms SN12 (P = 0.011), but the differences between COLOMAPS and the two other benchmarks are not significant. In this category, ILP performs better than both FL97 (P = 0.029) and SN12 (P = 0.008), whilst FL97 outperforms SN12 (P = 0.039). Conversely, in problems with setup, COLOMAPS performs significantly better than the benchmark algorithms, that is, compared with ILP (P = 0.066), FL97 (P = 0.005), and SN12 (P = 0.002). In this group, no significant difference is observed between ILP and FL97. Incidentally, both ILP and FL97 show better performance compared with SN12 (at P = 0.045 and 0.044, respectively). Consequently, the null hypotheses H2a and H2b are accepted for problems without setup, whereas the rest are rejected for the same category of problems. Conversely, all null hypotheses except for H2d are rejected for problems with setup.

These results support our conjecture that considering the complexity and capacity shortages in product mix decisions positively affects real throughput during implementation. It was also revealed that the difference is partly influenced by the setup requirements of the problem. Such differences can be interpreted in light of the fact that setup impacts batching decisions made during detailed scheduling.

Statistical analysis indicates that the new algorithm leads to significantly enhanced performance during implementation in dynamic operations.

6.3. Complexity of Dynamic Operations

Let us examine the role of setup operations and some of the challenges in dynamic operations. Setup time can impact batch-size determination, the priority rules of operations, and the timeliness of throughput. We faced a sequence-dependent setup situation, which occurs when a machine's setup time for a particular job is determined not only by that job but also by the previous job that the machine is currently set up for. Moreover, factors such as delays in move times or delays in starting the operations schedule mean that there will be blockage and starvation of resources that is not considered in the static schedule. The size and complexity of the operations grow rapidly as the number of jobs and machines in the problem increase.

Conversely, the common batch-size rule in the TOC is to consider batch size the same as demand to save time for working hours. However, in a complex job-shop system, a reduction of the number of setups in a constraint may not be the proper solution for enhancing the capacity of the constraint.
The overall gain in throughput with small batch sizes, along with multiple setups, can do more than saving time and capacity on the constraint: this strategy can minimize the WIP and enhance profit. The resource assignment priorities based on the CM should not necessarily be followed when reaching due dates. For example, in a real production environment, there may be two products, X and Y, which are competing to use the same constraint simultaneously. Following the priority based on the CM in this example, product X should be processed first. However, after this operation, product X is not complete and needs more operations, whereas assigning this resource to product Y, a product with low priority, leads to the completion of product Y, which can then be shipped to the customer. Thus, based on the goal of meeting the due date, it is preferable to expedite the WIP to be processed and shipped as soon as possible, even though this priority conflicts with the original priority based on the CM. In other words, while product X has priority over product Y, the remaining time is not adequate to complete the production of product X. The highest priority should be switched to product Y, which needs less time to reach the end of the production line.

6.4. Approach Discussion

From the inception of TOC, there has been a debate about the difference between the ILP and TOC throughputs and the focus of their computations [4, 20]. Luebbe and Finch [20] mentioned that "Many believe that TOC does not contribute anything new because you can accomplish virtually the same thing with linear programming." Of course, the common argument is that TOC is a production philosophy and is beyond the scope of an optimization technique such as linear programming. Although this is a reasonable and valid response to the argument, we would like to discuss it briefly from another angle. The main motivation of this research was that the actual throughputs of a method should be the basis of its effectiveness and performance in dynamic operations, not the throughputs in a static mode of operations. What management would like to see is the accuracy level between the MPS and the actual throughputs. Satisfying demand and meeting deadlines are challenging and crucial factors for success. From this point of view, the ILP approach may not be a solid and strong solution. Determination of the real bottleneck and of the hidden impact of other factors, such as work in process, sequence of operations, and number of setups, is out of the scope of its computation. As a result, for real and complex operations, there is a difference between its outcome and real throughputs. Other heuristic approaches have also overlooked this vital point and have proven their performance only in a static mode of operations.

The results analysis shows that our proposed approach performs well for dynamic operations, especially when setup operations are involved. In real operations, setup operations are an integral part of operations; hence, methods with more accuracy with regard to real throughputs are demanded.

7. MANAGERIAL INSIGHTS AND CONCLUDING REMARKS

We made a first attempt to address the complex nature of product-mix decisions by taking into account factors (complexity of operations and capacity shortage) previously overlooked in the literature regarding MPS design under TOC. The simplistic interpretation of capacity shortage in a static mode and the omission of operations complexity in the extant literature may have an adverse impact on operations performance during implementation. Complexities of operations, such as the sequence of processes and the hidden role of queues, are largely absent from current concepts and modeling approaches.

A novel algorithm called COLOMAPS was developed and compared with two benchmark algorithms and ILP in a wide range of problem instances. The results indicated that the COLOMAPS algorithm leads to better throughput during the execution of the MPS. Until now, the common perception of validation, comparison, and effectiveness of MPS methods in the literature was based on static operations. Dynamic operations, especially in job-shop systems with inherently complex natures, may lead to shifting bottlenecks. As a result, the performance evaluation of any planning method should be determined via actual throughput during the implementation of a production plan. We emphasize that the vision of static optimization for evaluating and planning operations in a dynamic environment should be re-evaluated. It was further shown that complexity of operations, level of capacity shortage, and setup requirements are the three main factors that influence dynamic performance.

The contributions of this research are summarized as follows:

• The common method of identifying constraints, which is based on the difference between available and required capacities in a static situation, is not effective for dynamic operations. These initial constraints may not be real or effective constraints during operations, especially during complex operations.
• Current approaches based on optimization in dynamic environments are shown to lack accuracy. We demonstrate that current concepts and modelling approaches do not attend adequately to the complexity of operations and recommend revisions in the vision of optimization for evaluating and planning operations in a dynamic environment.
• The current approach for evaluating MPS design techniques was challenged and it was concluded that models optimized in static situations lack validity and robustness in complex operations.
• Vital factors (e.g., sequence of operations) in product-mix decisions were identified and discussed. We introduced these factors to define the complexity level of operations. Current methods ignore these important factors and their impact on the throughput.
• A new algorithm was proposed for making product mix decisions that captures the complexity of operations and capacity shortages to increase the likelihood of realizing actual throughput during implementation.
• Empirical evidence based on a wide range of problem scenarios was provided in support of the outperformance of the COLOMAPS algorithm in problem sets involving setup operations. This contribution refers to our approach to the experimental design and the developed model of complexity computation.

The current research can be extended in a number of ways. Development of analytical models for product-mix decisions considering the complex nature of the problem calls for further research to advance current theories and solution techniques. Real and complex cases with different levels of complexity may also enhance the contributions of this research. The instances wherein one of the two factors, that is, complexity of operations or capacity shortage, is the main source of complexity need further research. In these instances, we face a skewed situation wherein the complexity is introduced by one factor. For instance, it is interesting to examine the dynamic throughputs in situations where complexity is mainly due to the involvement of several setup operations rather than the level of capacity shortage or the sequence of operations.

REFERENCES

[1] M.B. Aryanezhad and A.R. Komijan, An improved algorithm for optimising product mix under the Theory of Constraints, Intl J Prod Res 42 (2004), 4221–4233.
[2] J.B. Atwater and S.S. Chakravorty, A study of the utilization of capacity constrained resources in drum-buffer-rope systems, Prod Oper Manage 11 (2002), 259–273.
[3] S.A. Badri, M. Ghazanfari, and K. Shahanaghi, A multi-criteria decision-making approach to solve the product mix problem with interval parameters based on the theory of constraints, Intl J Adv Manufact Technol 70 (2014), 1073–1080.
[4] J. Balakrishnan and C.H. Cheng, Theory of constraints and linear programming: A reexamination, Intl J Prod Res 38 (2000), 1459–1463.
[5] A. Bhattacharya, P. Vasant, B. Sarkar, and S.K. Mukherjee, A fully fuzzified, intelligent theory-of-constraints product-mix decision, Intl J Prod Res 46 (2008), 789–815.
[6] A. Bhattacharya and P. Vasant, Soft-sensing of level of satisfaction in TOC product-mix decision heuristic using robust fuzzy-LP, Eur J Oper Res 177 (2007), 55–70.
[7] J. Cohen, Statistical power analysis for the behavioral sciences, Lawrence Erlbaum Associates, Hillsdale, NJ, 1988.
[8] R.E. Fox and E.M. Goldratt, The race, North River Press, New York, 1986.
[9] L.D. Fredendall and B.R. Lea, Improving the product mix heuristic in the theory of constraints, Intl J Prod Res 35 (1997), 1535–1544.
[10] L.D. Fredendall, D. Ojha, and J.W. Patterson, Concerning the theory of workload control, Eur J Oper Res 201 (2010), 99–111.
[11] E.M. Goldratt, The haystack syndrome, North River Press, Croton-on-Hudson, NY, 1990.
[12] T.C. Hsu and S.H. Chung, The TOC-based algorithm for solving product mix problems, Prod Plann Control 9 (1998), 36–46.
[13] W. Kelton, R. Sadowski, and N. Swets, Simulation with Arena, 5th ed., McGraw-Hill, 2009.
[14] A.R. Komijan, M.B. Aryanezhad, and A. Makui, A new heuristic approach to solve product mix problems in a multi-bottleneck system, J Indus Eng Intl 5 (2009), 46–57.
[15] C. Koulamas and S.S. Panwalkar, A note on combined job selection and sequencing problems, Naval Res Log 60 (2013), 449–453.
[16] P. Kouvelis and Z. Tian, Flexible capacity investments and product mix: Optimal decisions and value of postponement options, Prod Oper Manage 23 (2014), 861–876.
[17] T.N. Lee and G. Plenert, Optimizing theory of constraints when new product alternatives exist, Prod Inventory Manage J 34 (1993), 51–57.
[18] K. Lee, L. Lei, and M. Pinedo, Production scheduling with history-dependent setup times, Naval Res Log 59 (2012), 58–68.
[19] A. Linhares, Theory of constraints and the combinatorial complexity of the product-mix decisions, Intl J Prod Econ 121 (2009), 121–129.
[20] R. Luebbe and B. Finch, Theory of constraints and linear programming: A comparison, Intl J Prod Res 30 (1992), 1471–1478.
[21] C.J. Maday, Proper use of constraint management, Prod Inventory Manage J 35 (1994), 84.
[22] N. Mishra, M. Tiwari, R. Shankar, and F. Chan, Hybrid tabu-simulated annealing based approach to solve multi-constraint product mix decision problem, Expert Syst Appl 29 (2005), 446–454.
[23] G.C. Onwubolu, Tabu search-based algorithm for the TOC product mix decision, Intl J Prod Res 39 (2001), 2065–2076.
[24] G.C. Onwubolu and M.A. Mutingi, Optimising the multiple constrained resources product mix problem using genetic algorithms, Intl J Prod Res 39 (2001a), 1897–1910.
[25] G.C. Onwubolu and M.A. Mutingi, Genetic algorithm approach to the theory of constraints product mix problems, Prod Plann Control 12 (2001b), 21–27.
[26] J. Pallant, SPSS survival manual: A step by step guide to data analysis using SPSS, 4th ed., McGraw-Hill, England, 2010.
[27] M.C. Patterson, The product-mix decision: A comparison of theory of constraints and labor-based management accounting, Prod Inventory Manage J 33 (1992), 80–85.
[28] G. Plenert, Optimized theory of constraints when multiple constrained resources exist, Eur J Oper Res 70 (1993), 126–133.
[29] A.J. Posnack, Theory of constraints: Improper applications yield improper conclusions, Prod Inventory Manage J 35 (1994), 85–86.
[30] D. Quadt and H. Kuhn, Capacitated lot-sizing and scheduling with parallel machines, back-orders, and setup carry-over, Naval Res Log 56 (2009), 366–384.
[31] Singh, S. Kumar, and M.K. Tiwari, Psycho-clonal based approach to solve a TOC product mix decision problem, Intl J Adv Manufact Technol 29 (2006), 1194–1202.
[32] V.A. Sobreiro and M.S. Nagano, A review and evaluation on constructive heuristics to optimise product mix based on the Theory of Constraints, Intl J Prod Res 50 (2012), 5936–5948.
[33] M.L. Spearman, On the theory of constraints and the goal system, Prod Oper Manage 6 (1997), 28–33.
[34] B.G. Tabachnick and L.S. Fidell, Using multivariate statistics, 5th ed., Pearson, Boston, 2007.
[35] J.Q. Wang, S.D. Sun, S.B. Si, and H.A. Yang, Theory of Constraints product mix optimisation based on immune algorithm, Intl J Prod Res 47 (2009), 4521–4543.
[36] B. Zheng, Y.C. Gao, and Y. Wang, The product-mix optimization with outside processing based on theory of constraints oriented cloud manufacturing, Appl Mech Mater 121–126 (2011), 1306–1310.
APPENDIX

Product A
Routing   Processing time (mean, standard deviation)   Setup time (mean, standard deviation)
8         Normal (0.5, 0.14)                            Gamma (20, 5)
7         Normal (0.5, 0.19)                            Gamma (122, 24)
7         Normal (1.5, 0.23)                            Gamma (115, 36)
3         Normal (1, 0.25)                              Gamma (27, 5)
5         Normal (0.5, 0.11)                            Gamma (32, 8)
3         Normal (0.5, 0.21)                            Gamma (37, 15)
5         Normal (0.5, 0.12)                            Gamma (26, 11)
4         Normal (0.5, 0.21)                            Gamma (33, 14)
4         Normal (0.5, 0.17)                            Gamma (20, 5)
5         Normal (0.5, 0.22)                            Gamma (20, 5)
2         Normal (0.4, 0.11)                            Gamma (20, 5)
2         Normal (0.5, 0.15)                            Gamma (20, 5)
7         Normal (1, 0.27)                              Gamma (132, 29)
7         Normal (1, 0.32)                              Gamma (112, 44)

Product C
Routing   Processing time (mean, standard deviation)   Setup time
9         Normal (0.5, 0.13)                            Gamma (16, 7)
12        Normal (1.35, 0.28)                           Gamma (22, 9)
4         Normal (0.5, 0.19)                            Gamma (30, 11)
6         Normal (0.85, 0.23)                           Gamma (47, 22)
6         Normal (0.75, 0.31)                           Gamma (42, 19)
12        Normal (2, 0.71)                              Gamma (48, 24)
16        Normal (0.3, 0.1)                             Gamma (28, 12)
6         Normal (1, 0.39)                              Gamma (46, 18)
12        Normal (1.4, 0.17)                            Gamma (22, 9)
13        Normal (4, 1.6)                               Gamma (32, 15)
13        Normal (5, 1.9)                               Gamma (29, 13)
14        Normal (2, 0.8)                               Gamma (126, 46)
15        Normal (1, 0.37)                              Gamma (64, 34)
16        Normal (0.3, 0.12)                            Gamma (35, 10)
Product B (raw materials B1 and B2)

B1
Routing   Processing time (mean, standard deviation)   Setup time
1         Normal (0.25, 0.11)                           Gamma (22, 6)
5         Normal (0.3, 0.16)                            Gamma (33, 10)
6         Normal (0.5, 0.15)                            Gamma (51, 19)
5         Normal (1, 0.21)                              Gamma (30, 8)
5         Normal (0.45, 0.19)                           Gamma (30, 14)
4         Normal (0.3, 0.14)                            Gamma (45, 17)
6         Normal (0.5, 0.15)                            Gamma (80, 13)
5         Normal (0.55, 0.13)                           Gamma (30, 5)
4         Normal (0.3, 0.11)                            Gamma (32, 9)
3         Normal (1, 0.27)                              Gamma (28, 11)
1         Normal (1.05, 0.31)                           Gamma (33, 14)
10        Normal (0.5, 0.17)                            Gamma (55, 16)
11        Normal (3, 0.51)                              Gamma (31, 10)

B2
Routing   Processing time (mean, standard deviation)   Setup time
9         Normal (0.5, 0.12)                            Gamma (17, 5)
12        Normal (1.25, 0.26)                           Gamma (22, 8)
4         Normal (0.4, 0.15)                            Gamma (35, 7)
Product D
Routing   Processing time (mean, standard deviation)   Setup time
1         Normal (1, 0.11)                              Gamma (14, 3)
2         Normal (2, 0.13)                              Gamma (15, 6)
4         Normal (0.3, 0.09)                            Gamma (20, 10)
5         Normal (1, 0.23)                              Gamma (30, 12)
4         Normal (0.2, 0.11)                            Gamma (22, 5)
5         Normal (1, 0.21)                              Gamma (28, 4)
7         Normal (0.75, 0.12)                           Gamma (31, 8)
8         Normal (2, 0.29)                              Gamma (15, 3)
13        Normal (1, 0.18)                              Gamma (21, 6)

Product E
Routing   Processing time (mean, standard deviation)   Setup time
1         Normal (0.5, 0.1)                             Gamma (13, 3)
4         Normal (0.5, 0.1)                             Gamma (20, 5)
5         Normal (0.5, 0.06)                            Gamma (35, 10)
4         Normal (1, 0.08)                              Gamma (37, 20)
5         Normal (0.5, 0.1)                             Gamma (32, 10)
4         Normal (0.5, 0.1)                             Gamma (20, 9)
6         Normal (1, 0.12)                              Gamma (30, 8)
7         Normal (1, 0.09)                              Gamma (15, 5)
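Reading these tables, each routing step carries a Normal processing time and a Gamma setup time whose two parameters the Product A header labels as mean and standard deviation. The sketch below shows, under that reading, how such randomness could be sampled for a few steps of Product A in a simulation replication. It is a rough illustration only, not the ARENA/OptQuest models used in the paper, and the conversion from mean/standard deviation to Gamma shape/scale is our assumption about the parameterization.

```python
# Rough illustration of how the appendix distributions could drive one simulated
# unit of Product A through the first few steps of its routing. The Gamma
# parameters are read as (mean, standard deviation) and converted to shape/scale;
# this is not the ARENA/OptQuest model from the paper.
import random

# (machine, processing Normal(mean, sd), setup Gamma(mean, sd)) copied from the
# Product A table above (first four routing steps only).
steps = [
    (8, (0.5, 0.14), (20, 5)),
    (7, (0.5, 0.19), (122, 24)),
    (7, (1.5, 0.23), (115, 36)),
    (3, (1.0, 0.25), (27, 5)),
]

random.seed(42)
total = 0.0
for machine, (p_mean, p_sd), (s_mean, s_sd) in steps:
    processing = max(0.0, random.normalvariate(p_mean, p_sd))  # truncate negative draws
    shape = (s_mean / s_sd) ** 2          # Gamma: mean = shape*scale, var = shape*scale^2
    scale = s_sd ** 2 / s_mean
    setup = random.gammavariate(shape, scale)  # drawn per step here; in practice per batch
    total += processing + setup
    print(f"machine {machine}: processing={processing:.2f}, setup={setup:.1f}")

print(f"sampled time for one unit over these steps: {total:.1f}")
```

Because the setup draws are large relative to the processing times, small changes in batching or sequencing can shift which machine actually binds, which is the kind of dynamism the dynamic comparisons in Section 6.2 are designed to expose.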