

Complexity and Workload Considerations in Product Mix Decisions under the Theory of Constraints
Davood Golmohammadi,1 S. Afshin Mansouri2

1 University of Massachusetts Boston, Management Science and Information Systems, Boston, Massachusetts

2 Brunel University London, Brunel Business School, Uxbridge, Middlesex, UB8 3PH, United Kingdom

Received 1 July 2014; revised 14 May 2015; accepted 15 May 2015


DOI 10.1002/nav.21632
Published online 9 June 2015 in Wiley Online Library (wileyonlinelibrary.com).

Abstract: The literature on the product mix decision (or master production scheduling) under the Theory of Constraints (TOC),
which was developed in the past two decades, has addressed this problem as a static operational decision. Consequently, the devel-
oped solution techniques do not consider the system’s dynamism and the associated challenges arising from the complexity of
operations during the implementation of master production schedules. This paper aims to address this gap by developing a new
heuristic approach for master production scheduling under the TOC philosophy that considers the main operational factors that
influence actual throughput after implementation of the detailed schedule. We examine the validity of the proposed heuristic by
comparison to Integer Linear Programming and two heuristics in a wide range of scenarios using simulation modelling. Statistical
analyses indicate that the new algorithm leads to significantly enhanced performance during implementation for problems with
setup times. The findings show that the bottleneck identification approach in current methods in the TOC literature is not effective
and accurate for complex operations in real-world job shop systems. This study contributes to the literature on master production
scheduling and product mix decisions by enhancing the likelihood of achieving anticipated throughput during the implementation
of the detailed schedule. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 357–369, 2015

Keywords: job-shop operations; theory of constraints; product mix decisions; master production schedule

1. INTRODUCTION

In operations and production systems with limited resources and capacity, one of the major managerial decisions is the product mix optimization problem: the type and quantity of products to produce. It is one of the main management challenges in a production system, from the strategic level to detailed scheduling. It has a direct impact on several elements of the manufacturing enterprise's performance, including profit, work-in-process (WIP), and customer service.

The product mix heuristic generates a master production schedule (MPS), the goal of which is to maximize the firm's net profitability. Optimizing utilization of resources to achieve the maximum throughput and profit in a static situation may not be sufficient to address the challenges of real operations in a dynamic environment. Dynamic operations require careful consideration of vital factors that influence the complexity of operations. To clarify the concept of static versus dynamic in this study, let's explain it with an example. Consider a linear programming approach to determine the optimal number of products for production. The actual throughput of the operations is not likely to be identical to the original optimal production plan. The roles of scheduling factors such as WIP, queues, setup time, and the sequence of operations are ignored, and the result is a static plan. For instance, delays in starting the operations schedule mean that there will be blockage and starvation of resources that is not considered in the static schedule. In practice, such an approach cannot capture the challenges of a dynamic situation. After schedule implementation, we would expect to see more deviation from the optimal solution of a static situation in complex operations. For example, different parts of one product may need to use a bottleneck machine, or processing and setup times for the same operation of different products may differ. We will discuss these points further in the context of the Theory of Constraints (TOC).

The TOC is an operations philosophy that focuses on constraints in a system to improve its overall throughput, and it has drawn the attention of scholars addressing the

Correspondence to: Afshin Mansouri (Afshin.Mansouri@brunel.ac.uk)


358 Naval Research Logistics, Vol. 62 (2015)

Figure 1. A summary of the research steps.

product mix problem [1, 5, 6, 9, 10, 12, 14, 17, 20, 22, 27, 31–33, 35, 36].

TOC was first applied in production planning and scheduling, and several algorithms to determine an optimized MPS have been developed based on TOC. However, most of these algorithms are validated based on simple examples and may not be particularly effective for large-scale and real-world operations in a job-shop system. Linhares [19] has criticized the current algorithms and claimed that an effective and optimum heuristic is simply impossible. Existing algorithms ignore the significant impact of inherent randomness in process times, which contributes to delays, inventory accumulation, idle time, and underutilization. To the best of our knowledge, no prior study in the TOC literature has addressed product mix decisions in the presence of constraints or the impact of complex and dynamic operations on actual throughput.

Moreover, many instances in the literature benchmark the performance of solution techniques based on the throughputs of small problems involving simple production flow. Such simplified cases do not capture the complexity of real-world operations such as the sequence of operations [15]. Therefore, the basic question for production planners is: which master production schedule is most effective?

In summary, our main research questions in the context of TOC are:

1. What are the key factors that impact the identification of constraints in dynamic and complex operations?
2. How accurate and effective are the current approaches for constraints identification in the TOC?
3. How realistic are the current algorithms or optimized solutions in a dynamic system, and why?

We investigate the effectiveness and drawbacks of the most common algorithms, and demonstrate some of the fundamental factors that they do not take into account. We argue that the common method of identifying constraints, which is based on the difference between available and required capacities in a static situation, is not effective for dynamic operations.

We develop a novel algorithm called COLOMAPS: COmplexity and LOad driven MAster Production Scheduling. In this algorithm, we define two operational factors: complexity of operations and capacity shortage. Complexity is a function of the number of products using a machine and the number of times a machine is needed by all products. Capacity shortage is the ratio of available capacity to required capacity. This research defines the complexity level of operations. We consider capacity shortage and complexity as the two main factors for the generation of data sets. Experimental design, simulation modeling, and statistical techniques are used for performance evaluation. We evaluate the performance of COLOMAPS by implementing it in complex case study operations and by comparing it to other methods in the literature.

The article is organized as follows. Section 2 reviews the literature, Section 3 describes the research methodology, Section 4 characterizes the algorithm, Section 5 presents the method of implementation and experimental design, Section 6 presents statistical analysis and discussion, and Section 7 reviews contributions and concludes. A summary of the research steps is shown in Figure 1.

2. LITERATURE REVIEW

Although detailed job-shop scheduling has received significant attention in the literature [10, 30, 18], there is little in-depth exploration of the influence of product mix decisions
on detailed scheduling during implementation in the TOC approach. MPS planning under TOC is developed based on its first two principles: (1) to identify the system's constraint(s) and (2) to determine how to exploit them. These principles are well illustrated by several studies [9, 11, 27]. Several algorithms have been developed based on TOC to determine an optimized MPS. Some of the algorithms [11, 17, 20, 27] are proven to find the optimal solution only in examples with a single constraint. In some cases, the results are inefficient and produce nonoptimal or infeasible solutions when certain new product alternatives are available. Posnack [29] and Maday [21] argue that the TOC approach should be properly used, and for noninteger solutions, partial products should be allowed to be manufactured in the next planning horizon. Others have noted that problems arise when there are multiple constraints, as the TOC approach might generate a nonoptimal solution [17, 20] or an infeasible solution [4, 28].

Other scholars have developed improved algorithms [1, 3, 9, 12, 14, 22, 31, 35, 36], but drawbacks still exist in their methods. Primarily, most of these algorithms are validated based on simple examples and may not effectively scale up to large-scale or real-world operations in a job-shop system with complex operations scheduling. Moreover, the bottleneck identification steps used by existing algorithms do not incorporate other main factors influencing operations, such as the sequence of processes and the hidden role of queues. Finally, the performance and accuracy of existing methods have been evaluated against linear optimization results in a static environment. Dynamism of operations can affect all of the developed plans and add to the challenge of planning. Linhares [19] has criticized the current algorithms and provided a good overview of them.

Other scholars have used intelligent search algorithms, such as Tabu search (TS) [23], genetic algorithms (GA) [24, 25], and a hybrid Tabu simulated annealing approach [22], to address large-scale problems as one of the shortcomings of common methods. However, these intelligent search approaches have overlooked the explicit process of the TOC philosophy and thus would incur low convergence efficiency to local optima or, in some instances, provide infeasible solutions [35]. Sobreiro and Nagano [32] provide a review and evaluation of constructive heuristics to optimize product mix decisions based on TOC and emphasize the lack of applicability of common algorithms in practice.

Two of the most commonly applied heuristic algorithms ([9] and the recent algorithm of Sobreiro and Nagano [32]) are used as benchmarks for our proposed algorithm. Both Fredendall and Lea [7] and Sobreiro and Nagano [32] address the problem as a static product mix decision model and try to identify the real bottleneck where it is not easily identified. Fredendall and Lea [7] propose an approach to identifying system constraints in the first step. In the next step, they provide a detailed heuristic approach to exploit these system constraints. Sobreiro and Nagano [32] discuss shortcomings of prior solution techniques in failing to handle situations in which multiple constraints exist in the system. To address this gap, they propose a heuristic that first identifies the dominant bottleneck and then develops an initial solution. The solution is improved in the next step by a greedy neighbourhood search. However, the current algorithms in the TOC approach, including Fredendall and Lea [7] and Sobreiro and Nagano [32], miss the factors which impact the identification of the real bottleneck during operations, especially complex operations.

We address these drawbacks in the proposed algorithm (COLOMAPS) and discuss the impact of key drivers for an effective MPS.

3. RESEARCH METHODOLOGY

Identifying constraints, which requires meticulous analysis, may not be easy in real-world job-shop operations. We develop a novel algorithm called COLOMAPS to generate an MPS under the TOC approach. The algorithm has three phases: identification of the constraints, initial MPS development, and improvement of the initial MPS. We propose a novel approach for the identification of constraints and define two main operational factors: complexity of operations and capacity utilization.

To conduct a comprehensive study, we applied the COLOMAPS algorithm to a production line inspired by a real case in the auto industry. This is a compelling case because it is similar to most job-shop systems in real-world operations and it contains a high level of complexity. Additionally, we investigated two of the most complex instances of job-shop operations in the literature [2, 12].

To determine the performance of the COLOMAPS algorithm, we created MPSs for these job-shop operations using our new algorithm, and using one of the most commonly applied heuristic algorithms [9], the recent algorithm of Sobreiro and Nagano [32], and Integer Linear Programming (ILP) as benchmarks.

A set of experiments was designed to verify the effectiveness of the COLOMAPS algorithm in comparison with the aforementioned benchmark algorithms in a wide range of problem sets in static and dynamic operations. Then, the results of the MPS development based on all four methods were evaluated through simulation modeling. Ultimately, statistical analysis was conducted to determine whether there are significant differences between the performance of the COLOMAPS algorithm and the benchmark approaches.
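To make the static benchmark concrete, the product-mix decision that the ILP solves can be sketched as follows. This is a minimal brute-force version on a hypothetical two-product, two-machine instance; the data, and the exhaustive enumeration in place of a real ILP solver, are our own illustrative assumptions, not the paper's model or case data.

```python
from itertools import product as cartesian

# Hypothetical instance (illustrative only, not from the paper).
CM = [45, 60]            # contribution margin per unit of products P1, P2
D = [50, 40]             # market demand per product
t = [[2, 4],             # t[i][j]: unit process time of product i on machine j
     [3, 3]]
C = [240, 300]           # available capacity (minutes) of machines M1, M2

# Maximize total throughput TP = sum_i CM_i * Q_i subject to
# machine capacities (sum_i t_ij * Q_i <= C_j) and demand (Q_i <= D_i).
best_tp, best_mix = -1, None
for Q in cartesian(range(D[0] + 1), range(D[1] + 1)):
    feasible = all(
        sum(t[i][j] * Q[i] for i in range(2)) <= C[j] for j in range(2)
    )
    if feasible:
        tp = sum(CM[i] * Q[i] for i in range(2))
        if tp > best_tp:
            best_tp, best_mix = tp, Q

print(best_mix, best_tp)  # optimal static mix and its theoretical throughput
```

As the paper stresses, this "optimal" mix is only a static plan: queues, setup times, and sequencing during implementation can make the realized throughput fall short of `best_tp`.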
4. THE COLOMAPS ALGORITHM

To enhance the current practice of MPS development in the presence of constraints in product mix decisions, we develop a heuristic approach (COLOMAPS) that aims to maximize the expected throughput during the implementation of the detailed schedule. The performance of common algorithms for MPS development under the TOC approach is evaluated based on the theoretical throughput (static situation), but not after implementation, that is, considering the dynamics of operations.

The COLOMAPS algorithm involves three phases. We emphasize that all resources may play the role of constraints, even those with excess capacity to meet demand. Therefore, our bottleneck identification differs from the general approach in the TOC literature; that is, we do not consider that resources with negative available capacities (i.e., the resources whose demands exceed their capacities) in a static situation are automatically bottlenecks.

All resources are assigned a score that exposes their potential to become a constraint. A higher score means that the resource can have more impact on the operations as a bottleneck. The machine with the highest score is considered the primary bottleneck. As mentioned before, in this algorithm we define two operational factors: complexity of operations and capacity shortage. Complexity is a function of the number of products using a machine and the number of times a machine is needed by all products. Capacity shortage is the ratio of available capacity to required capacity. The combination of complexity and capacity shortage is considered the main criterion to identify bottlenecks. Details of the algorithm are provided as follows.

4.1. Notation

Indexes
• i, v, u, g, r: indexes for products; i, v, u, g, r = 1, . . . , n
• j, k, ℓ: indexes for machines; j, k, ℓ = 1, . . . , m
• Pi: product i
• Mj: machine j

Variables
• xij: 1 if product i requires machine j; 0 otherwise
• ξj: number of products that need machine j; ξj = Σ_{i=1}^{n} x_{ij}
• nij: number of times machine j is needed for processing product i; nij ≥ 0
• ψj: number of times machine j is required for processing a unit of all products; ψj = Σ_{i=1}^{n} n_{ij}
• Qi: production quantity of product i
• Q⃗: vector of the production quantities

Parameters
• CMi: contribution margin of product i (sale price minus raw materials costs)
• Di: demand for product i
• D⃗: demand vector for all products
• BMk: kth bottleneck machine
• CL: critical machine
• PRij: priority index of product i on machine j
• ti,j: unit process time of product i on machine j
• t′j: remaining time (capacity) on machine j
• Cj: available capacity of machine j
• αj: degree of operational complexity of machine j; 0 ≤ αj ≤ 1
• βj: degree of capacity constraint (being a bottleneck) for machine j; 0 ≤ βj ≤ 1
• γj: degree of criticality of machine j; 0 ≤ γj ≤ 1
• δg,r: the impact of the tradeoff between products g (giver) and r (receiver) on throughput. A value more (or less) than 1 indicates a positive (or negative) impact on throughput, whilst a value of 1 indicates a null impact.

4.2. Phase 1: Identification of the Critical Machine

In this phase, machines are sorted based on their degree of criticality. The first machine in this order will be considered the "critical machine" (CL). This is done as follows:

a. Calculate the degree of operational complexity for machines (αj) using the following formula:

αj = norm(ψj) + norm(ξj), j = 1, . . . , m. (1)

where norm(·) denotes the normalized value and is calculated as follows:

norm(x) = (x − min)/(max − min) (2)

b. Calculate the percentage of capacity difference (ΔCj) for all resources as follows:

ΔCj = ((Cj − Σ_{i=1}^{n} t_{i,j} · Di)/Cj) × 100; j = 1, . . . , m. (3)

A resource with ΔCj ≤ 0 will be considered a "bottleneck."

c. Calculate the degree of capacity constraint (βj) for all machines using the following formula:

βj = norm(−ΔCj) (4)

where the normalized value is again calculated using Eq. (2).
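Steps (a)-(c) can be sketched as follows on a toy three-machine example with hypothetical data. The `norm` helper implements Eq. (2), with a guard for the degenerate all-equal case that the paper does not discuss; the criticality score of the next step is included for completeness, using the weights w1 = w2 = 0.5 that the paper later reports as best in its experiments. Note that Eq. (1) as stated sums two normalized terms, so `alpha` can reach 2 for a machine that is extreme on both counts.

```python
# Hypothetical 3-machine example of the Phase 1 scoring (Eqs. (1)-(5)).
psi = [14, 6, 9]        # psi_j: operations on machine j over one unit of all products
xi = [5, 2, 3]          # xi_j: number of products that visit machine j
C = [480, 480, 480]     # C_j: available capacity (minutes)
load = [520, 300, 480]  # sum_i t_ij * D_i: required time on machine j

def norm(values):
    """Eq. (2): rescale a list of values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # degenerate case: all values equal (assumption)
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

m = len(C)
alpha = [p + x for p, x in zip(norm(psi), norm(xi))]      # Eq. (1)
dC = [(C[j] - load[j]) / C[j] * 100 for j in range(m)]    # Eq. (3)
beta = norm([-d for d in dC])                             # Eq. (4)
w1 = w2 = 0.5                                             # weights with w1 + w2 = 1
gamma = [w1 * a + w2 * b for a, b in zip(alpha, beta)]    # criticality score

# Machines with dC_j <= 0 are static bottlenecks; the highest gamma is the CL.
critical = max(range(m), key=lambda j: gamma[j])
print(critical, [round(g, 2) for g in gamma])
```

In this toy instance, machine 0 is both the most complex and the only statically overloaded machine, so it scores highest and would be selected as the critical machine.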


d. Calculate the degree of criticality (γj) for all machines as follows:

γj = w1 · αj + w2 · βj (5)

where w1 and w2 are the weights of αj and βj, which need to be set in the interval [0, 1] such that:

w1 + w2 = 1. (6)

e. Sort the machines in descending order of γj. Consider the first machine the CL and, along with the nondominated bottlenecks, carry it to the further steps. A bottleneck machine k is considered dominated by another bottleneck machine ℓ (represented Mk ≺ Mℓ) if

ΔCk ≥ ΔCℓ, and (7)
t_{v,k} ≤ t_{v,ℓ} ∀v ∈ {1, . . . , n} ∧ ∃u ∈ {1, . . . , n} | t_{u,k} < t_{u,ℓ}. (8)

Bottleneck machines that are not dominated by others constitute the set of nondominated bottlenecks. These are active constraints whose loads need to be carefully monitored to ensure the feasibility of the MPS at any given stage. Adapted from the multiobjective optimization literature, the following lemma supports the above by reducing the number of bottleneck machines against which the feasibility of solutions needs to be checked.

LEMMA 1: If product mix vector Q⃗ is feasible for a nondominated machine ℓ under the demand vector D⃗, then it is feasible for all machines that are dominated by ℓ.

PROOF: The proof can be derived from the definition of dominated machines in Eqs. (7) and (8). □

4.3. Phase 2: Initial MPS Design

An initial MPS is developed based on the CL and the bottlenecks in the following steps:

f. Consider the first machine as the CL; develop a feasible MPS through the following procedure, which is partly inspired by the algorithm of Fredendall and Lea [7] (referred to as FL97):
   f.1. Sequence products in nonincreasing order of the Ri ratio, where Ri = CMi/t_{i,CL}. In the event of a tie, give priority to the product with the higher CMi.
   f.2. Develop the initial feasible MPS by allocating the market demand of products (Pi) in the above order. Make sure the MPS is feasible at all stages, that is, there is always enough capacity on all the critical machines identified in step (e).
   f.3. Calculate the total throughput (TP) of the initial MPS: TP = Σ_{i=1}^{n} Pi × CMi.

4.4. Phase 3: Improve the Initial MPS

The initial MPS is improved through the stepwise procedure outlined below:

g. Examine whether the total throughput can be improved by trading off production quantities (Pi) of products ranked higher with those ranked lower with respect to the CL. This idea is proposed intuitively, based on the assumption that higher-ranked (i.e., more profitable) products cause more complexity during implementation. Therefore, trading off production quantities of higher-ranked with lower-ranked products whilst keeping the same throughput is expected to lead to a more robust solution. In other words, we conjecture that for a given throughput, the robustness of the product mix vector Q⃗, in which products are ranked according to their R ratio, will increase by trading production quantities from higher-ranked to lower-ranked products. To improve robustness, examine potential trade-offs between P[u] and P[v], where [·] represents the order of products in the sequence, for u = 1, . . . , n−1 and v = u+1, . . . , n. Accept the tradeoffs that improve throughput. To identify advantageous tradeoffs, compare the contribution margins of the two products on the primary bottleneck or BN1 (i.e., the machine whose capacity runs out first). Note that in such comparisons it is important to keep the order of products developed in step f. Then take the following steps:
   g.1. Calculate PR_{i,BN1} ratios for all products, where PR_{i,BN1} = CMi/t_{i,BN1}. This ratio represents the priority index of products based on the primary bottleneck that runs out of capacity first (and in many cases could be different from the CL).
   g.2. Consider the products whose demands are not fully satisfied as potential receivers. Let t′_{BN1} denote the remaining time on the primary bottleneck (BN1); that is, the machine whose capacity is exhausted or insufficient to produce further units of any product. In the case of a tie, consider the machine with the higher degree of criticality (γ) the primary bottleneck. Determine whether there are other products higher on the list (i.e., with
higher priority) and consider them "givers". A tradeoff can be justified between products g (giver) and r (receiver) if it has a positive impact on throughput, that is, δg,r > 1:

δg,r = PR_{r,BN1} · (t′_{BN1} + t_{g,BN1}) / CMg > 1. (9)

LEMMA 2: Trading production quantities from product g to product r (in the feasible region) has a positive (or negative) impact on throughput if δg,r > 1 (δg,r < 1). Tradeoffs with δg,r = 1 have no effect on throughput.

PROOF: The proof can be derived by expanding Eq. (9). □

   g.3. Identify the most rewarding pair for a tradeoff between products g and r.
   g.4. Complete the tradeoff between products g and r step-by-step by decreasing the production quantity of g one unit at a time and exploiting the released capacity to increase the quantity of r, as long as the resultant MPS remains feasible. Stop when no further exchange can be made.
   g.5. Identify the new primary bottleneck (BN1) and go back to step g.1 until no further rewarding tradeoff can be found.
h. Report the resultant MPS as the final solution.

5. IMPLEMENTATION

In the following, the case problems are reviewed and the experimental design for MPS development is illustrated.

5.1. The Case Problems

Three job-shop production systems were considered for implementation and analysis in this study: a complex case inspired by the auto industry that operates based on a job-shop system, and two of the most complex job-shop operations reported in the literature by Hsu and Chung [12] and Atwater and Chakravorty [2]. In our case study, an automotive parts manufacturer produces special exhaust parts based on a job-shop system. We selected the operations of five products: A, B, C, D, and E, with 2,000, 5,000, 2,000, 3,000, and 3,000 units in demand, respectively, over a one-month planning horizon. Product B consisted of two raw materials, B1 and B2. There were 16 different machines, with just one of each type available for operations. Table 1 shows the operations routings of the products. The processing times and capacities are presented in the Appendix (Tables 7–10). Table 10 presents the demand and available capacity of each machine and the marginal profit for each product.

Table 1. Operations routings of the products.

Products   Operations routings (machine's code)
A          8,7,7,3,5,3,5,4,4,5,2,2,7,7
B1         1,5,6,5,5,4,6,5,4,3,1,10,11
B2         9,12,4,11
C          9,12,4,6,6,12,16,6,12,13,13,14,15,16
D          1,2,4,5,4,5,7,8,13
E          1,4,5,4,5,4,6,7

We selected this case study because it has a high degree of complexity and is similar to most real-world job-shop systems in several ways, including realistic setup times and different processing times. The complexities of the case can be summarized as follows:

• The difference between the available and required capacity for some of the resources (machines) was not significant (this may shift the bottlenecks);
• Different parts of one product need to use the constraint;
• One constraint feeds another constraint;
• The sequence of operations showed that most of the machines (constraint and nonconstraint) were used by most of the products; and
• Processing and setup times were different for the same operation of different products.

The details of the two other cases in the literature are discussed by Atwater and Chakravorty [2] and Hsu and Chung [12]. The literature does not contain examples with the high level of complexity of our case study; therefore, these cases represent the most complex examples that we could identify in the literature. The proposed and benchmark algorithms are used to generate the MPS based on each case study's operations for further evaluation and analysis.

5.2. Problem Generation

A set of experiments was designed to verify the effectiveness of the COLOMAPS algorithm in comparison with the algorithm of Fredendall and Lea [7], denoted by FL97, the algorithm of Sobreiro and Nagano [32], denoted by SN12,
and ILP in a wide range of problem sets. To evaluate the effectiveness of the COLOMAPS algorithm and to find out whether it produces significantly different results in a wide range of problem instances, we generated a set of different operations scenarios based on the case problems. We considered capacity shortage and complexity as the two main factors for the generation of data sets. Each factor was considered at two levels: low and high. A problem was considered high in terms of capacity shortage if more than half of its machines were under capacity in responding to market demand. The level of complexity was determined based on the number of complex machines, that is, those involved with complex operations. Two main factors contribute to the complexity of the operations assigned to a machine [8]. For a given machine j, these are the complexity of flow, or the number of times a machine is needed by all products (denoted by Pa as a function of ψj), and the number of products that need the machine (represented by Pb as a function of ξj). For a problem with n products, we define Pa and Pb based on intervals for ψ and ξ as shown in Tables 2 and 3, respectively.

Table 2. Complexity assignment based on operations flow (Pa).

      ψj ≤ n    n < ψj ≤ 2n    ψj > 2n
Pa    n         2n             3n

Table 3. Complexity assignment based on number of products (Pb).

      0 < ξj ≤ n/4    n/4 < ξj ≤ n/2    n/2 < ξj ≤ 3n/4    3n/4 < ξj ≤ n
Pb    n/4             n/2               3n/4               n

A machine was considered complex (i.e., dealing with complex operations) if its overall assigned complexity (Pa + Pb) was greater than or equal to 1.5n. The complexity of a problem was considered high if more than half of its machines were complex.

Using the aforementioned classification, 32 test problems were generated from three base problems: our case study operations and two widely used benchmark problems from the literature, HsuC98 (adapted from [12]) and AtwaterC02 (adapted from [2]). These include eight test problems for each of the four combinations of complexity and capacity shortage: Low–Low, Low–High, High–Low, and High–High. For instance, we changed the original operations flow of HsuC98 twice (resulting in two new operations flows) to fall into the category of Low Complexity and Low Capacity Shortage. This is shown as HsuC98 (2) in region 1 of Table 4. The same approach was followed to generate all 32 problems in the four regions. Table 4 shows the distribution of the 32 base test problems in each region.

Table 4. Distribution of test problems (with and without setup).

                               Complexity level
                        Low                       High
Capacity      Low   "8 test problems in       "8 test problems in
shortage             region 1"                 region 2"
level                HsuC98 (2)                HsuC98 (3)
                     AtwaterC02 (3)            AtwaterC02 (2)
                     Case Study (3)            Case Study (3)
              High  "8 test problems in       "8 test problems in
                     region 3"                 region 4"
                     HsuC98 (3)                HsuC98 (2)
                     AtwaterC02 (3)            AtwaterC02 (3)
                     Case Study (2)            Case Study (3)

Each problem was solved using the COLOMAPS algorithm as well as the three benchmark solution approaches: FL97, SN12, and ILP. The COLOMAPS algorithm and the two heuristic algorithms, that is, FL97 and SN12, were coded in C++.1 To set the values of w1 and w2 for the COLOMAPS algorithm, a wide range of alternative values was examined for w1, from 0.1 (w2 = 0.9) to 0.9 (w2 = 0.1), in the case study problem. The best performance was observed at w1 = w2 = 0.5, so we used these values for the experiments. Excel Solver was used to solve the ILP models. The resultant throughputs of the COLOMAPS algorithm and the benchmarks were compared in two modes: static (as generated by the solution approaches) and dynamic (as observed during simulation). A paired comparison represented the deviation of the performance of the COLOMAPS algorithm from that of each benchmark technique.

The production lines described in Sec. 5 were coded in ARENA 13 and, based on the scenarios generated in the experimental design section (Sec. 6.1), each scenario was simulated to determine the throughputs of each MPS. ARENA software allowed for graphic model verification, statistical analysis, multiple scenarios, and varying input parameters. We ran the simulation model for a period of 14,400 minutes (8 hours per day for 30 days) for the case study (Case Study), 2,400 minutes for job-shop system 1 (HsuC98), and 2,040 minutes for job-shop system 2 (AtwaterC02). The animation capability and verification techniques in ARENA ensured that the simulation model correctly considered all the assumptions made and parameters set in all three systems. We assumed that there were no defective parts or machine failures. During the period of operations, and based on the available capacity of machines, demands were satisfied as much as possible. We ran the simulation models for the defined short period of operations for each system and after that, operations were

1 The C++ code and data sets used in this paper are available at the following URL: https://github.com/samansouri/mps-toc.
364 Naval Research Logistics, Vol. 62 (2015)

stopped. Following Kelton et al. [13], there was no need to consider a warm-up period to reach a steady state in this case. This means that the designed models start out empty of parts with all resources idle. This is a terminating-system situation, and no warm-up period is needed to discard the initial conditions.

The drum-buffer-rope technique was used for detailed scheduling in all the models. Moreover, the priority rules for operations on a machine were defined in a flexible manner to maximize throughput. Products had priorities based on their marginal contributions, but if a product was at the final stage of operations, the original priority rule was overridden to minimize the work in process and maximize throughput. All these rules were evaluated through simulation modelling tests for verification purposes.

The main challenge was to determine the best values for the scheduling variables. To determine the detailed schedule with the best performance, we used the OptQuest optimization software. OptQuest automatically searches for optimal solutions within ARENA simulation models. In other words, for each MPS implementation, input variables for detailed scheduling, such as interarrival times between batches, release times for raw materials, and arrival batch sizes, were manipulated by OptQuest to find the values that best enhance throughput. OptQuest thus eased the challenge of finding an optimal or very satisfactory solution based on the best set of input variables. The models of the three job-shop systems were simulated for 1,000 runs with 30 iterations in each run.

6. RESULTS, ANALYSIS, AND DISCUSSION

Extensive statistical analyses were carried out to answer the following two questions:

1. Are there significant differences between the performance of the COLOMAPS algorithm and the benchmark approaches in a static mode in terms of throughput?
2. Are there significant differences between the performance of the COLOMAPS algorithm and the benchmark approaches in a dynamic mode in terms of throughput?

These questions aim to test the merits of the COLOMAPS algorithm as compared to the benchmark algorithms. They are examined in the following subsections.

6.1. Static Comparisons

The following hypotheses were defined to assess the difference in performance of the algorithms in static mode with respect to the static throughput:

• H1a: There is no difference between COLOMAPS and ILP.
• H1b: There is no difference between COLOMAPS and FL97.
• H1c: There is no difference between COLOMAPS and SN12.
• H1d: There is no difference between ILP and FL97.
• H1e: There is no difference between ILP and SN12.
• H1f: There is no difference between FL97 and SN12.

Paired-sample t-tests [34] were conducted using SPSS [26] to evaluate these hypotheses in terms of resultant throughput (TP) in static mode. As mentioned before, setup requirements did not affect the static performance of the algorithms (none of the algorithms consider setup as an input), so we confined the study to 32 paired comparisons between the base test problems. Table 5 displays the results of the static comparisons on these problem sets. It should be noted that the fairly large magnitude of the standard deviations is due to the difference between the case study and the other two test problems: throughputs in the case study are in the scale of millions, whilst the others are in thousands.

The results indicate that in the static mode, both ILP and FL97 outperform COLOMAPS (with P = 0.024 and 0.073, respectively), whilst there is no significant difference between COLOMAPS and SN12. Moreover, ILP outperforms FL97 (P = 0.092) and SN12 (P = 0.048), but there is no significant difference between FL97 and SN12. As a result, the null hypotheses H1a, H1b, H1d, and H1e are rejected, whilst H1c and H1f are accepted.

Such results are predictable, as the COLOMAPS algorithm, unlike the benchmark algorithms, does not seek to optimize static throughput by solving a simplified problem scenario that leaves out the complexities that might affect performance during implementation. Considering the heuristic nature of FL97 and SN12, the superiority of ILP is justifiable.

6.2. Dynamic Comparisons

To test the performance of the resultant MPSs during implementation in dynamic conditions, the following hypotheses are proposed with regard to the actual throughputs:

• H2a: There is no difference between COLOMAPS and ILP.
• H2b: There is no difference between COLOMAPS and FL97.
• H2c: There is no difference between COLOMAPS and SN12.
• H2d: There is no difference between ILP and FL97.
• H2e: There is no difference between ILP and SN12.
• H2f: There is no difference between FL97 and SN12.
Golmohammadi and Mansouri: Complexity and Workload Considerations
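The flexible dispatching rule described above, where products are ranked by their marginal contribution (CM) but a job at its final operation pre-empts that ranking, can be sketched as follows. This is an illustration only; the job fields and figures are hypothetical, not taken from the study's simulation models.

```python
def pick_next_job(queue):
    """Choose the next job on a machine: a job at its final operation
    overrides the usual contribution-margin (CM) priority, so finished
    work leaves the shop quickly and WIP is kept low."""
    final_stage = [j for j in queue if j["remaining_ops"] == 1]
    candidates = final_stage if final_stage else queue
    return max(candidates, key=lambda j: j["cm"])

queue = [
    {"name": "X", "cm": 3000, "remaining_ops": 4},  # high CM, far from done
    {"name": "Y", "cm": 1500, "remaining_ops": 1},  # low CM, one operation left
]
chosen = pick_next_job(queue)  # Y is picked despite its lower CM
```

When no job in the queue is at its final operation, the rule falls back to the plain CM ranking.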
Table 5. Paired sample t-tests between static throughputs (N = 32).

Pair     Comparison        Mean Difference   Std. Dev.   t-value   P-value
Pair 1   COLOMAPS—ILP      −276,710**        660,055     −2.371    0.024
Pair 2   COLOMAPS—FL97     −213,576*         650,350     −1.858    0.073
Pair 3   COLOMAPS—SN12     35,889            720,298     0.282     0.780
Pair 4   ILP—FL97          63,134*           205,643     1.737     0.092
Pair 5   ILP—SN12          312,599**         859,729     2.057     0.048
Pair 6   FL97—SN12         249,465           877,125     1.609     0.118

*P < 0.10; **P < 0.05.
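The columns of Table 5 come from standard paired-sample t-tests: the mean paired difference divided by its standard error. A minimal sketch of that computation, using made-up throughput figures rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-sample t statistic for two matched lists of throughputs.
    Returns (mean difference, sample std. dev. of differences, t)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    md, sd = mean(d), stdev(d)       # stdev uses the sample (n - 1) form
    t = md / (sd / math.sqrt(n))     # t with n - 1 degrees of freedom
    return md, sd, t

# Illustrative throughputs for one pair of algorithms (not the study's data)
algo_a = [1200, 1500, 1100, 1700, 1300]
algo_b = [1000, 1600, 1050, 1500, 1250]
md, sd, t = paired_t(algo_a, algo_b)
```

With n − 1 degrees of freedom, a statistics package such as SPSS then converts t into the two-tailed P-values reported in the tables.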

Table 6. Paired sample t-tests between dynamic throughputs (N = 32).

             Pair      Comparison        Mean Difference   Std. Dev.   t-value   P-value
No setup     Pair 1    COLOMAPS—ILP      −67,745           410,324     −0.934    0.358
             Pair 2    COLOMAPS—FL97     100,977           337,801     1.691     0.101
             Pair 3    COLOMAPS—SN12     432,056**         903,836     2.704     0.011
             Pair 4    ILP—FL97          168,722**         416,744     2.290     0.029
             Pair 5    ILP—SN12          499,802***        993,557     2.846     0.008
             Pair 6    FL97—SN12         331,080**         871,205     2.150     0.039
With setup   Pair 7    COLOMAPS—ILP      206,320*          611,250     1.909     0.066
             Pair 8    COLOMAPS—FL97     274,589***        508,232     3.056     0.005
             Pair 9    COLOMAPS—SN12     462,629***        769,782     3.400     0.002
             Pair 10   ILP—FL97          68,269            507,345     0.761     0.452
             Pair 11   ILP—SN12          256,309**         693,211     2.092     0.045
             Pair 12   FL97—SN12         188,040**         505,610     2.104     0.044

*P < 0.10; **P < 0.05; ***P < 0.01.
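The star notation and the accept/reject decisions in Tables 5 and 6 follow the conventional significance thresholds given in the table footnotes, with rejection at the 10% level. A small sketch of that decision rule, applied to the with-setup P-values from Table 6:

```python
def significance(p):
    """Map a P-value to the star notation used in Tables 5 and 6."""
    if p < 0.01:
        return "***"
    if p < 0.05:
        return "**"
    if p < 0.10:
        return "*"
    return ""  # not significant: the null hypothesis is accepted

# Dynamic-mode P-values for the with-setup category (Pairs 7-12 of Table 6)
p_values = {"H2a": 0.066, "H2b": 0.005, "H2c": 0.002,
            "H2d": 0.452, "H2e": 0.045, "H2f": 0.044}

# A hypothesis is rejected when its comparison earns at least one star
rejected = {h for h, p in p_values.items() if significance(p)}
```

Running this reproduces the conclusion in the text: for problems with setup, every null hypothesis except H2d is rejected.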

Unlike in the static mode, it was anticipated that setup would affect actual throughputs during implementation. As a result, paired comparisons were carried out on the 32 test problems in two categories: without setup and with setup. Table 6 presents the results of the paired t-tests in the dynamic mode using SPSS.

As shown in Table 6, in the dynamic mode and in problems without setup, COLOMAPS outperforms SN12 (P = 0.011), but the differences between COLOMAPS and the two other benchmarks are not significant. In this category, ILP performs better than both FL97 (P = 0.029) and SN12 (P = 0.008), whilst FL97 outperforms SN12 (P = 0.039). Conversely, in problems with setup, COLOMAPS performs significantly better than all the benchmark algorithms, that is, compared with ILP (P = 0.066), FL97 (P = 0.005), and SN12 (P = 0.002). In this group, no significant difference is observed between ILP and FL97. Incidentally, both ILP and FL97 show better performance compared with SN12 (at P = 0.045 and 0.044, respectively). Consequently, the null hypotheses H2a and H2b are accepted for problems without setup, whereas the rest are rejected for the same category of problems. Conversely, all null hypotheses except for H2d are rejected for problems with setup.

These results support our conjecture that considering the complexity and capacity shortages in product mix decisions positively affects real throughput during implementation. It was also revealed that the difference is partly influenced by the setup requirements of the problem. Such differences can be interpreted in light of the fact that setup impacts batching decisions made during detailed scheduling.

Statistical analysis indicates that the new algorithm leads to significantly enhanced performance during implementation in dynamic operations.

6.3. Complexity of Dynamic Operations

Let us look at the role of setup operations and some of the challenges in dynamic operations. Setup time can impact batch-size determination, the priority rules of operations, and the timeliness of throughput. We faced a sequence-dependent setup situation, which occurs when a machine's setup time for a particular job is determined not only by that job but also by the previous job that the machine is currently set up for. Moreover, factors such as delays in move times or delays in starting the operations schedule mean that there will be blockage and starvation of resources that is not considered in the static schedule. The size and complexity of the operations grow rapidly as the number of jobs and machines in the problem increases.

Conversely, the common batch-size rule in the TOC is to set the batch size equal to the demand in order to save working time. However, in a complex job-shop system, a reduction of the number of setups in a constraint may not
be the proper solution for enhancing the capacity of the constraint. The overall gain of throughput with small batch sizes, along with multiple setups, does more than save time and capacity in the constraint: this strategy can minimize the WIP and enhance profit. The resource assignment priorities based on the CM should not necessarily be followed when reaching due dates. For example, in a real production environment, there may be two products, X and Y, which are competing to use the same constraint simultaneously. Following the priority based on the CM in this example, product X should be processed first. However, after this operation, product X is not complete and needs more operations, whereas assigning this resource to product Y, a product with low priority, leads to the completion of product Y, which can then be shipped to the customer. Thus, based on the goal of meeting the due date, it is preferable to expedite the WIP to be processed and shipped as soon as possible, even though this priority conflicts with the original priority based on the CM. In other words, while product X has priority over product Y, the remaining time is not adequate to complete the production of product X. The highest priority should be switched to product Y, which needs less time to reach the end of the production line.

6.4. Approach Discussion

From the inception of TOC, there has been a debate about the difference between the ILP and TOC throughputs and the focus of their computations [20, 4]. Luebbe and Finch [20] mentioned that "Many believe that TOC does not contribute anything new because you can accomplish virtually the same thing with linear programming." Of course, the common argument is that TOC is a production philosophy and is beyond the scope of an optimization technique such as linear programming. Although this is a reasonable and valid response to the argument, we would like to discuss it briefly from another angle. The main motivation of this research was that the actual throughputs of a method should be the basis of its effectiveness and performance in dynamic operations, not its throughputs in a static mode of operations. What management would like to see is the level of accuracy between the MPS and the actual throughputs. Satisfying demand and meeting deadlines are challenging and crucial factors for success. From this point of view, the ILP approach may not be a solid and strong solution: determination of the real bottleneck and of the hidden impact of other factors, such as work in process, sequence of operations, and number of setups, is outside the scope of its computation. As a result, for real and complex operations, there is a difference between its outcome and the real throughputs. Other heuristic approaches have also overlooked this vital point and have proven their performance only in a static mode of operations. The results analysis shows that our proposed approach performs well for dynamic operations, especially when setup operations are involved. In real operations, setup operations are an integral part of operations; hence, methods with more accuracy with regard to real throughputs are demanded.

7. MANAGERIAL INSIGHTS AND CONCLUDING REMARKS

We made a first attempt to address the complex nature of product-mix decisions by taking into account factors (complexity of operations and capacity shortage) previously overlooked in the literature regarding MPS design under TOC. The simplistic interpretation of capacity shortage in a static mode and the omission of operations complexity in the extant literature may have an adverse impact on operations performance during implementation. Complexity of operations, such as the sequence of processes and the hidden role of queues, is largely absent from current concepts and modeling approaches.

A novel algorithm called COLOMAPS was developed and compared with two benchmark algorithms and ILP in a wide range of problem instances. The results indicated that the COLOMAPS algorithm leads to better throughput during the execution of the MPS. Until now, the common perception of validation, comparison, and effectiveness of MPS methods in the literature was based on static operations. Dynamic operations, especially in job-shop systems with inherently complex natures, may lead to shifting bottlenecks. As a result, the performance evaluation of any planning method should be determined via actual throughput during the implementation of a production plan. We emphasize that the vision of static optimization for evaluating and planning operations in a dynamic environment should be re-evaluated. It was further shown that complexity of operations, level of capacity shortage, and setup requirements are the three main factors that influence dynamic performance.

The contributions of this research are summarized as follows:

• The common method of identifying constraints, which is based on the difference between available and required capacities in a static situation, is not effective for dynamic operations. These initial constraints may not be real or effective constraints during operations, especially during complex operations.
• Current approaches based on optimization in dynamic environments are shown to lack accuracy. We demonstrate that current concept and modelling approaches do not attend adequately to the complexity of operations, and we recommend revisions in the vision of optimization for evaluating and planning operations in a dynamic environment.
• The current approach for evaluating MPS design techniques was challenged, and it was concluded that models optimized in a static situation lack validity and robustness in complex operations.
• Vital factors (e.g., the sequence of operations) in product-mix decisions were identified and discussed. We introduced these factors to define the complexity level of operations. Current methods ignore these important factors and their impact on the throughput.
• A new algorithm was proposed for making product mix decisions that captures the complexity of operations and capacity shortages to increase the likelihood of realizing the actual throughput during implementation.
• Empirical evidence based on a wide range of problem scenarios was provided in support of the outperformance of the COLOMAPS algorithm on problem sets involving setup operations. This contribution refers to our approach to the experimental design and the developed model of complexity computation.

The current research can be extended in a number of ways. Development of analytical models for product-mix decisions considering the complex nature of the problem calls for further research to advance current theories and solution techniques. Real and complex cases with different levels of complexity may also enhance the contributions of this research. The instances wherein one of the two factors, that is, complexity of operations or capacity shortage, is the main source of complexity need further research. In these instances, we face a skewed situation wherein the complexity is introduced by one factor. For instance, it is interesting to examine the dynamic throughputs in situations where complexity is mainly due to the involvement of several setup operations rather than the level of capacity shortage or the sequence of operations.

REFERENCES

[1] M.B. Aryanezhad and A.R. Komijan, An improved algorithm for optimising product mix under the Theory of Constraints, Intl J Prod Res 42 (2004), 4221–4233.
[2] J.B. Atwater and S.S. Chakravorty, A study of the utilization of capacity constrained resources in drum-buffer-rope systems, Prod Oper Manage 11 (2002), 259–273.
[3] S.A. Badri, M. Ghazanfari, and K. Shahanaghi, A multi-criteria decision-making approach to solve the product mix problem with interval parameters based on the theory of constraints, Intl J Adv Manufact Technol 70 (2014), 1073–1080.
[4] J. Balakrishnan and C.H. Cheng, Theory of constraints and linear programming: A reexamination, Intl J Prod Res 38 (2000), 1459–1463.
[5] A. Bhattacharya, P. Vasant, B. Sarkar, and S.K. Mukherjee, A fully fuzzified, intelligent theory-of-constraints product-mix decision, Intl J Prod Res 46 (2008), 789–815.
[6] A. Bhattacharya and P. Vasant, Soft-sensing of level of satisfaction in TOC product-mix decision heuristic using robust fuzzy-LP, Eur J Oper Res 177 (2007), 55–70.
[7] J. Cohen, Statistical power analysis for the behavioral sciences, Lawrence Erlbaum Associates, Hillsdale, NJ, 1988.
[8] R.E. Fox and E.M. Goldratt, The race, North River Press, New York, 1986.
[9] L.D. Fredendall and B.R. Lea, Improving the product mix heuristic in the theory of constraints, Intl J Prod Res 35 (1997), 1535–1544.
[10] L.D. Fredendall, D. Ojha, and J.W. Patterson, Concerning the theory of workload control, Eur J Oper Res 201 (2010), 99–111.
[11] E.M. Goldratt, The Haystack syndrome, North River Press, Croton-on-Hudson, NY, 1990.
[12] T.C. Hsu and S.H. Chung, The TOC-based algorithm for solving product mix problems, Prod Plann Control 9 (1998), 36–46.
[13] W. Kelton, R. Sadowski, and N. Swets, Simulation with Arena, 5th ed., McGraw-Hill, 2009.
[14] A.R. Komijan, M.B. Aryanezhad, and A. Makui, A new heuristic approach to solve product mix problems in a multi-bottleneck system, J Indus Eng Intl 5 (2009), 46–57.
[15] C. Koulamas and S.S. Panwalkar, A note on combined job selection and sequencing problems, Naval Res Log 60 (2013), 449–453.
[16] P. Kouvelis and Z. Tian, Flexible capacity investments and product mix: Optimal decisions and value of postponement options, Prod Oper Manage 23 (2014), 861–876.
[17] T.N. Lee and G. Plenert, Optimizing theory of constraints when new product alternatives exist, Prod Inventory Manage J 34 (1993), 51–57.
[18] K. Lee, L. Lei, and M. Pinedo, Production scheduling with history-dependent setup times, Naval Res Log 59 (2012), 58–68.
[19] A. Linhares, Theory of constraints and the combinatorial complexity of the product-mix decisions, Intl J Prod Econ 121 (2009), 121–129.
[20] R. Luebbe and B. Finch, Theory of constraints and linear programming: A comparison, Intl J Prod Res 30 (1992), 1471–1478.
[21] C.J. Mayday, Proper use of constraint management, Prod Inventory Manage J 35 (1994), 84.
[22] N. Mishra, M. Tiwari, R. Shankar, and F. Chan, Hybrid tabu-simulated annealing based approach to solve multi-constraint product mix decision problem, Expert Syst Appl 29 (2005), 446–454.
[23] G.C. Onwubolu, Tabu search-based algorithm for the TOC product mix decision, Intl J Prod Res 39 (2001), 2065–2076.
[24] G.C. Onwubolu and M.A. Mutingi, Optimising the multiple constrained resources product mix problem using genetic algorithms, Intl J Prod Res 39 (2001a), 1897–1910.
[25] G.C. Onwubolu and M.A. Mutingi, Genetic algorithm approach to the theory of constraints product mix problems, Prod Plann Control 12 (2001b), 21–27.
[26] J. Pallant, SPSS survival manual: A step by step guide to data analysis using SPSS, 4th ed., McGraw-Hill, England, 2010.
[27] M.C. Patterson, The product-mix decision: A comparison of theory of constraints and labor-based management accounting, Prod Inventory Manage J 33 (1992), 80–85.
[28] G. Plenert, Optimized theory of constraints when multiple constrained resources exist, Eur J Oper Res 70 (1993), 126–133.
[29] A.J. Posnack, Theory of constraints: Improper applications yield improper conclusions, Prod Inventory Manage J 35 (1994), 85–86.
[30] D. Quadt and H. Kuhn, Capacitated lot-sizing and scheduling with parallel machines, back-orders, and setup carry-over, Naval Res Log 56 (2009), 366–384.
[31] Singh, S. Kumar, and M.K. Tiwari, Psycho-clonal based approach to solve a TOC product mix decision problem, Intl J Adv Manufact Technol 29 (2006), 1194–1202.
[32] V.A. Sobreiro and M.S. Nagano, A review and evaluation on constructive heuristics to optimise product mix based on the Theory of Constraints, Intl J Prod Res 50 (2012), 5936–5948.
[33] M.L. Spearman, On the theory of constraints and the goal system, Prod Oper Manage 6 (1997), 28–33.
[34] B.G. Tabachnick and L.S. Fidell, Using multivariate statistics, 5th ed., Pearson, Boston, 2007.
[35] J.Q. Wang, S.D. Sun, S.B. Si, and H.A. Yang, Theory of Constraints product mix optimisation based on immune algorithm, Intl J Prod Res 47 (2009), 4521–4543.
[36] B. Zheng, Y.C. Gao, and Y. Wang, The product-mix optimization with outside processing based on theory of constraints oriented cloud manufacturing, Appl Mech Mater 121–126 (2011), 1306–1310.
APPENDIX: FURTHER DETAILS OF THE INDUSTRIAL CASE STUDY
Table 7. Processing and setup times of operations—products A and C.

Product A:
Routing   Processing Time (Mean, Standard Deviation)   Setup Time
8         Normal (0.5, 0.14)                           Gamma (20, 5)
7         Normal (0.5, 0.19)                           Gamma (122, 24)
7         Normal (1.5, 0.23)                           Gamma (115, 36)
3         Normal (1, 0.25)                             Gamma (27, 5)
5         Normal (0.5, 0.11)                           Gamma (32, 8)
3         Normal (0.5, 0.21)                           Gamma (37, 15)
5         Normal (0.5, 0.12)                           Gamma (26, 11)
4         Normal (0.5, 0.21)                           Gamma (33, 14)
4         Normal (0.5, 0.17)                           Gamma (20, 5)
5         Normal (0.5, 0.22)                           Gamma (20, 5)
2         Normal (0.4, 0.11)                           Gamma (20, 5)
2         Normal (0.5, 0.15)                           Gamma (20, 5)
7         Normal (1, 0.27)                             Gamma (132, 29)
7         Normal (1, 0.32)                             Gamma (112, 44)

Product C:
Routing   Processing Time (Mean, Standard Deviation)   Setup Time
9         Normal (0.5, 0.13)                           Gamma (16, 7)
12        Normal (1.35, 0.28)                          Gamma (22, 9)
4         Normal (0.5, 0.19)                           Gamma (30, 11)
6         Normal (0.85, 0.23)                          Gamma (47, 22)
6         Normal (0.75, 0.31)                          Gamma (42, 19)
12        Normal (2, 0.71)                             Gamma (48, 24)
16        Normal (0.3, 0.1)                            Gamma (28, 12)
6         Normal (1, 0.39)                             Gamma (46, 18)
12        Normal (1.4, 0.17)                           Gamma (22, 9)
13        Normal (4, 1.6)                              Gamma (32, 15)
13        Normal (5, 1.9)                              Gamma (29, 13)
14        Normal (2, 0.8)                              Gamma (126, 46)
15        Normal (1, 0.37)                             Gamma (64, 34)
16        Normal (0.3, 0.12)                           Gamma (35, 10)
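Each row of Table 7 can be read as a sampling recipe for the simulation: the operation's processing time is drawn from a normal distribution and its setup time from a gamma distribution. A sketch with Python's standard library, assuming for illustration only that the Gamma figures are shape and scale parameters (the paper does not spell out the parameterization used in the models):

```python
import random

random.seed(7)  # reproducible draws

# First operation of product A on machine 8 (first row of Table 7):
# processing ~ Normal(mean 0.5, sd 0.14); setup ~ Gamma(20, 5)
processing = max(0.0, random.normalvariate(0.5, 0.14))  # truncate at zero
setup = random.gammavariate(20, 5)  # alpha (shape), beta (scale) -- assumed

# Under the assumed shape/scale reading, this setup averages
# alpha * beta = 100 time units per changeover.
```

Truncating the normal draw at zero is a common guard in simulation models, since a negative processing time is meaningless.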
Table 8. Processing and setup times of operations—product B.

B1:
Routing   Processing Time (Mean, Standard Deviation)   Setup Time
1         Normal (0.25, 0.11)                          Gamma (22, 6)
5         Normal (0.3, 0.16)                           Gamma (33, 10)
6         Normal (0.5, 0.15)                           Gamma (51, 19)
5         Normal (1, 0.21)                             Gamma (30, 8)
5         Normal (0.45, 0.19)                          Gamma (30, 14)
4         Normal (0.3, 0.14)                           Gamma (45, 17)
6         Normal (0.5, 0.15)                           Gamma (80, 13)
5         Normal (0.55, 0.13)                          Gamma (30, 5)
4         Normal (0.3, 0.11)                           Gamma (32, 9)
3         Normal (1, 0.27)                             Gamma (28, 11)
1         Normal (1.05, 0.31)                          Gamma (33, 14)
10        Normal (0.5, 0.17)                           Gamma (55, 16)
11        Normal (3, 0.51)                             Gamma (31, 10)

B2:
Routing   Processing Time (Mean, Standard Deviation)   Setup Time
9         Normal (0.5, 0.12)                           Gamma (17, 5)
12        Normal (1.25, 0.26)                          Gamma (22, 8)
4         Normal (0.4, 0.15)                           Gamma (35, 7)
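Tables 7–9 list setup times per operation, but as Section 6.3 notes, the case study is sequence dependent: the changeover time on a machine depends on which product it was last set up for. A toy sketch with a hypothetical changeover matrix (the values are invented, not from the case) shows why batching decisions interact with setup:

```python
# Hypothetical sequence-dependent setup matrix for one machine:
# setup[prev][next] is the changeover time from product prev to product next.
setup = {
    "A": {"A": 0, "B": 30, "C": 45},
    "B": {"A": 25, "B": 0, "C": 40},
    "C": {"A": 50, "B": 20, "C": 0},
}

def total_setup(sequence):
    """Sum changeover times over consecutive jobs in a sequence."""
    return sum(setup[p][n] for p, n in zip(sequence, sequence[1:]))

# Batching identical products avoids repeated changeovers:
interleaved = total_setup(["A", "B", "A", "B"])  # 30 + 25 + 30 = 85
batched = total_setup(["A", "A", "B", "B"])      # 0 + 30 + 0 = 30
```

The gap between the two totals is capacity that a statically identified constraint silently loses or gains depending on the detailed schedule, which is one reason static bottleneck identification can mislead.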
Table 9. Processing and setup times of operations—products D & E.

Product D:
Routing   Processing Time (Mean, Standard Deviation)   Setup Time
1         Normal (1, 0.11)                             Gamma (14, 3)
2         Normal (2, 0.13)                             Gamma (15, 6)
4         Normal (0.3, 0.09)                           Gamma (20, 10)
5         Normal (1, 0.23)                             Gamma (30, 12)
4         Normal (0.2, 0.11)                           Gamma (22, 5)
5         Normal (1, 0.21)                             Gamma (28, 4)
7         Normal (0.75, 0.12)                          Gamma (31, 8)
8         Normal (2, 0.29)                             Gamma (15, 3)
13        Normal (1, 0.18)                             Gamma (21, 6)

Product E:
Routing   Processing Time (Mean, Standard Deviation)   Setup Time
1         Normal (0.5, 0.1)                            Gamma (13, 3)
4         Normal (0.5, 0.1)                            Gamma (20, 5)
5         Normal (0.5, 0.06)                           Gamma (35, 10)
4         Normal (1, 0.08)                             Gamma (37, 20)
5         Normal (0.5, 0.1)                            Gamma (32, 10)
4         Normal (0.5, 0.1)                            Gamma (20, 9)
6         Normal (1, 0.12)                             Gamma (30, 8)
7         Normal (1, 0.09)                             Gamma (15, 5)
Table 10. The processing times and capacities.

          Machine (resource) number                                                                    Market    Contribution
Product     1      2      3      4      5      6      7      8     9    10    11     12    13   14   15   16     Demand    Margin
A           1    0.9    1.5      1    1.5      0      1    0.5     0     0     0      0     0    0    0    0      2,000     1,500
B         0.5      0      1      1    2.3      1    0.5      0   0.5   0.5     3      0     0    0    0    0      5,000     2,200
C         0.5      0      0    0.5      0    2.6      2      0   0.5     0     0   4.75     9    2    1  0.6      2,000     3,000
D           1      2      0    0.5      2      0   0.75      2     0     0     0      0     1    0    0    0      3,000     1,700
E         0.5      0      0      2      1      1      1      0     0     0     0      0     0    0    0    0      3,000     2,800

Available Capacity:   8,000  17,000  8,640  15,700  24,520  13,500  10,500  16,920  7,200  3,360  20,000  11,520  21,200  4,800  2,400  1,440
Required Capacity:   10,000   7,800  8,000  15,500  23,500  13,200  13,750   7,000  3,500  2,500  15,000   9,500  21,000  4,000  2,000  1,200
Capacity Difference: −2,000   9,200    640     200   1,020     300  −3,250   9,920  3,700    860   5,000   2,020     200    800    400    240
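The last three rows of Table 10 embody the static constraint-identification rule that this paper challenges: a machine is flagged as a constraint when its required capacity exceeds its available capacity. A sketch over a subset of the Table 10 figures:

```python
# Available and required capacities for a subset of Table 10's machines
available = {1: 8000, 2: 17000, 7: 10500, 8: 16920}
required = {1: 10000, 2: 7800, 7: 13750, 8: 7000}

# Static TOC rule: a negative capacity difference marks a constraint
difference = {m: available[m] - required[m] for m in available}
constraints = sorted(m for m, d in difference.items() if d < 0)
# Machines 1 and 7 come out overloaded, matching Table 10's bottom row
```

As the paper argues, these statically flagged machines need not remain the effective bottlenecks once queues, setups, and shifting workloads appear during implementation.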