Cutting Stock Problems
H. Ben Amor
J.M. Valério de Carvalho
G–2004–30
April 2004
The texts published in the HEC research report series are the sole responsibility of their authors. The publication of these research reports is supported by a grant from the Fonds québécois de la recherche sur la nature et les technologies.
Cutting Stock Problems
H. Ben Amor
GERAD and Département de mathématiques et génie industriel
École Polytechnique
C.P.6079, succ. Centre-ville, Montréal, H3C 3A7 Canada
Hatem.Ben.Amor@gerad.ca
April, 2004
1 Introduction
The cutting stock problem (CSP) was one of the problems identified by Kantorovich in his
paper entitled “Mathematical methods of organizing and planning production”, that first
appeared in 1939, in Russian, and was later published in Management Science (1960). The
problem consists of determining the best way of cutting a set of large objects into smaller
items. There are large potential economic savings resulting from the solution of this kind
of problem. CSPs are encountered in a wide variety of industrial applications, such as in
the steel, wood, glass and paper industries, and also in service sector applications, such as
cargo loading and logistics.
In this paper, we focus on one-dimensional problems. Since Gilmore and Gomory
proposed the use of column generation (CG) to solve the linear programming (LP) relaxation
of their formulation (Gilmore and Gomory (1961, 1963)), several solution approaches for this
problem have been based on algorithms using column generation complemented by heuristics. The
nineties mark a turning point in this field: several algorithms combining column generation
and branch and bound were proposed to solve the CSP. It was also recognized that
those algorithms are useful to solve instances with many items of many different
sizes, yielding a low average demand, a problem that is usually denoted as the bin packing
problem (BPP) or the Binary Cutting Stock Problem (BCSP). In this problem, items are
assigned to bins in such a way that the capacity of the bins is not exceeded and the number
of bins is minimized.
The one-dimensional CSP and BPP are essentially the same problem, even though,
under Dyckhoff’s system (Dyckhoff (1990)), they were classified as 1/V/I/R and 1/V/I/M,
respectively. The reason that possibly motivated the use of a different classification for
them was that different solution methods had been traditionally used to address them.
Literature reviews often include not only the CSP and the BPP, but also other problems
closely related to them, such as knapsack, vehicle loading and pallet loading problems, as well as
many others. Examples are Sweeney and Paternoster (1992) and Dowsland and Dowsland
(1992). The book by Dyckhoff and Finke (1992) identifies more than 700 papers on Cutting
and Packing, and classifies them. An annotated bibliography was proposed by Dyckhoff
et al. (1997). Many papers refer to case studies where column generation is used to get
solutions for real world applications in the aluminium industry (Stadtler (1990); Helmberg
(1995)), in the steel industry (Valério de Carvalho and Guimarães Rodrigues (1995)), in
the paper industry (Goulimis (1990)), and in the forest industry (Sessions and Guanda
(1988), Sessions and Garland (1989)).
This paper is organized as follows. In Section 2, we review the cutting stock models
introduced by Kantorovich and by Gilmore and Gomory, a model based on arc-flows, and
an acyclic network based vehicle routing problem (VRP) model. We also comment on
the quality of the bounds that result from their linear programming relaxations. Models
with strong linear programming relaxations are of crucial importance in solving integer
programming problems. The Dantzig-Wolfe decomposition is a tool that can be used to
obtain stronger models when extended to integer programming problems. We present
some concepts from this method and its application to Cutting Stock. In Section 3, we
address the integer solution of cutting stock problems, using heuristics and column gen-
eration combined with branch and bound. We review the main issues, several branching
schemes, and comment on computational results. In Section 4, we show that stabilization
techniques can improve the behaviour of column generation for solving the linear program-
ming relaxation of the Gilmore-Gomory model. In Section 5, we present some extensions
of the one-dimensional cutting stock problem and, in Section 6, some directions for future
research.
Kantorovich's formulation of the CSP is the following:

min Σ_{k=1}^{K} x_{k0}    (1)

subj. to Σ_{k=1}^{K} x_{ki} ≥ b_i ,  i = 1, . . . , m    (2)

Σ_{i=1}^{m} w_i x_{ki} ≤ W x_{k0} ,  k = 1, . . . , K    (3)

x_{k0} = 0 or 1,  k = 1, . . . , K    (4)

x_{ki} ≥ 0 and integer,  i = 1, . . . , m, k = 1, . . . , K    (5)
where K is a known upper bound on the number of rolls needed, x_{k0} = 1 if roll k is used,
and 0 otherwise, and x_{ki} is the number of times item i is cut in roll k. Constraints (2)
enforce the satisfaction of the demand for the items, and constraints (3) guarantee that
the items cut in a roll do not exceed its capacity. The latter group is usually denoted
as the knapsack constraints. Indeed, when x_{k0} = 1, (3) is exactly a knapsack constraint
with capacity W; and when x_{k0} = 0, (3) is a knapsack constraint with capacity 0 (i.e. all
variables x_{ki} equal 0 and the kth bin is not used).
A lower bound for the integer optimum can be obtained from the solution of the linear
programming relaxation, which results from substituting the two last constraints by 0 ≤
x_{k0} ≤ 1 and x_{ki} ≥ 0, respectively. This bound can be very weak. It is equal to the minimum
amount of space that is necessary to accommodate all the items, ⌈Σ_{i=1}^{m} b_i w_i / W⌉, and can
be very poor for instances with large waste. In the limit, as W increases and all the items
have a size w_i = ⌊W/2⌋ + 1, the integer optimal value is Σ_{i=1}^{m} b_i whereas the lower bound
approaches (1/2) Σ_{i=1}^{m} b_i (Martello and Toth (1990)). This is a drawback of the model.
However, computational experiments show that the most difficult instances are those with
very small loss (i.e. their linear relaxation optimal value is very difficult to obtain), for which
the lower bound is equal to the optimal integer objective. Furthermore, its solution space has
symmetry. Different solutions to the model, with the same cutting patterns swapped
between different rolls, will correspond to the same global cutting solution. This means that
any efficient branching has to make decisions independently from k. Branching directly on
individual x_{k0}-variables or x_{ki}-variables will not eliminate the current fractional solution.
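For a concrete illustration (not taken from the paper): with W = 100 and a single item size w_1 = 51 with demand b_1 = 3, the bound is ⌈3 · 51/100⌉ = 2, while 3 rolls are needed, since no two items fit together in a roll.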
Define P as the set of all feasible patterns and let λ_p be a decision variable that denotes
the number of rolls cut according to cutting pattern p, for all p ∈ P. The CSP is modelled
as follows:

min Σ_{p∈P} λ_p    (8)

subj. to Σ_{p∈P} a_{pi} λ_p ≥ b_i ,  i = 1, 2, . . . , m    (9)

λ_p ≥ 0 and integer,  ∀p ∈ P.    (10)
The number of columns in formulation (8)–(10) may be very large even for moderately
sized problems. As it is usually impractical to enumerate all the columns, Gilmore and
Gomory proposed column generation to solve its LP relaxation (Gilmore and Gomory
(1961)). The problem is initialized with a set of cutting patterns (for instance, one for
each item i, with ⌊W/w_i⌋ copies of that item), and the dual information is used to price
out new columns for the restricted master problem. Let π = (π_1 , . . . , π_m ) be the dual
variables associated with the constraints (9), and π̄ the dual optimal solution at a given
column generation iteration. The “most attractive” column is given by the solution of the
following knapsack problem:

max Σ_{i=1}^{m} π̄_i a_i    (11)

Σ_{i=1}^{m} a_i w_i ≤ W    (12)

a_i ≥ 0 and integer,    (13)

which corresponds to finding the column A_min with the minimum reduced cost c̄_min =
1 − π̄A_min = min_{p∈P} (1 − π̄A^p). If the reduced cost is negative, the column is added to the
restricted master problem, which is re-optimized; otherwise, the solution is optimal (for the
linear relaxation).
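The pricing problem (11)–(13) is an unbounded knapsack that can be solved by dynamic programming over the capacities 0, . . . , W. The following is a minimal sketch; the function name and data are illustrative, not from the paper.

```python
def price_pattern(widths, duals, W):
    """Solve max sum(duals[i]*a[i]) s.t. sum(widths[i]*a[i]) <= W, a integer >= 0,
    and return (reduced cost 1 - pi_bar*A_min, most attractive pattern a)."""
    m = len(widths)
    best = [0.0] * (W + 1)            # best dual value reachable with capacity c
    choice = [-1] * (W + 1)           # item added at capacity c; -1 = waste one unit
    for c in range(1, W + 1):
        best[c], choice[c] = best[c - 1], -1
        for i in range(m):
            if widths[i] <= c and best[c - widths[i]] + duals[i] > best[c]:
                best[c], choice[c] = best[c - widths[i]] + duals[i], i
    a, c = [0] * m, W                 # recover the pattern by backtracking
    while c > 0:
        if choice[c] == -1:
            c -= 1
        else:
            a[choice[c]] += 1
            c -= widths[choice[c]]
    return 1.0 - best[W], a

# e.g. widths (3, 2), W = 5, duals (0.6, 0.4): returns (0.0, [1, 1]),
# i.e. no column with negative reduced cost exists and the LP is optimal.
print(price_pattern([3, 2], [0.6, 0.4], 5))
```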
A lower bound can be easily calculated at any iteration, using duality. In matrix form,
the dual of the CSP is max{πb : πA^p ≤ 1, ∀p ∈ P, π ≥ 0}. As seen, 1 − π̄A_min ≤ 1 − π̄A^p , ∀p ∈ P,
which is equivalent to (π̄/π̄A_min )A^p ≤ 1, ∀p ∈ P. This means that (π̄/π̄A_min ) is a feasible
solution to the dual of the CSP (with all valid columns enumerated). The value of this
feasible dual solution, (π̄/π̄A_min )b, is a lower bound on the value of the primal problem,
and is equal to the value of the current optimal solution, π̄b, divided by the optimal value
of the knapsack subproblem, π̄A_min (Farley (1990)). This bound can be used to cut off
the tails of column generation processes. However, even if this bound is better than the
lagrangean bound b^T π̄ + c̄_min Σ_{p∈P} λ_p (where c̄_min = 1 − π̄A_min is the minimum reduced
cost computed at some CG iteration), it does not cut more than one iteration in practice
(Ben Amor (1997)). This may be explained by the following. Farley's lower bound can be
written as z̄/(1 − c̄_min ), where z̄ is the optimal value of the restricted master problem at
the current CG iteration (it can also be obtained by substituting the optimal value of the
linear relaxation of the CSP, z_lp , for Σ_{p∈P} λ_p in the lagrangean bound expression above, see Ben
Amor (1997)). In order to have z̄/(1 − c̄_min ) close to z̄, c̄_min should be close to 0, which
usually happens during the very last CG iterations. This lower bound may be improved
by considering items that need to be cut in different patterns separately (see Ben Amor
(1997) for details).
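As a hypothetical numerical illustration: if the restricted master problem currently has value z̄ = 105.4 and the pricing problem returns c̄_min = −0.054 (i.e. π̄A_min = 1.054), Farley's bound is 105.4/1.054 = 100, so no solution of the LP relaxation can have a value below 100.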
The bound given by the LP relaxation of the Gilmore-Gomory model is known to be very
tight. Most one-dimensional cutting stock instances have gaps smaller than one, in which case
we say that the instance has the integer round-up property, but there are instances with
gaps equal to 1 (Marcotte (1985, 1986)), and larger gaps have been constructed (Rietz and Scheithauer (2002)).
It has been conjectured that all instances have gaps smaller than 2, a property denoted
as the modified integer round-up property (Scheithauer and Terno (1995)).
Consider an integer programming problem of the form

min cx (14)
subj. to Ax = b (15)
x∈X (16)
x≥0 (17)
x integer, (18)
where X is a nonempty convex rational polyhedron. Let S be the set defined by the
constraints (16)–(18), i.e.

S = {x : x ∈ X, x ≥ 0, x integer}.
There are two approaches for decomposing (14)-(18) in order to obtain an equivalent
IP formulation. In the convexification approach, constraints (16)-(17) are replaced with
the tighter constraint
x ∈ C = conv(S)
while keeping the same set of (integer) feasible solutions. Any x ∈ C is written as the
sum of a convex combination of (integer) extreme points of C and a nonnegative linear
combination of its (integer) extreme rays. The corresponding equation replaces constraints
(16)-(17) and x is substituted in (14) and (15). We obtain a new IP formulation for the
original problem in terms of new variables λ, but integrality is required for original variables
x. If integrality is required for variables λ, optimality and even feasibility may be lost (Ben
Amor (1997)).
In the Discretization approach, based on a discretization theorem of Nemhauser and
Wolsey (1988), any x satisfying (16)-(18) is expressed as the sum of a binary convex combi-
nation of a finite set of integer points of S and a nonnegative integer linear combination of
(integer) extreme rays of C = conv(S). Substituting in (14)-(18), we obtain an IP formu-
lation of the original problem in terms of new variables λ associated to the set of columns
(points and rays). Integrality may equivalently be required for original variables x or new
variables λ (Ben Amor (1997)).
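A small example, not taken from the paper, illustrates the difference. Let S = {x ∈ Z : 0 ≤ x ≤ 2}, so that C = conv(S) = [0, 2] has extreme points 0 and 2. In the convexification approach, the feasible point x = 1 can only be written as x = (1/2) · 0 + (1/2) · 2, i.e. with λ = (1/2, 1/2); requiring λ to be integer would leave only x = 0 and x = 2, and the feasible point x = 1 would be lost. In the discretization approach, the columns are the integer points 0, 1 and 2 themselves, and requiring λ to be binary is equivalent to requiring x to be integer.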
Remarks
1. The set of rays can be chosen to be identical for both IP formulations that are obtained
via decomposition, but the sets of points are generally different. However when the
elements of S have all their components in {0, 1}, both formulations have the same
sets of columns.
2. Linear relaxations (LR) are obtained by dropping the integrality requirements. First, since
integrality requirements are partially taken into account inside the columns, both
formulations produce a stronger LP bound than the linear relaxation of the original
formulation (14)–(17). However, when the set defined by (16)–(17) has the integrality
property, the original and decomposed formulations have the same LP bound (Geoffrion (1974)).
Because the number of columns is very large and all needed columns may not be
known in advance, linear relaxations are solved by column generation. Let π be the
dual solution at optimality of the restricted master at some CG iteration. For the
convexification approach, the subproblem is
min (c − π T A)x
subj. to x ∈ C.
For the discretization approach, the subproblem is
min (c − π T A)x
subj. to x ∈ S.
In both cases, the attractive columns that are successively inserted in the master
problem correspond to optimal solutions of the subproblem, being extreme points or
extreme rays of C. Hence the two IP formulations have identical LP bounds.
3. For the convexification approach, integrality is required for the original variables x.
Hence, branching schemes should be developed for these variables. This is somewhat
natural because such decisions are more likely to preserve the structure of the
subproblem and to be more easily enforced in it. For the discretization approach,
integrality may be required either for the original variables x or for the new variables λ.
But efficient branching schemes should be based on the original variables, for the same
reasons, or at least on the subproblem formulation. Indeed, for a branching scheme
based on the new variables λ, one must find a way to “express” decisions in terms of the
original variables in order to be able to enforce them in the subproblem. Branching directly
on the new variables is intractable in practice due to the very large number of columns
and, especially, because forbidding the generation of any column by the subproblem
may require obtaining the k-th best solution of the subproblem at depth k of the
branch-and-bound tree.
4. Dantzig-Wolfe decomposition is well suited to problems with a block-angular structure.
Considering each block separately, giving rise to as many subproblems as there are blocks,
is generally preferred to considering only one subproblem. The main
reasons are that the number of possible columns in the former case is much smaller than
in the latter case, and that having a column for each block in the master problem
allows implicitly taking into account many combinations of these columns, whereas
having one subproblem enforces the use of only one combination of these columns in
the master problem (Ben Amor (2002)).
When all subproblems are identical, only one subproblem is solved at each column
generation iteration and the master problem size can be significantly reduced by
aggregating variables corresponding to different commodities (Vanderbeck (2000b),
Ben Amor (2002)). We illustrate this issue in the next section while decomposing
the Kantorovitch formulation.
5. Enforcing decisions in the subproblem is a main issue when developing any branching
scheme for the new variables. In fact, the structure of the subproblem may be highly
affected by branching decisions, and the solution algorithms should be, at least, adapted.
This is the case when branching decisions are made with respect to the variables λ and
the columns needed for an integer optimal solution are not all extreme points or rays
of the subproblem. Branching on the original variables x faces another kind of difficulty
when many identical subproblems are considered. Aggregation of λ-variables breaks
the symmetry in the master problem. It is of crucial importance to build a branching
scheme that breaks, or at least greatly reduces, the symmetry between subproblems. In
the binary case, i.e. when all λ variables take 0–1 values, the right-hand sides of the constraints
are binary, and the subproblem is solved as a 0–1 problem, this can easily be
done. However, in the general integer case, this is a difficult task and, to the best
of our understanding, the choice of the original formulation and the corresponding
subproblem is a key issue for the efficiency of branching schemes.
2.4 Decomposition of Kantorovitch formulation
Both the convexification and the discretization approaches can be applied to the CSP, giving rise to
slightly different formulations. The CSP has a block-diagonal structure and gives rise to K
subproblems. The set of integer feasible points of subproblem k is

S_k = {x = (x_0 , x_1 , . . . , x_m )^T : Σ_{i=1}^{m} x_i w_i ≤ W x_0 , x ≥ 0, x integer, x_0 ∈ {0, 1}}.
First, all sets S_k , and hence all subproblems, are identical (S_k = S, ∀k = 1, . . . , K). Note
that S is bounded. If the binary component x_0 of x takes value 0, all other components
equal 0; this is the empty pattern. When it takes value 1, any feasible pattern (including
the empty one) may be represented by the values of the other components. Because
the number of used rolls is minimized, the empty pattern will never be part of the solution
with component x_0 = 1. Let P_0 and P denote the set of all feasible patterns and the set
of nonempty patterns, respectively. In the same manner, we define Ω_0 and Ω as the set of all
feasible patterns and the set of nonempty patterns that are extreme points of C = conv(S).
The index 0 is associated with the empty pattern in P_0 and Ω_0 .
{x^p }_{p∈Ω_0} being the set of extreme points of C, any point x^k in C_k can be expressed
as a convex combination of these points:

x^k = Σ_{p∈Ω_0} λ_p^k x^p ,  Σ_{p∈Ω_0} λ_p^k = 1,  λ_p^k ≥ 0, ∀p ∈ Ω_0 .

Substituting in (1)–(5), we obtain the formulation (Ben Amor (1997), Vance (1998))

min Σ_{p∈Ω} Σ_{k=1}^{K} λ_p^k    (19)

subj. to Σ_{p∈Ω} Σ_{k=1}^{K} x_i^p λ_p^k ≥ b_i ,  i = 1, . . . , m    (20)

Σ_{p∈Ω_0} λ_p^k = 1,  k = 1, . . . , K    (21)

λ_p^k ≥ 0,  ∀p ∈ Ω_0 , k = 1, . . . , K    (22)

x^k = Σ_{p∈Ω} λ_p^k x^p ,  x^k integer,  k = 1, . . . , K.    (23)
A slightly different formulation can be obtained by considering the set of all points of S.
Any x^k ∈ S_k is written as a binary convex combination of all points in S:

x^k = Σ_{p∈P_0} λ_p^k x^p ,  Σ_{p∈P_0} λ_p^k = 1,  λ_p^k ∈ {0, 1}, ∀p ∈ P_0 .

Substituting in (1)–(5), we obtain the analogous formulation

min Σ_{p∈P} Σ_{k=1}^{K} λ_p^k    (24)

subj. to Σ_{p∈P} Σ_{k=1}^{K} x_i^p λ_p^k ≥ b_i ,  i = 1, . . . , m    (25)

Σ_{p∈P_0} λ_p^k = 1,  k = 1, . . . , K    (26)

λ_p^k ≥ 0,  ∀p ∈ P_0 , k = 1, . . . , K    (27)

λ_p^k ∈ {0, 1},  ∀p ∈ P_0 , k = 1, . . . , K.    (28)
Since all the subproblems are identical, the variables can be aggregated by defining

λ_p = Σ_{k=1}^{K} λ_p^k ,  p ∈ P_0 (resp. Ω_0 ).    (29)

Such a variable counts in fact the number of rolls cut according to pattern p, for p ∈ Ω, and the
number of unused rolls for p = 0. Replacing in either formulation, the index k disappears
from the objective function and the covering constraints. Computing a λ_p^k-solution from a
λ-solution is always possible: we have in fact a transportation problem with |Ω_0 | supply
nodes (with supply λ_p ) and K demand nodes (with demand 1), whose feasibility condition is
Σ_{p∈Ω_0} λ_p = K, which follows trivially from (29) and the convexity constraints. This argument
holds for both the continuous and the integer cases.
In particular, in the discretization approach, integrality may be required for variables
λp leading to the formulation of Gilmore and Gomory (8)-(10). In the convexification
approach, the obtained formulation

min Σ_{p∈Ω} λ_p    (30)

subj. to Σ_{p∈Ω} x_i^p λ_p ≥ b_i ,  i = 1, . . . , m    (31)

Σ_{p∈Ω_0} λ_p = K    (32)

λ_p ≥ 0,  ∀p ∈ Ω_0    (33)

λ_p = Σ_{k=1}^{K} λ_p^k ,  p ∈ Ω_0    (34)

λ_p^k ≥ 0,  ∀p ∈ Ω_0 , k = 1, . . . , K    (35)

x^k = Σ_{p∈Ω} λ_p^k x^p ,  x^k integer,  k = 1, . . . , K    (36)

is basically different from the one of Gilmore and Gomory. It considers a subset of the set
of columns used in (8)–(10), and integrality is still required for the variables x^k .
Linear relaxations obtained by dropping the integrality constraints are solved by column
generation. If π̄ is the dual optimal solution of the restricted master problem at some
column generation iteration, the reduced cost of any column x^p is 1 − Σ_{i=1}^{m} π̄_i x_i^p . Since
generating the most attractive column aims at finding the minimum reduced cost column,
that is, an integer feasible knapsack solution, the subproblem is formulated as

max Σ_{i=1}^{m} π̄_i x_i

Σ_{i=1}^{m} x_i w_i ≤ W

x_i ≥ 0 and integer.
In the arc-flow model (Valério de Carvalho (1999, 2002)), a roll of capacity W is represented by an
acyclic digraph G = (V, A) with vertices 0, 1, . . . , W, an arc (i, i + w_d ) for each item size w_d and
each valid position i, and loss arcs (i, i + 1); a cutting pattern corresponds to a path from vertex 0
to vertex W.

Example 1 Figure 1 shows the graph associated with an instance with bins of capacity W = 5
and items of sizes 3 and 2. In the same figure, a path is shown that corresponds
to 2 items of size 2 and 1 unit of loss.
Shapiro (1968) used this kind of formulation to model the knapsack problem as the
problem of determining the longest path in a directed graph. Likewise, it can be used
to model BPP. If a solution to a single bin corresponds to the flow of one unit between
vertices 0 and W, a path carrying a larger flow will correspond to using the same packing
solution in multiple bins.
By the flow decomposition properties (see Ahuja et al. (1993)), non-negative flows can be
represented by paths and cycles. The graph G is acyclic, and any flow can be decomposed
into directed paths connecting vertex 0 to vertex W. A solution with integer values of flow
in every arc can be transformed into an integer solution to the BPP.
The problem is formulated as that of determining the minimum flow between
vertex 0 and vertex W, with additional constraints enforcing that the sum of the flows in
the arcs of each order must be greater than or equal to the number of items of the given size.
Figure 1: The graph for W = 5 and items of sizes 3 and 2, and a path corresponding to 2 items of size 2 and 1 unit of loss.

Decision variables x_{ij} , associated with the arcs defined above, correspond to the number
of items of size j − i placed in any bin at a distance of i units from the beginning of the
bin. The number of variables is O(mW ). The model is as follows:

min z    (37)

subject to

Σ_{(i,j)∈A} x_{ij} − Σ_{(j,k)∈A} x_{jk} = −z if j = 0,  0 if j = 1, . . . , W − 1,  z if j = W    (38)

Σ_{(k,k+w_d)∈A} x_{k,k+w_d} ≥ b_d ,  d = 1, 2, . . . , m    (39)

x_{ij} ≥ 0 ,  ∀(i, j) ∈ A    (40)

x_{ij} integer ,  ∀(i, j) ∈ A    (41)
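A minimal sketch (with hypothetical helper names, not from the paper) of how the arc set A of this model can be generated, with item arcs (i, i + w_d) and unit loss arcs (i, i + 1), giving the O(mW) variables mentioned above; the symmetry-reduction criteria discussed later in the text are not applied here.

```python
def build_arcflow_arcs(W, widths):
    """Arc set of the arc-flow graph on vertices 0..W: one arc (i, i + w) per
    item width w and start position i, plus unit 'loss' arcs (i, i + 1)."""
    arcs = set()
    for w in widths:
        for i in range(W - w + 1):
            arcs.add((i, i + w))      # item arcs
    for i in range(W):
        arcs.add((i, i + 1))          # loss arcs
    return sorted(arcs)

# For W = 5 and widths {3, 2} this yields the arcs shown in Figure 1.
print(build_arcflow_arcs(5, [3, 2]))
```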
In the VRP model, each roll corresponds to a commodity k ∈ K that is routed through an
acyclic network (N, A_k ) from an origin o(k) to a destination d(k); each copy v = 1, . . . , n_i of
item i is represented by a node i_v , and the binary variable x_{ij}^k indicates whether commodity k
uses arc (i, j) ∈ A_k .

Figure 2: The acyclic network of the VRP model, with an origin node o, a destination node d, and one node per copy of each item.

The model is as follows:

min Σ_{k∈K} Σ_{j=1}^{m} x_{o(k)j_1}^k    (42)

subject to

Σ_{k∈K} Σ_{v=1}^{n_i} Σ_{(j,i_v)∈A_k} x_{j i_v}^k ≥ b_i ,  i = 1, . . . , m    (43)

Σ_{(o(k),j)∈A_k} x_{o(k)j}^k = 1,  k ∈ K    (44)

Σ_{(i,j)∈A_k} x_{ij}^k − Σ_{(j,i)∈A_k} x_{ji}^k = 0,  i ∈ N, k ∈ K    (45)

Σ_{(i,d(k))∈A_k} x_{i d(k)}^k = 1,  k ∈ K    (46)

Σ_{i=1}^{m} w_i (Σ_{v=1}^{n_i} Σ_{(j,i_v)∈A_k} x_{j i_v}^k ) ≤ W (Σ_{j=1}^{m} x_{o(k)j_1}^k ),  k ∈ K    (47)

x_{ij}^k ∈ {0, 1},  ∀(i, j) ∈ A_k , k ∈ K    (48)
The objective function (42) aims at minimizing the number of nonempty routes (patterns).
Each constraint (43) requires that the number of items i cut on all rolls satisfies the demand
b_i , and constraints (44)–(46) enforce flow conservation for each commodity k while requiring
that 1 unit of flow be shipped from o(k) to d(k). Finally, (linear) constraints (47) ensure
that the capacity limit of any used roll is not exceeded, and constraints (48) require that
all variables be binary.
Applying either the convexification or the discretization approach of the extended D-W
decomposition to this formulation leads to the Gilmore and Gomory model, with a capacitated
shortest path problem as subproblem. Integrality may equivalently be required either for the
λ-variables or for the x^k -variables. It is more natural and easier to develop a branching scheme
based on the x^k -variables. However, there is a major issue here, the symmetry between rolls, that
must be taken into account. Branching schemes developed for several models are discussed
later in the paper.
Formulation (42)–(48) may be seen as a disaggregated form of the Kantorovitch formulation
(1)–(5). However, there is a difference: all patterns are extreme points of the
subproblem (44)–(48) and hence can be generated, if needed, by modifying the arc costs. This
is indeed done by considering different dual variable values during column generation: it
is sufficient to modify the arc costs in an adequate manner, e.g. by generating different dual
variable values at some column generation iteration (Ben Amor (1997)).
The binary case, i.e. when all demands b_i equal 1 and any item may not be cut more than once
in a pattern, presents some interesting particularities. First, each item is represented by a
single node and must be visited exactly once by a single route. This allows the aggregation of
all commodities, and the index k is no longer needed in the formulation. This has an important
effect on the branching strategy, as will be seen later.
3 Integer solutions
Up to the nineties, it was recognized that it was not easy to combine column generation
with tools to obtain integer solutions to the CSP. We quote the final comments in Gilmore
(1979):
“A linear programming formulation of a cutting stock problem results in a
matrix with many columns. A linear programming formulation of an integer
programming problem results in a matrix with many rows. Clearly a linear
programming formulation of an integer cutting stock problem results in a matrix
that has many columns and many rows. It is not surprising that it is difficult
to find exact solutions to the integer cutting stock problem.”
3.1 Heuristics
Gilmore and Gomory (1961) suggest rounding to obtain good quality integer solutions.
Rounding up the fractional variables of the optimal solution of the column generation model
guarantees a heuristic solution of value z_H ≤ z_LP + m, where z_LP is the optimum
of the LP relaxation. The First Fit Decreasing (FFD) heuristic provides solutions for the
BPP, and the CSP, with an absolute performance ratio of 3/2, i.e., z_H ≤ (3/2) z*, where z*
is the value of the optimum (Simchi-Levi (1994)). This ratio can be improved to 4/3, and
this bound is tight, if a heuristic based on the solution of the Gilmore-Gomory model is used
(Chan et al. (1998)).
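A minimal sketch of the FFD heuristic mentioned above (illustrative function name and data, not from the paper): items are sorted by non-increasing size and each one is placed in the first open bin where it fits.

```python
def ffd(sizes, W):
    """First Fit Decreasing: return the number of bins used."""
    bins = []                        # residual capacities of open bins
    for s in sorted(sizes, reverse=True):
        for j, r in enumerate(bins):
            if s <= r:
                bins[j] -= s
                break
        else:
            bins.append(W - s)       # open a new bin
    return len(bins)

# 4 bins for the W = 5 instance with four items of size 3 and three of size 2,
# which matches the space bound ceil(18/5) = 4.
print(ffd([3, 3, 3, 3, 2, 2, 2], 5))
```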
Rather than simply rounding up, more elaborate rounding heuristics were devised to
improve effectiveness. Wäscher and Gau (1996) did extensive computational experiments
with instances with average demands of 10 and 50, and found the optimal solutions in
almost all cases. However, if the average demand is very low, as in the BPP, where it can
be close to one or even equal to one, the variables often take fractional values below one, and it is not
so easy for heuristics to find the optimum. Other combinatorial enumeration techniques
can be used in this case, such as MTP (Martello and Toth (1990)) or BISON (Scholl et al.
(1997)). The lower bound given by the Gilmore-Gomory solution has also been used to
prove the optimality of solutions obtained with MTP (Schwerin and Wäscher (1999)).
To generate the initial solution for column generation, instead of using FFD, some
authors resort to pseudo-polynomial heuristics, based on the solution of a series of knapsack
problems. These are greedy procedures that iteratively select the best cutting pattern
using the items in a list. Initially, the list has all the items; at each iteration, the items in
the knapsack solution are removed, and the process is repeated until the list is exhausted.
Their computation time is not significant in the framework used, and they generally provide
better starting solutions, with, at least, some very good cutting patterns, even though the
last patterns may be very poor (see, for instance, Vanderbeck (2000b); Valério de Carvalho
(2003) for a definition of the knapsack problems used).
3.2 Branch-and-price
In the nineties, several attempts to combine column generation with branch-and-bound
succeeded in obtaining the optimum integer solution of larger instances of some integer pro-
gramming and combinatorial optimization problems. This technique has been denoted as
branch-and-price. Stronger LP models and good quality lower bounds are of vital im-
portance when using LP based approaches to solve integer problems. Branch-and-price
also proved to be a useful framework for the solution of quite large instances of both CSP
(1/V/I/R) and BPP (1/V/I/M), that other combinatorial enumeration branch-and-bound
algorithms failed to solve to optimality.
In branch-and-price, it is desirable to ensure compatibility between the restricted mas-
ter problem and the subproblem. First, the partition rule should not induce intractable
changes in the structure of the subproblem. Desirably, it should remain the same opti-
mization problem both during the solution of the LP relaxation and the branch-and-price
phase. The second issue is symmetry. Branching strategies should be devised that parti-
tion the solution space into mutually exclusive sets, which are explored in different nodes of
the branch-and-bound search tree. Symmetry is detrimental if the same solution is ex-
plored in different nodes of the branch-and-price tree. Other issues are also important to
obtain more robust algorithms. Balanced partition rules should be selected: the branching
constraints should partition the solution set evenly among subtrees. It is also desirable
to select the branching constraints so that stronger decisions are taken at lower levels
of the branch-and-bound tree. Most applications of branch-and-price are for problems with
binary variables (for a review, see Barnhart et al. (1998)). Finally, note that the CSP has
general integer variables, not restricted to be binary.
The strategy of imposing branching constraints directly on the variables of the refor-
mulated model poses the following difficulty: a column that is restricted by a branching
constraint in the master problem may turn out to be the most attractive column generated
by the subproblem. To deal with this problem, some authors keep track of the columns
already present in the master problem that must not be regenerated. The subproblem has
to be solved by an enumeration scheme that rejects the forbidden columns (Degraeve and
Schrage (1999); Degraeve and Peeters (2003); Belov (2003)). It is widely accepted that, for
most integer programming problems, the best strategy to combine column generation with
branch-and-bound is to use branching constraints based on the variables of the original
model (Desrosiers et al. (1995)).
Kantorovitch formulation
Even if it appears to be more natural, the efficiency of branching on the variables x^k is compromised
by the symmetry between rolls. For instance, fixing any x_{k0} to 1 or 0, meaning that
roll k is used or not, has no effect on the problem because all rolls are identical. Once
the number of rolls at optimality (or a lower and/or upper bound on it) is known, one may fix
the x_{k0} variables corresponding to these rolls to 1 and the others to 0, so that many variables are
eliminated from the problem. Moreover, bounding any x_{ki} (k = 1, . . . , K, i = 1, . . . , m) by
any integer value does not eliminate any fractional solution, because such a solution may
be retrieved by using another roll k for item i.
The solution of the CSP is composed of patterns, and each pattern is composed of a set of
items to be cut together on a same roll. Hence, at the item level, the information that is useful
to build a solution should give either a set of items to be (or not to be) cut on a same set of
rolls, or maximum and/or minimum numbers of copies of an item to be cut on a set of
rolls.
Vanderbeck’s rule Vanderbeck compared several branching schemes and suggested the
use of one that is based on the binary representation of the cutting pattern columns of the
′
Gilmore-Gomory model. The binary representation of a column Ap is a 0-1 vector Ap with
m
size m′ = d=1 md , with md = ⌈log2 (ldmax +1)⌉, being ldmax the upper bound on the number
P
′
of copies of item d in a cutting pattern, as defined above. We will denote the elements of Ap
′
as ak , k = 1, . . . , m′ , dropping the index p, for the sake of clarity. The binary representation
is such that the number of items produced for order d is ad = m
P d −1 j ′
j=0 2 apd +j , d = 1, . . . , m,
Pd−1
where pd = 1 + j=1 mj .
Given a fractional solution of Gilmore-Gomory model, it is always possible to find
subsets of rows O and P ⊂ {1, . . . , m′ }, and a subset of columns
′ ′
P̂ = {p ∈ P : ak = 0, ∀k ∈ O and ak = 1, ∀k ∈ P }
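A small sketch of this binary encoding (hypothetical helper, not from the paper): each count a_d is written on m_d = ⌈log_2(l_d^max + 1)⌉ bits, least significant bit first.

```python
from math import ceil, log2

def encode_pattern(a, l_max):
    """Return the 0-1 vector A' for a pattern a, given upper bounds l_max."""
    bits = []
    for a_d, l_d in zip(a, l_max):
        m_d = ceil(log2(l_d + 1))                       # bits needed for item d
        bits.extend((a_d >> j) & 1 for j in range(m_d))  # least significant bit first
    return bits

# e.g. pattern a = (1, 2) with l_max = (1, 3): m_1 = 1, m_2 = 2, so A' = [1, 0, 1],
# and decoding gives a_2 = 2^0*0 + 2^1*1 = 2 as in the formula above.
print(encode_pattern([1, 2], [1, 3]))
```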
Vance's rule Vance (1998) first proposed a branching scheme based on the integrality
requirements in (19)–(23). This strategy suffers, however, from the symmetry between rolls,
as said before, and was beaten by a quite complex branching strategy based on the Gilmore
and Gomory model variables for medium size problems. This latter strategy is based on
the total flow of columns containing a certain minimum number of copies of items from
a set S. It suffers from two major drawbacks: there is no guarantee on the size of S, and the
subproblem is no longer a knapsack, since many additional variables are needed. According
to the author, it is expected to become intractable even for nodes at small depth in the branching
tree. An improved strategy is based on the use of maximal patterns, but it amounts to
branching directly on the λ_p variables. The structure of the subproblem can be preserved for
one branch thanks to a specialized algorithm, and is highly affected for the other branch
(even the maximal pattern property is lost). This last strategy has been successfully used
to solve medium size problems.
Arc-flow model
The arc-flow model provides a branching scheme for a branch-and-price algorithm for the
CSP that preserves the structure of the subproblem, which is a longest path problem in
an acyclic digraph with modified costs that can be solved using dynamic programming.
Valério de Carvalho (1999) implemented a column generation algorithm based on a master
problem with variables of the arc-flow model. It starts with a subset of arcs corresponding
to an initial solution, and new arcs are added to the master problem if they belong to an
attractive path and are not already present in the master problem. In the arc-flow model,
there are flow conservation constraints for the nodes. If the new arcs are incident into
nodes not previously considered, new constraints are explicitly added to the formulation.
Branching constraints are imposed on single arc-flow variables of the master problem, of
the following type, where α = x_{ij} is the current fractional value: x_{ij} ≤ ⌊α⌋ in one branch, and
x_{ij} ≥ ⌈α⌉ in the other.
The model has symmetry, because different paths may correspond to the same cutting
pattern. In instances with a small average number of items per bin, which happen to
be rather difficult instances, if reduction criteria are used, there is low symmetry, and its
undesirable effects are not so harmful. Computational results show that the optimal solutions
of all the bin packing instances of the OR-Library (Beasley (1990)) were obtained. These instances have
demands of few items of each size. For example, the larger instances, of the t501 class,
have about 200 different item sizes and a total of 501 items, yielding an average demand
of about 2.5.
A different strategy that eliminates symmetry and preserves the structure of the sub-
problem is to use a master problem with the columns of the reformulated model of Gilmore-
Gomory and branching constraints based on the arc-flow variables, which are explicitly
added to the formulation (Ben Amor (1997), Alves and Valério de Carvalho (2003)). Any
column of Gilmore-Gomory’s model involves a unique set of arc-flow variables, if we con-
sider that items are placed by decreasing value of width. Branching constraints imposed
on a given arc-flow will constrain the value of a definite set of columns of the reformulated
model. Penalties and prizes resulting from branching constraints of the type greater-
than-or-equal-to or less-than-or-equal-to, respectively, only affect the reduced cost of the
corresponding arc in the subproblem.
Let π_d , d = 1, . . . , m, be the dual variables associated with the demand constraints, and
µ and ν the vectors of dual variables associated with the branching constraints of each
type. Let G^w_{(i,j)} ⊆ G^w and H^w_{(i,j)} ⊆ H^w be the sets of branching constraints of each type
imposed on arc (i, j), and let c̄_{i,j} denote the resulting modified cost of arc (i, j), which accounts
for the corresponding components of π, µ and ν. The subproblem is then solved by the dynamic
programming recursion

F_0 (0) = 0,

F_d (x) = max_{valid l: 0 ≤ l ≤ l_d^max} { F_{d−1} (x − l w_d ) + Σ_{k=0}^{l−1} c̄_{x−(k+1)w_d , x−k w_d} },  x = 0, 1, . . . , W,  d = 1, . . . , m,

where l_d^max = min(b_d , ⌊W/w_d ⌋) is the upper bound on the number of copies of item d in a
cutting pattern, which is limited by the demand for item d and the size of the roll.
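A literal sketch of this recursion follows, with assumptions that are not in the text: F_0(x) is taken as −∞ for x > 0, and arcs absent from the cost dictionary are given cost 0.

```python
def dp_table(widths, l_max, c_bar, W):
    """F[d][x] per the recursion above; c_bar maps arcs (i, j) to modified costs."""
    NEG = float("-inf")
    F = [[NEG] * (W + 1) for _ in range(len(widths) + 1)]
    F[0][0] = 0.0                                  # assumed starting state
    for d, w_d in enumerate(widths, start=1):
        for x in range(W + 1):
            for l in range(l_max[d - 1] + 1):      # try l copies of item d
                if l * w_d > x or F[d - 1][x - l * w_d] == NEG:
                    continue
                cost = sum(c_bar.get((x - (k + 1) * w_d, x - k * w_d), 0.0)
                           for k in range(l))      # cost of the l arcs used
                F[d][x] = max(F[d][x], F[d - 1][x - l * w_d] + cost)
    return F
```

The entries of F can then be scanned to recover the most attractive column.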
This branching scheme can be easily extended to branching constraints based on sets
of arcs incident on a given node. In this case, the dual variable of a branching constraint
acts on all arcs in the set. Nevertheless, the computational burden is heavier for instances
with larger values of roll widths.
VRP model
Integrality may equivalently be required either for the variables x_{ij}^k , the λ_p^k , or the λ_p . In the first two
cases, symmetry is very harmful to the efficiency of the branching scheme. Aggregating the flows on the arcs,
one obtains variables

x_{ij} = Σ_{k=1}^{K} x_{ij}^k

that count the number of columns (rolls) using arc (i, j). These variables must be integer
for any integer solution to the CSP. Hence, for any solution such that x_{ij} = α is fractional for
some arc (i, j), one creates two nodes by adding either the constraint

x_{ij} ≤ ⌊α⌋

or the constraint

x_{ij} ≥ ⌈α⌉

to the master problem. The dual variables associated with these constraints will affect the arc
costs in the subproblem at each column generation iteration, and the subproblem structure
remains unchanged. Since all columns (patterns) are extreme points of the subproblem,
any column that is needed to attain integer optimality can be generated in this way.
However, in the general integer case, i.e. when a node may be visited by more than one path
in an optimal integer solution, one can obtain a fractional solution (λ_p fractional) while
all x_{ij} are integer (an example is given in Ben Amor (1997)). In this case, there exists at
least one item i for which an associated node is visited more than once in the solution. A
new item with demand 1 is created while the demand of i is decreased by 1. The master
problem covering constraints are modified following these changes. The worst case happens
when all items are completely disaggregated and the problem becomes a BCSP where all
items have demand = 1. In this case the branching scheme using aggregated flow variables
is convergent since
xij ∈ {0, 1}, ∀(i, j) ⇔ λp ∈ {0, 1}, ∀p.
This branching scheme preserves the structure of the subproblem as a constrained shortest
path and decisions are directly enforced in the subproblem while the master problem size
remains unchanged. Constraints of the type x_{ij} = 0 simply amount to removing the arc
from the subproblem. On the other hand, constraints of the type x_{ij} = 1 amount to
removing all other arcs out of node i and all other arcs into node j. It was successfully used to solve
the VRP with time windows using branch-and-price (Desrochers et al. (1992)) and proved
to be efficient for BCSP (Ben Amor (1997)). The main difficulty in BCSP is the solution
of linear relaxation by column generation. But Ben Amor (2002) showed that this may be
done as efficiently as for the corresponding CSP either by aggregating identical size items
or by using deep dual-optimal inequalities (Ben Amor et al. (2003)).
Branching on aggregated flow variables has many interesting properties. First, it al-
lows the use of several score functions to choose the branching variable. Scores take into
account the weights of the items. The lower bounds and preprocessing procedures of Martello and
Toth (1990) can be extended to the problems obtained at each branch-and-bound node.
The experiments carried out by Ben Amor (1997) show that no disaggregation was
necessary and that the branching scheme turns out to be efficient for the commonly used test
problems. Even for the binary case, branching on aggregated flow variables turns out to be
very efficient. Also, fixing several variables at once proved to be efficient in a depth-first
strategy, and no backtracking proved necessary. This is due to the fact that nearly
all the solved problems have the integer round-up property, i.e. the optimal integer value is obtained
by rounding up the linear relaxation optimal value.
General comments
An efficient branching scheme for column generation should have a smaller tree than the
one resulting from branching directly on the λ-variables. The rules of Vanderbeck and Vance
have the drawback of significantly modifying the subproblem structure. Moreover, the
number of possible branching nodes is very high. The branching scheme based on the VRP model
has the advantage of preserving a constrained shortest path as subproblem. A possible disaggregation
may lead to subproblems of larger size deeper in the branching tree, which breaks
the rule that, deeper in the branching tree, subproblems and, desirably, master problems
should become easier to solve. Compared to this method, the branching based on the
arc-flow model presents the advantage that no disaggregation is needed. Moreover, both
strategies allow using more sophisticated branching rules based on the x_{ij} variables. Besides
all these intrinsic differences, all branching schemes lead to the efficient solution of the classical
test problems. This is due to the fact that these problems have zero gap, to the use of a depth-first
strategy, and to the use of several heuristics.
4 Stabilization
Column generation processes are known to have slow convergence and degeneracy prob-
lems. There are often large oscillations in the values of the dual variables from one iteration
to the next. Primal degeneracy also arises: in many iterations, adding new columns to
the restricted master problem does not help to improve the objective value. Recent com-
putational experiments show that these problems can be mitigated using dual-optimal
inequalities (Ben Amor et al. (2003)) and stabilization methods (Ben Amor (2002)).
From the dual standpoint, column-generation processes can be viewed as dual cutting-
plane algorithms (Kelley Jr. (1961)), in which the restricted set of variables used to initialize
the restricted master problem provides a first relaxation of the dual space. Clearly, better
heuristics for the starting solution provide tighter relaxations. Then, at each iteration, dual
feasibility cuts are added to the model to eliminate the previous undesired dual solution.
The dual-space relaxation is successively tightened, until a feasible dual solution to the
entire problem is found.
Consider the LP relaxation of the CSP, min{cx : Ax = b, x ≥ 0}, where the columns
of A correspond to valid cutting patterns, whose dual is max{πb : πA ≤ c}. The following
inequalities are a family of dual-optimal inequalities (Ben Amor (2002); Valério de Carvalho
(2003)), meaning that they are valid inequalities for the optimal dual space of the CSP.
Any optimal dual solution will obey:
−π_i + Σ_{s∈S} π_s ≤ 0,  ∀i, S,    (51)

for any given width w_i and a corresponding set S of item widths, indexed by s, such that
Σ_{s∈S} w_s ≤ w_i . Intuitively, from the primal point of view, these columns mean that an item
of a given size w_i can be split and used to fulfill the demand of smaller orders, provided
the sum of their widths is smaller than or equal to the initial size.
If we add at initialization time a set of dual-optimal inequalities to the dual problem,
πD ≤ d, we get the following primal-dual pair:
min cx + dy    s.t. Ax + Dy = b,   x, y ≥ 0,

max πb    s.t. πA ≤ c,   πD ≤ d.
The motivation for the use of dual-optimal inequalities is the following: the dual space
is restricted during all the column generation iterations, but the new columns added in
the primal problem, relaxing the primal space, are not a problem, because eventually it
is possible to recover an optimal solution to the original problem. Valério de Carvalho
(2003) added a set of dual-optimal inequalities from sets S of small cardinality (|S| ≤ 2):
from sets of cardinality 1, −π_i + π_{i+1} ≤ 0, i = 1, . . . , m − 1, and from sets of cardinality 2,
inequalities of the type −π_i + π_j + π_k ≤ 0, one for each value of i, generated using the
smallest index j (corresponding to the largest width w_j ), if such a value exists, for which there
is a k such that w_i ≥ w_j + w_k . The total number of dual-optimal inequalities is less
than 2m.
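A small sketch of how these cuts can be enumerated (the function name is hypothetical, and the choice of k for the cardinality-2 cuts, which is not fully specified in the text, is taken here as the smallest feasible index):

```python
def dual_optimal_cuts(w):
    """w: item widths sorted in decreasing order.
    Returns cuts as (i, S) pairs, each meaning -pi_i + sum(pi_s for s in S) <= 0."""
    m = len(w)
    cuts = [(i, [i + 1]) for i in range(m - 1)]           # |S| = 1 cuts
    for i in range(m):                                     # |S| = 2 cuts
        for j in range(m):                                 # smallest j, i.e. largest w_j
            ks = [k for k in range(j, m) if w[j] + w[k] <= w[i]]
            if ks:
                cuts.append((i, [j, ks[0]]))               # k may equal j (assumption)
                break
    return cuts                                            # fewer than 2m cuts in total

# e.g. for widths (7, 4, 3, 2): includes (0, [1, 2]), i.e. -pi_7 + pi_4 + pi_3 <= 0.
print(dual_optimal_cuts([7, 4, 3, 2]))
```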
Computational experiments show a significant reduction in the number of columns generated
and in the number of degenerate iterations. The savings are more impressive in larger, more difficult
instances, where there is an explosion in the number of possible columns. For some instances,
the speed-up factor is approximately 4.5, and the percentage of degenerate iterations falls
from approximately 39.8% to about 8.5%. Ben Amor (2002) conducted a similar study on
the classical test problems and on another set of more difficult test problems. The results show an
impressive reduction in the master problem CPU time and, especially, in the number of column
generation iterations.
The dual-optimal inequalities correspond to cycles in the space of the arc-flow variables,
where exactly one arc (the one corresponding to item i in Equation (51)) is traversed in
the direction opposite to its orientation. Combining a path and a cycle produces a new
valid path. Valério de Carvalho (2003) provides an algorithm to drive the y variables to 0,
when they take a positive value in the optimal solution, retrieving a valid optimal primal
solution to the original CSP. Actually, this is not needed if a perturbation technique is
used that amounts to giving an ε cost to the cycles (Ben Amor et al. (2003)).
Even better results can be obtained if we use inequalities, that will be denoted as deep
dual-optimal inequalities, that cut portions of the dual optimal space, but preserve, at
least, one dual optimal solution (Ben Amor et al. (2003)). If a dual-optimal solution for
the problem, π̃ ∗ , is known in advance, the following stabilized primal and dual problems
can be used:
min cx − (π̃* − ∆)y_1 + (π̃* + ∆)y_2    s.t. Ax − y_1 + y_2 = b,   x ≥ 0, y_1 ≥ 0, y_2 ≥ 0,

whose dual is

max πb    s.t. πA ≤ c,   π̃* − ∆ ≤ π ≤ π̃* + ∆,

where ∆ ∈ R^m , ∆ > 0.
The stabilized dual problem is constructed in such a way that the dual solution is restricted
to a non-empty box strictly containing the known optimal dual solution. The
stabilization method amounts to penalizing the dual variables when they lie outside the predefined
box, and it enforces the selection of a valid optimal primal solution of the original
problem (du Merle et al. (1999)). Computational experiments were run with instances denoted
as triplets (Beasley (1990)), because their optimal solution has exactly three items per
bin, which exactly fulfill its capacity. If an instance of the CSP has no loss at optimality,
the solution π_i* = w_i /W, i ∈ I, is an optimal dual solution, because assigning these values to
the dual constraints Σ_{i∈I} a_{ip} π_i ≤ 1 simply replicates the knapsack constraint used to build
the feasible patterns, and the corresponding dual objective function reaches the optimal
value Σ_{i=1}^{m} b_i w_i /W (Ben Amor (1997)). The computational results are impressive: the
speed-up factor is approximately 10.0.
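The following is a minimal sketch of this box-stabilized restricted master, again assuming the PuLP library; the patterns, the box half-width ∆ and the data are illustrative, and the dual center is the π̃* = w_i/W vector discussed above for zero-loss instances.

```python
import pulp

W, w, b = 9, [5, 3, 1], [10, 10, 10]            # hypothetical triplet-like data
patterns = [[1, 1, 1], [0, 3, 0], [1, 0, 4]]    # initial columns a^p (zero-loss patterns)
center = [wi / W for wi in w]                    # dual center pi_tilde*
delta = [0.05] * len(w)                          # box half-width (a tuning choice)

rm = pulp.LpProblem("stabilized_master", pulp.LpMinimize)
lam = [pulp.LpVariable(f"lam_{p}", lowBound=0) for p in range(len(patterns))]
y1 = [pulp.LpVariable(f"y1_{i}", lowBound=0) for i in range(len(w))]
y2 = [pulp.LpVariable(f"y2_{i}", lowBound=0) for i in range(len(w))]

rm += (pulp.lpSum(lam)
       - pulp.lpSum((center[i] - delta[i]) * y1[i] for i in range(len(w)))
       + pulp.lpSum((center[i] + delta[i]) * y2[i] for i in range(len(w))))
for i in range(len(w)):                          # A lam - y1 + y2 = b
    rm += (pulp.lpSum(patterns[p][i] * lam[p] for p in range(len(patterns)))
           - y1[i] + y2[i] == b[i], f"demand_{i}")

rm.solve(pulp.PULP_CBC_CMD(msg=False))
duals = [rm.constraints[f"demand_{i}"].pi for i in range(len(w))]
print(pulp.value(rm.objective), duals)           # duals lie in the box around pi_tilde*
```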
Computational results also show that convergence for different equivalent primal models
is similar provided that their dual optimal spaces are equally restricted. Ben Amor (2002)
compares two models for the CSP problem: in the first, the aggregated CSP, items of
the same size were aggregated in the same constraint, as is usually done, while, in the
second, the binary disaggregated CSP, items are considered in separate constraints, but
dual inequalities impose equal dual values for items of the same size. The number of
columns generated in both cases is remarkably similar, even though the models have very
different sizes.
Finally, a proximal stabilized column generation algorithm proved to be very efficient in
solving the CSP linear relaxation (Ben Amor (2002)). The key issue is that the dual vector π*
defined above is a good initial solution for difficult problems (those with very small
loss); even if it is not very close to a dual optimal solution for other problems, it may
still have the nice property that the distribution of its components is close to that of
an optimal solution.
5 Extensions
One extension of the CSP is the multiple lengths cutting stock problem (MLCSP) and
its counterpart, the variable sized bin packing problem (VSBPP). They are variants of the
standard problem in which large objects of different capacities are allowed. The arc flow
model can also be extended to formulate these problems. Using a Dantzig-Wolfe decompo-
sition, one obtains the machine balance problem formulation of Gilmore-Gomory (Valério
de Carvalho (2002)). A branch-and-price algorithm for a version with limited availabil-
ity of the bins, based on a master problem with columns of the reformulated model and
branching constraints based on the arc-flow variables, proved to be adequate for both the
MLCSP and the VSBPP (Alves and Valério de Carvalho (2003)).
Again, the advantage of aggregating items of the same size into a single group, as well
as of aggregating bins, thus reducing symmetry, enables solving in a few seconds all
the VSBPP instances proposed in Monaci (2002), for which a combinatorial enumeration
algorithm failed to solve 65 out of the 300 instances within a time limit of 900 seconds.
The larger instances have a maximum of 5 different types of bins and up to 500 items.
Other classes of instances, with about 25 different item sizes and 14 different bin capacities,
were also solved in approximately one second on average. The algorithm has also been applied
to the MLCSP instances proposed in Belov (2003), whose approach uses a dynamic programming
enumeration scheme with forbidden columns, providing comparable results; it was able to solve 44
out of 50 instances with 4 different types of bins, with capacities ranging from 5000 to
10000.
Another extension of the one-dimensional CSP is a version in which the number of
setups is minimized. It is a problem of great practical importance, because there are usually
significant setup costs associated with changing from one cutting pattern to another. This
problem is much more complex than the classical one-dimensional CSP, because additional
binary variables are needed to indicate when a given cutting pattern is selected, and the
resulting model has much larger duality gaps. This problem had only been previously
tackled with heuristics. A branch-and-price-and-cut algorithm was applied to instances
with up to 200 items. Computational results show that optimal solutions were obtained
in 12 out of 16 instances, while in the remaining, solutions were found within one unit of
optimality (Vanderbeck (2000a)).
6 Future research
The stabilization of the solution of the LP relaxation of the CSP produces impressive
results. Instances with a much larger number of different item sizes can now be tackled,
and their LP solutions and the corresponding bounds found in reasonable time. The
application of similar ideas to the branch-and-price phase requires further investigation.
The experience with the CSP also shows that it may be worthwhile to investigate and
characterize the structure of the dual optimal space of other integer-programming and
combinatorial-optimization problems.
Models with original variables provide additional insight that can be used to derive
more balanced and powerful branching rules, such as the ones that result from hyperplane
branching, and primal cuts expressed in terms of the original variables. Many integer programming
and combinatorial optimization problems can be represented as pure network
models, or as network models with side constraints. Master problems with the columns of the
reformulated model and branching constraints based on the original variables
seem to be a promising approach for branch-and-price algorithms for this type of problem.
References
Ahuja, R., Magnanti, T., and Orlin, J. (1993). Network Flows: Theory, Algorithms and
Applications. Prentice Hall, Englewood Cliffs, New Jersey.
Barnhart, C., Johnson, E.L., Nemhauser, G., Savelsbergh, M., and Vance, P. (1998).
Branch-and-Price: Column generation for solving huge integer programs. Operations
Research, 46:316–329.
Beasley, J.E. (1990). Or-library: Distributing test problems by electronic mail. Journal of
the Operational Research Society, 41:1060–1072.
Belov, G. (2003). Problems, Models and Algorithms in One- and Two-Dimensional Cutting.
PhD thesis, Fakultät Mathematik und Naturwissenschaften der Technischen Universität
Dresden, Dresden, Germany.
Ben Amor, H. (1997). Résolution du problème de découpe par génération de colonnes.
Master’s thesis, École Polytechnique de Montréal, Canada.
Ben Amor, H. (2002). Stabilisation de l'algorithme de génération de colonnes. PhD thesis,
École Polytechnique de Montréal, Canada.
Ben Amor, H., Desrosiers, J., and Valério de Carvalho, J.M. (2003). Dual-optimal in-
equalities for stabilized column generation. Les Cahiers du GERAD G-2003-20, HEC
Montréal, Canada.
Chan, L., Simchi-Levi, D., and Bramel, J. (1998). Worst case analysis, linear programming
and the bin-packing problem. Mathematical Programming, 83:213–227.
Dantzig, G. and Wolfe, P. (1960). Decomposition principle for linear programs. Operations
Research, 8:101–111.
Degraeve, Z. and Peeters, M. (2003). Optimal integer solutions to industrial cutting-stock
problems: Part 2, benchmark results. INFORMS Journal on Computing, 15:58–81.
Degraeve, Z. and Schrage, L. (1999). Optimal integer solutions to industrial cutting stock
problems. INFORMS Journal on Computing, 11:406–419.
Desrochers, M., Desrosiers, J., and Salomon, M. (1992). A new optimization algorithm for
the vehicle routing problem with time windows. Operations Research, 40:342–354.
Desrosiers, J., Dumas, Y., Salomon, M., and Soumis, F. (1995). Time constrained routing
and scheduling. In Handbooks in Operations Research & Management Science 8, Network
Routing, pages 35–139. Elsevier Science B. V.
Dowsland, K.A. and Dowsland, W.B. (1992). Packing problems. European Journal of
Operational Research, 56:2–14.
du Merle, O., Villeneuve, D., Desrosiers, J., and Hansen, P. (1999). Stabilized column
generation. Discrete Math., 194:229–237.
Dyckhoff, H. (1990). A typology of cutting and packing problems. European Journal of
Operational Research, 44:145–159.
Dyckhoff, H. and Finke, U. (1992). Cutting and Packing in Production and Distribution:
a typology and bibliography. Physica-Verlag, Heidelberg.
Dyckhoff, H., Scheithauer, G., and Terno, J. (1997). Cutting and packing. In Annotated
Bibliographies in Combinatorial Optimization, pages 393–413. John Wiley and Sons,
Chichester.
Geoffrion, A. (1974). Lagrangian relaxation and its uses in integer programming. Mathe-
matical Programming Study, 2:82–114.
Gilmore, P. and Gomory, R. (1961). A linear programming approach to the cutting stock
problem. Operations Research, 9:849–859.
Gilmore, P. and Gomory, R. (1963). A linear programming approach to the cutting stock
problem – part II. Operations Research, 11:863–888.
Goulimis, C. (1990). Optimal solutions to the cutting stock problem. European Journal of
Operational Research, 44:197–208.
Helmberg, C. (1995). Cutting aluminium coils with high lengths variabilities. Annals of
Operations Research, 57:175–189.
Kelley Jr., J.E. (1961). The cutting-plane method for solving convex programs. J. Soc.
Ind. Appl. Math., 8(4):703–712.
Marcotte, O. (1985). The cutting stock problem and integer rounding. Mathematical
Programming, 33:82–92.
Marcotte, O. (1986). An instance of the cutting stock problem for which the rounding
property does not hold. Operations Research Letters, 4:239–243.
Martello, S. and Toth, P. (1990). Knapsack Problems: Algorithms and Computer Imple-
mentations. Wiley, New York.
Monaci, M. (2002). Algorithms for Packing and Scheduling Problems. PhD thesis, Univer-
sità Degli Studi di Bologna, Bologna, Italy.
Rietz, J. and Scheithauer, G. (2002). Tighter bounds for the gap and non-IRUP construc-
tions in the one-dimensional cutting stock problem. Optimization, 51(6):927 – 963.
Scheithauer, G. and Terno, J. (1995). The modified integer round-up property of the one-
dimensional cutting stock problem. European Journal of Operational Research, 84:562–
571.
Scholl, P., Klein, R., and Juergens, C. (1997). Bison: a fast hybrid procedure for exactly
solving the one-dimensional bin-packing problem. Computers and Operations Research,
24:627–645.
Schwerin, P. and Wäscher, G. (1999). A new lower bound for the bin-packing problem and
its integration into MTP. Pesquisa Operacional, Special Issue on Cutting and Packing
Problems, 19:111–129.
Sessions, J., Layton R. and Guanda, L. (1988). Improving tree bucking decisions: A
network approach. The Compiler, 6:5–9.
Sessions, J., Olsen E. and Garland, J. (1989). Tree bucking for optimal stand value with
log allocation constraints. Forest World, 35:271–276.
Shapiro, J. (1968). Dynamic programming algorithms for the integer programming problem
I: The integer programming problem viewed as a knapsack type problem. Operations
Research, 16:103–121.
Simchi-Levi, D. (1994). New worst-case results for the bin-packing problem. Naval Research
Logistics, 41:579–585.
Valério de Carvalho, J.M. (1999). Exact solution of bin-packing problems using column
generation and branch-and-bound. Annals of Operations Research, 86:629–659.
Valério de Carvalho, J.M. (2002). LP models for bin-packing and cutting stock problems.
European Journal of Operational Research, 141(2):253–273.
Valério de Carvalho, J.M. (2003). Using extra dual cuts to accelerate column generation.
INFORMS Journal on Computing (in press).
Valério de Carvalho, J.M. and Guimarães Rodrigues, A.J. (1995). An LP based approach to
a two–phase cutting stock problem. European Journal of Operational Research, 84:580–
589.
Vance, P. (1998). Branch-and-Price algorithms for the one-dimensional cutting stock prob-
lem. Computational Optimization and Applications, 9:211–228.
Vanderbeck, F. (2000a). Exact algorithm for minimising the number of setups in the
one-dimensional cutting stock problem. Operations Research, 48:915–926.
Wäscher, G. and Gau, T. (1996). Heuristics for the integer one-dimensional cutting stock
problem: a computational study. OR Spektrum, 18:131–144.