MOSEK manual.
Version 6.0 (Revision 61).
Published by MOSEK ApS, Denmark.
Copyright (c) 1998-2010 MOSEK ApS, Denmark. All rights reserved.
Disclaimer: MOSEK ApS (the author of MOSEK) accepts no responsibility for damages resulting
from the use of the MOSEK software and makes no warranty, either expressed or implied,
including, but not limited to, any implied warranty of fitness for a particular purpose. The
software is provided as is, and you, its user, assume all risks when using it.
Contact information
Phone +45 3917 9907
Fax +45 3917 9823
WEB http://www.mosek.com
Email sales@mosek.com Sales, pricing, and licensing.
support@mosek.com Technical support, questions and bug reports.
info@mosek.com Everything else.
Mail MOSEK ApS
C/O Symbion Science Park
Fruebjergvej 3, Box 16
2100 Copenhagen
Denmark
Contents
1 Changes and new features in MOSEK 3
1.1 Compilers used to build MOSEK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 General changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Optimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Interior point optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 The simplex optimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.3 Mixed-integer optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 License system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Other changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Platform changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 The MOSEK optimization tools 7
2.1 What is MOSEK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 How to use this manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 Getting support and help 9
3.1 MOSEK documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 Additional reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4 Using the MOSEK command line tool 11
4.1 Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.2.1 Linear optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.2.2 Quadratic optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.2.3 Conic optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3 Passing options to the command line tool . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.4 Reading and writing problem data files . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.4.1 Reading compressed data files . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4.2 Converting from one format to another . . . . . . . . . . . . . . . . . . . . . . 18
4.5 Hot-start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.5.1 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.6 Further information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.7 Solution file filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5 MOSEK and AMPL 21
5.1 Invoking the AMPL shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Applicability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.3 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.4 Determining the outcome of an optimization . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.5 Optimizer options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.5.1 The MOSEK parameter database . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.5.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.6 Constraint and variable names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.7 Which solution is returned to AMPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.8 Hot-start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.9 Sensitivity analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.10 Using the command line version of the AMPL interface . . . . . . . . . . . . . . . . . . 28
6 MOSEK and GAMS 29
7 MOSEK and MATLAB 31
8 Interfaces to MOSEK 33
8.1 The optimizer API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
9 Modelling 35
9.1 Linear optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
9.1.1 Duality for linear optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
9.1.2 Primal and dual infeasible case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
9.2 Quadratic and quadratically constrained optimization . . . . . . . . . . . . . . . . . . . 38
9.2.1 A general recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
9.2.2 Reformulating as a separable quadratic problem . . . . . . . . . . . . . . . . . . 39
9.3 Conic optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
9.3.1 Duality for conic optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
9.3.2 Infeasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
9.3.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
9.3.4 Potential pitfalls in conic optimization . . . . . . . . . . . . . . . . . . . . . . . . 46
9.4 Nonlinear convex optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9.4.1 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
9.5 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
9.5.1 Avoid near infeasible models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.6 Examples continued . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.6.1 The absolute value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.6.2 The Markowitz portfolio model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
10 The optimizers for continuous problems 55
10.1 How an optimizer works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
10.1.1 Presolve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
10.1.2 Dualizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
10.1.3 Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
10.1.4 Using multiple CPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
10.2 Linear optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
10.2.1 Optimizer selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
10.2.2 The interior-point optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
10.2.3 The simplex based optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.2.4 The interior-point or the simplex optimizer? . . . . . . . . . . . . . . . . . . . . . 63
10.2.5 The primal or the dual simplex variant? . . . . . . . . . . . . . . . . . . . . . . . 64
10.3 Linear network optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
10.3.1 Network flow problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
10.3.2 Embedded network problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
10.4 Conic optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.4.1 The interior-point optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.5 Nonlinear convex optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.5.1 The interior-point optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.6 Solving problems in parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.6.1 Thread safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.6.2 The parallelized interior-point optimizer . . . . . . . . . . . . . . . . . . . . . . . 67
10.6.3 The concurrent optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.7 Understanding solution quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
10.7.1 The solution summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
11 The optimizer for mixed integer problems 73
11.1 Some notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
11.2 An important fact about integer optimization problems . . . . . . . . . . . . . . . . . . 74
11.3 How the integer optimizer works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
11.3.1 Presolve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
11.3.2 Heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
11.3.3 The optimization phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
11.4 Termination criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
11.5 How to speed up the solution process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
11.6 Understanding solution quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
11.6.1 Solution summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
12 The analyzers 79
12.1 The problem analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
12.1.1 General characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
12.1.2 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
12.1.3 Linear constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.1.4 Constraint and variable bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.1.5 Quadratic constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
12.1.6 Conic constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
12.2 Analyzing infeasible problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
12.2.1 Example: Primal infeasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
12.2.2 Locating the cause of primal infeasibility . . . . . . . . . . . . . . . . . . . . . . 84
12.2.3 Locating the cause of dual infeasibility . . . . . . . . . . . . . . . . . . . . . . . . 85
12.2.4 The infeasibility report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
12.2.5 Theory concerning infeasible problems . . . . . . . . . . . . . . . . . . . . . . . . 89
12.2.6 The certificate of primal infeasibility . . . . . . . . . . . . . . . . . . . . . . . 89
12.2.7 The certificate of dual infeasibility . . . . . . . . . . . . . . . . . . . . . . . . 90
13 Feasibility repair 91
13.1 The main idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
13.2 Feasibility repair in MOSEK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
13.2.1 Usage of negative weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
13.2.2 Automatic naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
13.2.3 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
14 Sensitivity analysis 95
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
14.2 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
14.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
14.4 Sensitivity analysis for linear problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
14.4.1 The optimal objective value function . . . . . . . . . . . . . . . . . . . . . . . . . 95
14.4.2 The basis type sensitivity analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 97
14.4.3 The optimal partition type sensitivity analysis . . . . . . . . . . . . . . . . . . . 97
14.4.4 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
14.5 Sensitivity analysis with the command line tool . . . . . . . . . . . . . . . . . . . . . . . 102
14.5.1 Sensitivity analysis specification file . . . . . . . . . . . . . . . . . . . . . . . 102
14.5.2 Example: Sensitivity analysis from command line . . . . . . . . . . . . . . . . . . 103
14.5.3 Controlling log output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
A MOSEK command line tool reference 105
A.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
A.2 Command line arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
A.3 The parameter file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
A.3.1 Using the parameter file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
B The MPS file format 109
B.1 The MPS file format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
B.1.1 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
B.1.2 NAME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
B.1.3 OBJSENSE (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
B.1.4 OBJNAME (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
B.1.5 ROWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
B.1.6 COLUMNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
B.1.7 RHS (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
B.1.8 RANGES (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
B.1.9 QSECTION (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
B.1.10 BOUNDS (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
B.1.11 CSECTION (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
B.1.12 ENDATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
B.2 Integer variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
B.3 General limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
B.4 Interpretation of the MPS format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
B.5 The free MPS format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
C The LP file format 121
C.1 A warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
C.2 The LP file format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
C.2.1 The sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
C.2.2 LP format peculiarities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
C.2.3 The strict LP format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
C.2.4 Formatting of an LP file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
D The OPF format 129
D.1 Intended use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
D.2 The file format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
D.2.1 Sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
D.2.2 Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
D.2.3 Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
D.3 Parameters section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
D.4 Writing OPF files from MOSEK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
D.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
D.5.1 Linear example lo1.opf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
D.5.2 Quadratic example qo1.opf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
D.5.3 Conic quadratic example cqo1.opf . . . . . . . . . . . . . . . . . . . . . . . . . . 137
D.5.4 Mixed integer example milo1.opf . . . . . . . . . . . . . . . . . . . . . . . . . . 138
E The XML (OSiL) format 141
F The solution file format 143
F.1 The basic and interior solution files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
F.2 The integer solution file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
G The ORD file format 145
G.1 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
H Parameters reference 147
H.1 Parameter groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
H.1.1 Logging parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
H.1.2 Basis identification parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
H.1.3 The Interior-point method parameters. . . . . . . . . . . . . . . . . . . . . . . . . 149
H.1.4 Simplex optimizer parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
H.1.5 Primal simplex optimizer parameters. . . . . . . . . . . . . . . . . . . . . . . . . 153
H.1.6 Dual simplex optimizer parameters. . . . . . . . . . . . . . . . . . . . . . . . . . 154
H.1.7 Network simplex optimizer parameters. . . . . . . . . . . . . . . . . . . . . . . . 154
H.1.8 Nonlinear convex method parameters. . . . . . . . . . . . . . . . . . . . . . . . . 154
H.1.9 The conic interior-point method parameters. . . . . . . . . . . . . . . . . . . . . 155
H.1.10 The mixed-integer optimization parameters. . . . . . . . . . . . . . . . . . . . . . 155
H.1.11 Presolve parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
H.1.12 Termination criterion parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . 158
H.1.13 Progress call-back parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
H.1.14 Non-convex solver parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
H.1.15 Feasibility repair parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
H.1.16 Optimization system parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
H.1.17 Output information parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
H.1.18 Extra information about the optimization problem. . . . . . . . . . . . . . . . . . 163
H.1.19 Overall solver parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
H.1.20 Behavior of the optimization task. . . . . . . . . . . . . . . . . . . . . . . . . . . 165
H.1.21 Data input/output parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
H.1.22 Analysis parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
H.1.23 Solution input/output parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . 171
H.1.24 Infeasibility report parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
H.1.25 License manager parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
H.1.26 Data check parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
H.1.27 Debugging parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
H.2 Double parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
H.3 Integer parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
H.4 String parameter types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
I Symbolic constants reference 281
I.1 Constraint or variable access modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
I.2 Function opcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
I.3 Function operand type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
I.4 Basis identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
I.5 Bound keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
I.6 Specifies the branching direction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
I.7 Progress call-back codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
I.8 Types of convexity checks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
I.9 Compression types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
I.10 Cone types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
I.11 CPU type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
I.12 Data format types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
I.13 Double information items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
I.14 Double parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
I.15 Feasibility repair types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
I.16 License feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
I.17 Integer information items. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
I.18 Information item types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
I.19 Input/output modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
I.20 Integer parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
I.21 Language selection constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
I.22 Long integer information items. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
I.23 Mark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
I.24 Continuous mixed-integer solution type . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
I.25 Integer restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
I.26 Mixed-integer node selection types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
I.27 MPS file format type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
I.28 Message keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
I.29 Network detection method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
I.30 Objective sense types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
I.31 On/off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
I.32 Optimizer types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
I.33 Ordering strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
I.34 Parameter type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
I.35 Presolve method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
I.36 Problem data items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
I.37 Problem types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
I.38 Problem status keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
I.39 Interpretation of quadratic terms in MPS files . . . . . . . . . . . . . . . . . . . . . 333
I.40 Response codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
I.41 Response code type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
I.42 Scaling type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
I.43 Scaling type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
I.44 Sensitivity types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
I.45 Degeneracy strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
I.46 Exploit duplicate columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
I.47 Hot-start type employed by the simplex optimizer . . . . . . . . . . . . . . . . . . . . . 354
I.48 Problem reformulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
I.49 Simplex selection strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
I.50 Solution items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
I.51 Solution status keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
I.52 Solution types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
I.53 Solve primal or dual form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
I.54 String parameter types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
I.55 Status keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
I.56 Starting point types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
I.57 Stream types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
I.58 Integer values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
I.59 Variable types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
I.60 XML writer output mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
J Problem analyzer examples 363
J.1 air04 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
J.2 arki001 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
J.3 Problem with both linear and quadratic constraints . . . . . . . . . . . . . . . . . . . . . 365
J.4 Problem with both linear and conic constraints . . . . . . . . . . . . . . . . . . . . . . . 367
License agreement
Before using the MOSEK software, please read the license agreement available in the distribution at
mosek\6\license.pdf
Chapter 1
Changes and new features in
MOSEK
This section presents improvements and new features added to MOSEK in version 6.0.
1.1 Compilers used to build MOSEK
MOSEK has been built with the compilers shown in Table 1.1.

Platform      C compiler
linux32x86    Intel C 11.0 (gcc 4.3, glibc 2.3.4)
linux64x86    Intel C 11.0 (gcc 4.3, glibc 2.3.4)
osx32x86      Intel C 11.1 (gcc 4.0)
osx64x86      Intel C 11.1 (gcc 4.0)
solaris32x86  Sun Studio 12
solaris64x86  Sun Studio 12
win32x86      Intel C 11.0 (VS 2005)
win64x86      Intel C 11.0 (VS 2005)

Table 1.1: Compiler versions used to build MOSEK.
1.2 General changes
- A problem analyzer is now available. It generates a simple report with statistics and
  information about the optimization problem, including relevant warnings about the problem
  formulation.

- A solution analyzer is now available.
- All timing measures are now wall clock times.

- MOSEK employs version 1.2.3 of the zlib library.

- MOSEK employs version 11.6.1 of the FLEXnet licensing tools.

- The convexity of quadratic and quadratically constrained optimization problems is checked
  explicitly.

- On Windows all DLLs and EXEs are now signed.

- On all platforms the JAR files are signed.

- MOSEK no longer handles ctrl-c; the user is responsible for terminating MOSEK in the
  callback.
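For background on the explicit convexity check mentioned in this section, the condition being verified is the standard positive-semidefiniteness requirement from convex analysis (this is textbook material, not a statement taken from this manual):

```latex
% A quadratically constrained problem
%
%   minimize    (1/2) x^T Q^o x + c^T x
%   subject to  (1/2) x^T Q^k x + a_k^T x \leq b_k,  k = 1, ..., m
%
% is convex exactly when every quadratic term is positive semidefinite,
% i.e. the check must verify
z^T Q^o z \geq 0
\quad \text{and} \quad
z^T Q^k z \geq 0
\quad \text{for all } z \in \mathbb{R}^n, \; k = 1, \dots, m.
```

If any Q matrix fails this condition, the problem is nonconvex and is rejected rather than solved incorrectly.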
1.3 Optimizers
1.3.1 Interior point optimizer
- The speed and stability of the interior-point optimizer for linear problems have been improved.

- The speed and stability of the interior-point optimizer for conic problems have been improved.
  In particular, it is much better at dealing with primal or dual infeasible problems.
1.3.2 The simplex optimizers
- Presolve is now much more effective for simplex optimizer hot-starts.
1.3.3 Mixed-integer optimizer
- The stopping criteria for the mixed-integer optimizer have been changed to conform better
  with industry standards.
1.4 License system
- The license conditions have been relaxed so that a license is shared among all tasks using a
  single environment. This means that running several optimizations in parallel consumes only
  one license, as long as the associated tasks share a single MOSEK environment. Please note
  this is NOT useful when using the MATLAB parallel toolbox.

- By default a license remains checked out for the lifetime of the environment. This behavior
  can be changed using the parameter MSK_IPAR_CACHE_LICENSE.

- FLEXlm has been upgraded from version 11.4 to version 11.6.
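As an illustration, license caching could be disabled via a parameter file for the command line tool. This is only a sketch: the parameter file syntax is described in Appendix A.3, and MSK_OFF is assumed to be the appropriate on/off constant from Section I.31.

```
BEGIN MOSEK
% Release the license when the optimization finishes instead of
% keeping it checked out for the lifetime of the environment.
MSK_IPAR_CACHE_LICENSE MSK_OFF
END MOSEK
```

Leaving the parameter at its default (caching on) avoids repeated license checkouts when solving many problems in sequence.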
1.5 Other changes
The documentation has been improved.
1.6 Interfaces
- The AMPL interface has been augmented so it is possible to pass an initial (feasible) integer
  solution to the mixed-integer optimizer.

- The AMPL interface is now capable of reading the constraint and variable names if they are
  available.
1.7 Platform changes
- Mac OS X on the PowerPC platform is no longer supported.

- Solaris on the SPARC platform is no longer supported.

- Mac OS X is supported on Intel 64-bit x86, i.e. osx64x86.

- Support for MATLAB R2009b has been added.
Chapter 2
The MOSEK optimization tools
2.1 What is MOSEK
MOSEK is a software package for solving mathematical optimization problems.
The core of MOSEK consists of a number of optimizers that can solve various optimization
problems. The problem classes MOSEK is designed to solve are:
- Linear problems.

- Conic quadratic problems (also known as second-order cone optimization).

- General convex problems. In particular, MOSEK is well suited for:
  - Convex quadratic problems.
  - Convex quadratically constrained problems.
  - Geometric problems (posynomial case).

- Integer problems, i.e. problems where some of the variables are constrained to integer values.
These problem classes can be solved using an appropriate optimizer built into MOSEK:
Interior-point optimizer for all continuous problems.
Primal or dual simplex optimizer for linear problems.
Conic interior-point optimizer for conic quadratic problems.
Mixed-integer optimizer based on a branch and cut technology.
All the optimizers available in MOSEK are built for solving large-scale sparse problems and have
been extensively tuned for stability and performance.
2.1.1 Interfaces
There are several ways to interface with MOSEK:
Files:
MPS format: MOSEK reads the industry-standard MPS file format for specifying (mixed-
integer) linear optimization problems. Moreover, an MPS file can also be used to specify
quadratic, quadratically constrained, and conic optimization problems.
LP format: MOSEK can read and write the CPLEX LP format with some restrictions.
OPF format: MOSEK also has its own text-based format called OPF. The format is closely
related to the LP format but is much more robust in its specification.
APIs: MOSEK can also be invoked from various programming languages:
C/C++,
C# (plus other .NET languages),
Delphi,
Java and
Python.
Third party programs:
AMPL: MOSEK can easily be used from the modeling language AMPL [1], a high-level
modeling language that makes it possible to formulate optimization problems in a
language close to the original pen-and-paper model formulation.
MATLAB: When using the MOSEK optimization toolbox for Matlab the functionality of
MOSEK can easily be used within MATLAB.
2.2 How to use this manual
This manual consists of two parts each consisting of several chapters.
The first part consists of Chapters 4 to 14 and is a user's guide which provides a quick
introduction to the usage of MOSEK. The last part, consisting of Appendices A to I, is a reference
manual for the MOSEK command line tool, file formats, and parameters.
[1] See http://www.ampl.com for further information.
Chapter 3
Getting support and help
3.1 MOSEK documentation
For an overview of the available MOSEK documentation please see
mosek\6\help\index.html
in the distribution.
3.2 Additional reading
In this manual it is assumed that the reader is familiar with mathematics and in particular mathematical
optimization. An introduction to linear programming is found in books such as Linear
Programming by Chvatal [12] or Computer Solution of Linear Programs by Nazareth [18]. For more
theoretical aspects see e.g. Nonlinear Programming: Theory and Algorithms by Bazaraa, Shetty, and
Sherali [10]. Finally, the book Model Building in Mathematical Programming by Williams [22] provides
an excellent introduction to modeling issues in optimization.
Another useful resource is the Mathematical Programming Glossary, available at
http://glossary.computing.society.informs.org
Chapter 4
Using the MOSEK command line
tool
This chapter introduces the MOSEK command line tool, which allows the user to solve optimization
problems specified in a text file. The main reasons to use the command line tool are
to solve small problems by hand, and
as a debugging tool for large problems generated by other programs.
4.1 Getting started
The syntax for the mosek command line tool is
mosek [options] filename
[options] are options which modify the behavior of MOSEK, such as whether the optimization
problem is minimized or maximized. filename is the name of the file which contains the problem
data. E.g. the
mosek -min afiro.mps
command line tells MOSEK to read data from the afiro.mps file and to minimize the objective
function.
By default the solution to the optimization problem is stored in the files afiro.sol and afiro.bas.
The .sol and .bas files contain the interior and basic solution respectively. For problems with integer
variables the solution is written to a file with the extension .int.
For a complete list of command line parameters type
mosek -h
or see Appendix A.
4.2 Examples
The following examples demonstrate how to use the MOSEK command line tool.
4.2.1 Linear optimization
A linear optimization problem is a problem where a linear objective function is optimized subject to
linear constraints. An example of a linear optimization problem is
\[
\begin{array}{lrcrcl}
\mbox{minimize}   & -10 x_1 & - & 9 x_2, & & \\
\mbox{subject to} & \tfrac{7}{10} x_1 & + & x_2 & \leq & 630, \\
                  & \tfrac{1}{2} x_1 & + & \tfrac{5}{6} x_2 & \leq & 600, \\
                  & x_1 & + & \tfrac{2}{3} x_2 & \leq & 708, \\
                  & \tfrac{1}{10} x_1 & + & \tfrac{1}{4} x_2 & \leq & 135, \\
                  & & & x_1, x_2 & \geq & 0.
\end{array}
\tag{4.1}
\]
The solution of the example (4.1) using MOSEK consists of three steps:
Creating an input le describing the problem.
Optimizing the problem using MOSEK.
Viewing the solution reports.
The input file for MOSEK is a plain text file containing a description of the problem, and it must
be in either the MPS, the LP, or the OPF format. Below we present the example encoded as an OPF
file:
[comment]
Example lo1.mps converted to OPF.
[/comment]
[hints]
# Give a hint about the size of the different elements in the problem.
# These need only be estimates, but in this case they are exact.
[hint NUMVAR] 2 [/hint]
[hint NUMCON] 4 [/hint]
[hint NUMANZ] 8 [/hint]
[/hints]
[variables]
# All variables that will appear in the problem
x1 x2
[/variables]
[objective minimize obj]
- 10 x1 - 9 x2
[/objective]
[constraints]
[con c1] 0.7 x1 + x2 <= 630 [/con]
[con c2] 0.5 x1 + 0.8333333333 x2 <= 600 [/con]
[con c3] x1 + 0.66666667 x2 <= 708 [/con]
[con c4] 0.1 x1 + 0.25 x2 <= 135 [/con]
[/constraints]
[bounds]
# By default all variables are free. The following line will
# change this to all variables being nonnegative.
[b] 0 <= * [/b]
[/bounds]
For details on the syntax of the OPF format please consult Appendix D.
After the input file has been created, the problem can be optimized. Assuming that the input file
has been given the name lo1.opf, the problem is optimized using the command line
mosek lo1.opf
Two solution report files lo1.sol and lo1.bas are generated, where the first file contains the interior
solution and the second file contains the basic solution. In this case the lo1.bas file has the format:
NAME : EXAMPLE
PROBLEM STATUS : PRIMAL_AND_DUAL_FEASIBLE
SOLUTION STATUS : OPTIMAL
OBJECTIVE NAME : obj
PRIMAL OBJECTIVE : -7.66799999e+003
DUAL OBJECTIVE : -7.66799999e+003
CONSTRAINTS
INDEX NAME AT ACTIVITY LOWER LIMIT UPPER LIMIT DUAL LOWER DUAL UPPER
1 c1 UL 6.30000000e+002 NONE 6.30000000e+002 0.00000000e+000 4.37499996e+000
2 c2 BS 4.80000000e+002 NONE 6.00000000e+002 0.00000000e+000 0.00000000e+000
3 c3 UL 7.08000000e+002 NONE 7.08000000e+002 0.00000000e+000 6.93750003e+000
4 c4 BS 1.17000000e+002 NONE 1.35000000e+002 0.00000000e+000 0.00000000e+000
VARIABLES
INDEX NAME AT ACTIVITY LOWER LIMIT UPPER LIMIT DUAL LOWER DUAL UPPER
1 x1 BS 5.39999998e+002 0.00000000e+000 NONE 0.00000000e+000 0.00000000e+000
2 x2 BS 2.52000001e+002 0.00000000e+000 NONE 0.00000000e+000 0.00000000e+000
The interpretation of the solution file should be obvious. E.g. the optimal values of x1 and x2 are
539.99 and 252.00 respectively. A detailed discussion of the solution file format is given in Appendix
F.
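As a sanity check, the reported basic solution can be verified against (4.1) directly. The sketch below is plain Python; the coefficients and ACTIVITY values are copied from the OPF file and the lo1.bas file above.

```python
# Verify that the basic solution reported in lo1.bas satisfies (4.1).
x1, x2 = 5.39999998e+002, 2.52000001e+002  # ACTIVITY values from lo1.bas

# Constraint rows (coefficients, upper limit) from the OPF file above.
rows = [((0.7, 1.0), 630.0),          # c1
        ((0.5, 5.0 / 6.0), 600.0),    # c2
        ((1.0, 2.0 / 3.0), 708.0),    # c3
        ((0.1, 0.25), 135.0)]         # c4

tol = 1e-6
assert x1 >= -tol and x2 >= -tol          # bounds: x1, x2 >= 0
for (a1, a2), ub in rows:                 # all rows are <= constraints
    assert a1 * x1 + a2 * x2 <= ub + tol

objective = -10 * x1 - 9 * x2
print(objective)
```

The printed value agrees with the PRIMAL OBJECTIVE entry, -7.668e+003, up to rounding.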
4.2.2 Quadratic optimization
An example of a quadratic optimization problem is
\[
\begin{array}{ll}
\mbox{minimize}   & x_1^2 + 0.1 x_2^2 + x_3^2 - x_1 x_3 - x_2 \\
\mbox{subject to} & 1 \leq x_1 + x_2 + x_3, \\
                  & x \geq 0.
\end{array}
\tag{4.2}
\]
The problem is a quadratic optimization problem because all the constraints are linear and the objective
can be stated in the form
\[
\frac{1}{2} x^T Q x + c^T x
\]
where in this particular case we have that
\[
Q = \left[ \begin{array}{rrr} 2 & 0 & -1 \\ 0 & 0.2 & 0 \\ -1 & 0 & 2 \end{array} \right]
\quad \mbox{and} \quad
c = \left[ \begin{array}{r} 0 \\ -1 \\ 0 \end{array} \right].
\tag{4.3}
\]
MOSEK assumes that Q is symmetric and positive semidefinite. If these assumptions are not
satisfied, MOSEK will most likely not compute a valid solution. Recall that a matrix is symmetric if it
satisfies the condition
\[
Q = Q^T
\]
and it is positive semidefinite if
\[
x^T Q x \geq 0 \quad \mbox{for all } x.
\]
An OPF le specifying the example can have the format:
[comment]
Example qo1.mps converted to OPF.
[/comment]
[hints]
[hint NUMVAR] 3 [/hint]
[hint NUMCON] 1 [/hint]
[hint NUMANZ] 3 [/hint]
[/hints]
[variables]
x1 x2 x3
[/variables]
[objective minimize obj]
# The quadratic terms are often multiplied by 1/2,
# but this is not required.
- x2 + 0.5 ( 2 x1 ^ 2 - 2 x3 * x1 + 0.2 x2 ^ 2 + 2 x3 ^ 2 )
[/objective]
[constraints]
[con c1] 1 <= x1 + x2 + x3 [/con]
[/constraints]
[bounds]
[b] 0 <= * [/b]
[/bounds]
Please note that the quadratic terms in the objective are stated very naturally in the OPF format as
follows:
- x2 + 0.5 ( 2 x1 ^ 2 - 2 x3 * x1 + 0.2 x2 ^ 2 + 2 x3 ^ 2 )
The example is solved using the command line
mosek qo1.opf
In this case only one solution file named qo1.sol is produced. A .bas file is only
produced for linear problems.
4.2.3 Conic optimization
Conic optimization is a generalization of linear optimization which allows the formulation of nonlinear
convex optimization problems.
The main idea in conic optimization is to include constraints of the form
\[
x_t \in \mathcal{C}
\]
in the optimization problem, where \( x_t \) consists of a subset of the variables and \( \mathcal{C} \) is a convex cone.
Recall that \( \mathcal{C} \) is a convex cone if and only if \( \mathcal{C} \) is a convex set and
\[
x \in \mathcal{C} \;\Rightarrow\; \lambda x \in \mathcal{C} \quad \mbox{for all } \lambda \geq 0.
\]
MOSEK cannot handle arbitrary conic constraints, only the two types
\[
\left\{ x \in \mathbb{R}^{n+1} : x_1 \geq \sqrt{\sum_{j=2}^{n+1} x_j^2} \right\}
\tag{4.4}
\]
and
\[
\left\{ x \in \mathbb{R}^{n+2} : 2 x_1 x_2 \geq \sum_{j=3}^{n+2} x_j^2, \; x_1, x_2 \geq 0 \right\}.
\tag{4.5}
\]
(4.4) is called a quadratic cone whereas (4.5) is called a rotated quadratic cone.
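For illustration, membership in the two cones can be tested mechanically. The helper functions below are a sketch, not part of any MOSEK API.

```python
import math

def in_quadratic_cone(x):
    # x in R^{n+1}: x_1 >= sqrt(x_2^2 + ... + x_{n+1}^2), as in (4.4)
    return x[0] >= math.sqrt(sum(v * v for v in x[1:]))

def in_rotated_quadratic_cone(x):
    # x in R^{n+2}: 2 x_1 x_2 >= x_3^2 + ... + x_{n+2}^2, x_1, x_2 >= 0,
    # as in (4.5)
    return (x[0] >= 0 and x[1] >= 0
            and 2 * x[0] * x[1] >= sum(v * v for v in x[2:]))

print(in_quadratic_cone([5.0, 3.0, 4.0]))           # 5 >= sqrt(9 + 16)
print(in_rotated_quadratic_cone([1.0, 0.5, 1.0]))   # 2 * 1 * 0.5 >= 1
```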
Consider the problem
\[
\begin{array}{ll}
\mbox{minimize}   & x_1 + 2 x_2 \\
\mbox{subject to} & \frac{1}{x_1} + \frac{2}{x_2} \leq 5, \\
                  & x \geq 0
\end{array}
\tag{4.6}
\]
which may not initially look like a conic optimization problem. It can however be reformulated as
\[
\begin{array}{rl}
\mbox{minimize}   & x_1 + 2 x_2 \\
\mbox{subject to} & 2 x_3 + 4 x_4 = 5, \\
                  & x_5^2 \leq 2 x_1 x_3, \\
                  & x_6^2 \leq 2 x_2 x_4, \\
                  & x_5 = 1, \\
                  & x_6 = 1, \\
                  & x \geq 0.
\end{array}
\tag{4.7}
\]
Problem (4.6) and problem (4.7) are equivalent because the constraints of (4.7) imply
\[
\frac{x_5^2}{x_1} = \frac{1}{x_1} \leq 2 x_3
\quad \mbox{and} \quad
\frac{x_6^2}{x_2} = \frac{1}{x_2} \leq 2 x_4
\]
and hence
\[
\frac{1}{x_1} + \frac{2}{x_2} \leq 2 x_3 + 4 x_4 = 5.
\]
The problem (4.7) is a conic quadratic optimization problem.
Using the MOSEK OPF format the problem can be represented as follows:
[comment]
Example cqo1.mps converted to OPF.
[/comment]
[hints]
[hint NUMVAR] 6 [/hint]
[hint NUMCON] 1 [/hint]
[hint NUMANZ] 2 [/hint]
[/hints]
[variables]
x1 x2 x3 x4 x5 x6
[/variables]
[objective minimize obj]
x1 + 2 x2
[/objective]
[constraints]
[con c1] 2 x3 + 4 x4 = 5 [/con]
[/constraints]
[bounds]
# We let all variables default to the positive orthant
[b] 0 <= * [/b]
# ... and change those that differ from the default.
[b] x5,x6 = 1 [/b]
# We define two rotated quadratic cones
# k1: 2 x1 * x3 >= x5^2
[cone rquad k1] x1, x3, x5 [/cone]
# k2: 2 x2 * x4 >= x6^2
[cone rquad k2] x2, x4, x6 [/cone]
[/bounds]
For details on the OPF format please consult Appendix D. Finally, the example can be solved using
the command line call
mosek cqo1.opf
and the solution can be studied by inspecting the cqo1.sol file.
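The equivalence between (4.6) and (4.7) can also be spot-checked numerically: any point satisfying the constraints of (4.7), including the two rotated quadratic cones k1 and k2 from the OPF file, also satisfies the nonlinear constraint of (4.6). A plain-Python sketch with a feasible point chosen by hand:

```python
# A point satisfying the constraints of (4.7), chosen by hand.
x1, x2, x3, x4, x5, x6 = 1.0, 1.0, 0.5, 1.0, 1.0, 1.0

tol = 1e-9
assert abs(2 * x3 + 4 * x4 - 5) < tol      # linear constraint
assert 2 * x1 * x3 >= x5 ** 2 - tol        # cone k1: x5^2 <= 2 x1 x3
assert 2 * x2 * x4 >= x6 ** 2 - tol        # cone k2: x6^2 <= 2 x2 x4

# ... then the same point satisfies the original constraint of (4.6):
assert 1 / x1 + 2 / x2 <= 5 + tol
print("feasible for both formulations")
```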
Format name   Standard   Read  Write  File type   File extension  Reference
OPF           No         Yes   Yes    ASCII/UTF8  opf             Appendix D
MPS           Yes        Yes   Yes    ASCII       mps             Appendix B
LP            Partially  Yes   Yes    ASCII       lp              Appendix C
OSiL (XML)    Yes        No    Yes    ASCII/UTF8  xml             Appendix E
Binary task   No         Yes   Yes    Binary      mbt

Table 4.1: Supported file formats.
4.3 Passing options to the command line tool
It is possible to modify the behavior of MOSEK by setting appropriate parameters. E.g assume that
a linear optimization problem should be solved with the primal simplex optimizer rather than the
default interior-point optimizer. This is done by setting the parameter MSK IPAR OPTIMIZER to the
value MSK OPTIMIZER PRIMAL SIMPLEX. To accomplish this append
-d MSK_IPAR_OPTIMIZER MSK_OPTIMIZER_PRIMAL_SIMPLEX.
to the command line. E.g the command
mosek -d MSK_IPAR_OPTIMIZER MSK_OPTIMIZER_PRIMAL_SIMPLEX lo1.opf
solves the problem specied by lo1.opf using the primal simplex optimizer. For further information
on the parameters available in MOSEK please see Appendix H.
4.4 Reading and writing problem data les
MOSEK reads and writes problem data files in the formats presented in Table 4.1. The columns of
Table 4.1 show:
The name of the format.
Whether the format is an industry standard format.
Whether the format can be read by MOSEK.
Whether the format can be written by MOSEK.
The generic file type of the format, i.e. ASCII, UTF8, or binary.
The file extension for the format.
The location of information about the format.
4.4.1 Reading compressed data les
MOSEK can read and write data files compressed with gzip [1].
For mosek to recognize a file as a gzip compressed file it must have the extension .gz. E.g. the
command
mosek myfile.mps.gz
will automatically decompress the file while reading it.
4.4.2 Converting from one format to another
It is possible to use MOSEK to convert a problem file from one format to another. For instance, assume
the MPS file myprob.mps should be converted to an LP file named myprob.lp. This can be achieved
with the command
mosek myprob.mps -out myprob.lp -x
Converting an MPS file to an LP file and back to an MPS file permutes the rows and columns of
the original problem; this has no influence on the problem, but variables and constraints may appear
in a different order.
4.5 Hot-start
Often a sequence of closely related optimization problems has to be solved. In such a case it can
be expected that a previous optimal solution can serve as a good starting point when the modified
problem is optimized.
Currently, only the simplex optimizer and the mixed-integer optimizer in MOSEK can exploit a
guess for the optimal solution. The simplex optimizer can exploit an arbitrary guess for the optimal
solution, whereas the mixed-integer optimizer requires a feasible integer solution. For both optimizers,
a good guess may reduce the solution time significantly.
4.5.1 An example
Assume that the example
\[
\begin{array}{lrcrcl}
\mbox{minimize}   & c_1 x_1 & - & 9 x_2, & & \\
\mbox{subject to} & \tfrac{7}{10} x_1 & + & x_2 & \leq & 630, \\
                  & \tfrac{1}{2} x_1 & + & \tfrac{5}{6} x_2 & \leq & 600, \\
                  & x_1 & + & \tfrac{2}{3} x_2 & \leq & 708, \\
                  & \tfrac{1}{10} x_1 & + & \tfrac{1}{4} x_2 & \leq & 135, \\
                  & & & x_1, x_2 & \geq & 0
\end{array}
\tag{4.8}
\]
should be solved for \( c_1 \) equal to −5 and −10. Clearly, a solution for one \( c_1 \) value will also be feasible
for the other value. Therefore, it might be worthwhile to exploit the previous optimal solution when
reoptimizing the problem.
[1] gzip is a public domain compression format. For further details about gzip consult http://www.gzip.org
Assume that two MPS files have been created, each corresponding to one of the \( c_1 \) values. Then the
commands [2]
mosek lo1.mps -baso .\lo1.bas
mosek lo1-b.mps -basi .\lo1.bas -baso .\lo1-b.bas
-d MSK_IPAR_OPTIMIZER MSK_OPTIMIZER_PRIMAL_SIMPLEX
demonstrate how to exploit the previous optimal solution in the second optimization.
In the first line MOSEK optimizes the first version of the optimization problem, where \( c_1 \) is equal
to −10. The -baso .\lo1.bas command line option makes sure that the optimal basic solution is
written to the file .\lo1.bas.
In the second line the second instance of the problem is optimized. The -basi .\lo1.bas command
line option forces MOSEK to read the previous optimal solution, which MOSEK will try to exploit
automatically. The -baso .\lo1-b.bas command line option makes sure that the optimal basic
solution is written to the .\lo1-b.bas file. Finally, the
-d MSK_IPAR_OPTIMIZER MSK_OPTIMIZER_PRIMAL_SIMPLEX
command line option makes sure that the primal simplex optimizer is used for the reoptimization. This
is important because the interior-point optimizer used by default does not exploit a previous optimal
solution.
4.6 Further information
Additional information about the MOSEK command line tool is available in Appendix A.
4.7 Solution file filtering
The MOSEK solution files can be very space consuming for large problems. One way to cut down the
solution file size is to include only variables whose optimal value lies in a certain interesting range, e.g.
[0.01, 0.99]. This can be done by setting the MOSEK parameters
MSK_SPAR_SOL_FILTER_XX_LOW 0.01
MSK_SPAR_SOL_FILTER_XX_UPR 0.99
For further details consult the parameters MSK_SPAR_SOL_FILTER_XC_LOW and MSK_SPAR_SOL_FILTER_XC_UPR.
[2] The second line should not be broken into two separate lines.
Chapter 5
MOSEK and AMPL
AMPL is a modeling language for specifying linear and nonlinear optimization models in a natural
way. AMPL also makes it easy to solve the problem and e.g. display the solution or part of it.
We will not discuss the specifics of the AMPL language here but instead refer the reader to [13]
and the AMPL website http://www.ampl.com.
AMPL cannot solve optimization problems by itself but requires a link to an appropriate optimizer
such as MOSEK. The MOSEK distribution includes an AMPL link which makes it possible to use
MOSEK as an optimizer within AMPL.
5.1 Invoking the AMPL shell
The MOSEK distribution by default comes with the AMPL shell installed. To invoke the AMPL shell
type:
mampl
5.2 Applicability
It is possible to specify problems in AMPL that cannot be solved by MOSEK. The optimization
problem must be a smooth convex optimization problem as discussed in Section 9.4.
5.3 An example
In many instances, you can successfully apply MOSEK simply by specifying the model and data,
setting the solver option to MOSEK, and typing solve. First, invoke the AMPL shell by typing:
mampl
When the AMPL shell has started, type the commands:
ampl: model diet.mod;
ampl: data diet.dat;
Value   Message
0       The solution is optimal.
100     Suboptimal primal solution.
101     Superoptimal (dual feasible) solution.
150     The solution is near optimal.
200     Primal infeasible problem.
300     Dual infeasible problem.
400     Too many iterations.
500     Solution status is unknown.
501     Ill-posed problem, solution status is unknown.
> 501   The value minus 501 is a MOSEK response code.
        See Appendix I.40 for all MOSEK response codes.

Table 5.1: Interpretation of solve_result_num.
ampl: option solver mosek;
ampl: solve;
The resulting output is:
MOSEK finished.
Problem status - PRIMAL_AND_DUAL_FEASIBLE
Solution status - OPTIMAL
Primal objective - 14.8557377
Dual objective - 14.8557377
Objective = Total_Cost
5.4 Determining the outcome of an optimization
The AMPL parameter solve_result_num is used to indicate the outcome of the optimization process.
It is used as follows:
ampl: display solve_result_num
Please refer to Table 5.1 for possible values of this parameter.
5.5 Optimizer options
5.5.1 The MOSEK parameter database
The MOSEK optimizer has options and parameters controlling such things as the termination criterion
and which optimizer is used. These parameters can be modified within AMPL as shown in the example
below:
ampl: model diet.mod;
ampl: data diet.dat;
ampl: option solver mosek;
ampl: option mosek_options
ampl? msk_ipar_optimizer = msk_optimizer_primal_simplex \
ampl? msk_ipar_sim_max_iterations = 100000;
ampl: solve;
In the example above a string called mosek_options is created which contains the parameter settings.
Each parameter setting has the format
parameter name = value
where parameter name can be any valid MOSEK parameter name. See Appendix H for a description
of all valid MOSEK parameters.
An alternative way of specifying the options is
ampl: option mosek_options
ampl? msk_ipar_optimizer = msk_optimizer_primal_simplex
ampl? msk_ipar_sim_max_iterations = 100000;
New options can also be appended to an existing option string as shown below
ampl: option mosek_options $mosek_options
ampl? msk_ipar_sim_print_freq = 0 msk_ipar_sim_max_iterations = 1000;
The expression $mosek_options expands to the current value of the option. The second line in the example
appends the additional settings msk_ipar_sim_print_freq and msk_ipar_sim_max_iterations to the option string.
5.5.2 Options
5.5.2.1 outlev
MOSEK also recognizes the outlev option, which controls the amount of printed output: 0 means no
printed output and a higher value means more printed output. An example of setting outlev is as
follows:
ampl: option mosek_options outlev=2;
5.5.2.2 wantsol
MOSEK also recognizes the option wantsol. We refer the reader to the AMPL manual [13] for details about
this option.
5.6 Constraint and variable names
AMPL assigns meaningful names to all the constraints and variables. Since MOSEK uses item names
in error and log messages, it may be useful to pass the AMPL names to MOSEK. Using the command
ampl: option mosek_auxfiles rc;
before the
solve;
command makes MOSEK obtain the constraint and variable names automatically.
5.7 Which solution is returned to AMPL
The MOSEK optimizer can produce three types of solutions: basic, integer, and interior point solutions.
For nonlinear problems only an interior solution is available. For linear optimization problems optimized
by the interior-point optimizer with basis identification turned on, both a basic and an interior
point solution are calculated. The simplex algorithm produces only a basic solution. Whenever both
an interior and a basic solution are available, the basic solution is returned. For problems containing
integer variables, the integer solution is returned to AMPL.
5.8 Hot-start
Frequently, a sequence of optimization problems is solved where each problem differs only slightly from
the previous problem. In that case it may be advantageous to use the previous optimal solution to
hot-start the optimizer. Such a facility is available in MOSEK only when the simplex optimizer is
used.
The hot-start facility exploits the AMPL variable suffix sstatus to communicate the optimal basis
back to AMPL, and AMPL uses this facility to communicate an initial basis to MOSEK. The following
example demonstrates this feature.
ampl: model diet.mod;
ampl: data diet.dat;
ampl: option solver mosek;
ampl: option mosek_options
ampl? msk_ipar_optimizer = msk_optimizer_primal_simplex outlev=2;
ampl: solve;
ampl: display Buy.sstatus;
ampl: solve;
The resulting output is:
Accepted: msk_ipar_optimizer = MSK_OPTIMIZER_PRIMAL_SIMPLEX
Accepted: outlev = 2
Computer - Platform : Linux/64-X86
Computer - CPU type : Intel-P4
MOSEK - task name :
MOSEK - objective sense : min
MOSEK - problem type : LO (linear optimization problem)
MOSEK - constraints : 7 variables : 9
MOSEK - integer variables : 0
Optimizer started.
Simplex optimizer started.
Presolve started.
Linear dependency checker started.
Linear dependency checker terminated.
Presolve - Stk. size (kb) : 0
Eliminator - tries : 0 time : 0.00
Eliminator - elims : 0
Lin. dep. - tries : 1 time : 0.00
Lin. dep. - number : 0
Presolve terminated. Time: 0.00
Primal simplex optimizer started.
Primal simplex optimizer setup started.
Primal simplex optimizer setup terminated.
Optimizer - solved problem : the primal
Optimizer - constraints : 7 variables : 9
Optimizer - hotstart : no
ITER DEGITER(%) PFEAS DFEAS POBJ DOBJ TIME TOTTIME
0 0.00 1.40e+03 NA 1.2586666667e+01 NA 0.00 0.01
3 0.00 0.00e+00 NA 1.4855737705e+01 NA 0.00 0.01
Primal simplex optimizer terminated.
Simplex optimizer terminated. Time: 0.00.
Optimizer terminated. Time: 0.01
Return code - 0 [MSK_RES_OK]
MOSEK finished.
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal objective : 14.8557377
Dual objective : 14.8557377
Objective = Total_Cost
Buy.sstatus [*] :=
Quarter Pounder w/ Cheese bas
McLean Deluxe w/ Cheese low
Big Mac low
Filet-O-Fish low
McGrilled Chicken low
Fries, small bas
Sausage McMuffin low
1% Lowfat Milk bas
Orange Juice low
;
Accepted: msk_ipar_optimizer = MSK_OPTIMIZER_PRIMAL_SIMPLEX
Accepted: outlev = 2
Basic solution
Problem status : UNKNOWN
Solution status : UNKNOWN
Primal - objective: 1.4855737705e+01 eq. infeas.: 3.97e+03 max bound infeas.: 2.00e+03
Dual - objective: 0.0000000000e+00 eq. infeas.: 7.14e-01 max bound infeas.: 0.00e+00
Computer - Platform : Linux/64-X86
Computer - CPU type : Intel-P4
MOSEK - task name :
MOSEK - objective sense : min
MOSEK - problem type : LO (linear optimization problem)
MOSEK - constraints : 7 variables : 9
MOSEK - integer variables : 0
Optimizer started.
Simplex optimizer started.
Presolve started.
Presolve - Stk. size (kb) : 0
Eliminator - tries : 0 time : 0.00
Eliminator - elims : 0
Lin. dep. - tries : 0 time : 0.00
Lin. dep. - number : 0
Presolve terminated. Time: 0.00
Primal simplex optimizer started.
Primal simplex optimizer setup started.
Primal simplex optimizer setup terminated.
Optimizer - solved problem : the primal
Optimizer - constraints : 7 variables : 9
Optimizer - hotstart : yes
Optimizer - Num. basic : 7 Basis rank : 7
Optimizer - Valid bas. fac. : no
ITER DEGITER(%) PFEAS DFEAS POBJ DOBJ TIME TOTTIME
0 0.00 0.00e+00 NA 1.4855737705e+01 NA 0.00 0.01
0 0.00 0.00e+00 NA 1.4855737705e+01 NA 0.00 0.01
Primal simplex optimizer terminated.
Simplex optimizer terminated. Time: 0.00.
Optimizer terminated. Time: 0.01
Return code - 0 [MSK_RES_OK]
MOSEK finished.
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal objective : 14.8557377
Dual objective : 14.8557377
Objective = Total_Cost
Please note that the second solve takes fewer iterations since the previous optimal basis is reused.
5.9 Sensitivity analysis
MOSEK can calculate sensitivity information for the objective and constraints. To enable sensitivity
information, set the option:
sensitivity = 1
Results are returned in variable/constraint suffixes as follows:
.down Smallest value of the objective coefficient/right-hand side before the optimal basis changes.
.up Largest value of the objective coefficient/right-hand side before the optimal basis changes.
.current Current value of the objective coefficient/right-hand side.
For ranged constraints sensitivity information is returned only for the lower bound.
The example below returns sensitivity information on the diet model.
ampl: model diet.mod;
ampl: data diet.dat;
ampl: option solver mosek;
ampl: option mosek_options sensitivity=1;
ampl: solve;
#display sensitivity information and current solution.
ampl: display _var.down,_var.current,_var.up,_var;
#display sensitivity information and optimal dual values.
ampl: display _con.down,_con.current,_con.up,_con;
The resulting output is:
Return code - 0 [MSK_RES_OK]
MOSEK finished.
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal objective : 14.8557377
Dual objective : 14.8557377
suffix up OUT;
suffix down OUT;
suffix current OUT;
Objective = Total_Cost
: _var.down _var.current _var.up _var :=
1 1.37385 1.84 1.86075 4.38525
2 1.8677 2.19 Infinity 0
3 1.82085 1.84 Infinity 0
4 1.35466 1.44 Infinity 0
5 1.57633 2.29 Infinity 0
6 0.094 0.77 0.794851 6.14754
7 1.22759 1.29 Infinity 0
8 0.57559 0.6 0.910769 3.42213
9 0.657279 0.72 Infinity 0
;
ampl: display _con.down,_con.current,_con.up,_con;
: _con.down _con.current _con.up _con :=
1 -Infinity 2000 3965.37 0
2 297.6 350 375 0.0277049
3 -Infinity 55 172.029 0
4 63.0531 100 195.388 0.0267541
5 -Infinity 100 132.213 0
6 -Infinity 100 234.221 0
7 17.6923 100 142.821 0.0248361
;
5.10 Using the command line version of the AMPL interface
AMPL can generate a data file containing the optimization problem and all relevant information,
which can then be read and solved by the MOSEK command line tool.
When the problem has been loaded into AMPL, the commands
ampl: option auxfiles rc;
ampl: write bprob;
will make AMPL write the appropriate data files, i.e.
prob.nl
prob.col
prob.row
Then the problem can be solved using the command line version of MOSEK as follows:
mosek prob.nl outlev=10 -a
The -a command line option indicates that MOSEK is invoked in AMPL mode. When MOSEK is
invoked in AMPL mode the normal MOSEK command line options should appear after the -a option,
except for the file name, which should be the first argument. As the above example demonstrates,
MOSEK accepts command line options as specified by the AMPL convention. Which command line
arguments MOSEK accepts in AMPL mode can be viewed by executing
mosek -= -a
For linear, quadratic and quadratically constrained problems a text file representation of the problem
can be obtained using one of the commands
mosek prob.nl -a -x -out prob.mps
mosek prob.nl -a -x -out prob.opf
mosek prob.nl -a -x -out prob.lp
Chapter 6
MOSEK and GAMS
It is possible to call MOSEK from the GAMS modeling language. In order to do so, a special
GAMS/MOSEK link must be obtained from the GAMS Corporation.
Chapter 7
MOSEK and MATLAB
The MOSEK optimization toolbox for MATLAB is an easy-to-use interface to MOSEK that makes it
possible to use MOSEK from within MATLAB.
The optimization toolbox is included in the MOSEK optimization tools distribution. See the
separate documentation for the MATLAB toolbox for details.
Chapter 8
Interfaces to MOSEK
8.1 The optimizer API
The MOSEK optimizer API is an efficient interface to the optimizers implemented in MOSEK. E.g. the
interface makes it possible to call the linear optimizer from a C++ or Java program. The optimizer
API is available for the languages
C/C++/Delphi.
Java.
.NET (Visual Basic, C#, Managed C++, etc).
Python.
Further details about the optimizer APIs are available at
mosek\6\help\index.html
or online at
http://www.mosek.com/documentation/
Chapter 9
Modelling
In this chapter we will discuss the following issues:
The formal definitions of the problem types that MOSEK can solve.
The solution information produced by MOSEK.
The information produced by MOSEK if the problem is infeasible.
A set of examples showing different ways of formulating commonly occurring problems so that
they can be solved by MOSEK.
Recommendations for formulating optimization problems.
9.1 Linear optimization
A linear optimization problem can be written as
\[
\begin{array}{lccl}
\mbox{minimize}   & & c^T x + c^f & \\
\mbox{subject to} & l^c \leq & Ax & \leq u^c, \\
                  & l^x \leq & x  & \leq u^x,
\end{array}
\tag{9.1}
\]
where
m is the number of constraints.
n is the number of decision variables.
\( x \in \mathbb{R}^n \) is a vector of decision variables.
\( c \in \mathbb{R}^n \) is the linear part of the objective function.
\( A \in \mathbb{R}^{m \times n} \) is the constraint matrix.
\( l^c \in \mathbb{R}^m \) is the lower limit on the activity for the constraints.
\( u^c \in \mathbb{R}^m \) is the upper limit on the activity for the constraints.
\( l^x \in \mathbb{R}^n \) is the lower limit on the activity for the variables.
\( u^x \in \mathbb{R}^n \) is the upper limit on the activity for the variables.
A primal solution x is (primal) feasible if it satisfies all constraints in (9.1). If (9.1) has at least
one primal feasible solution, then (9.1) is said to be (primal) feasible.
In case (9.1) does not have a feasible solution, the problem is said to be (primal) infeasible.
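Checking primal feasibility in (9.1) is purely mechanical: compute the constraint activities Ax and compare them against the bounds. A small sketch in plain Python, with hypothetical problem data:

```python
INF = float("inf")

def is_primal_feasible(A, lc, uc, lx, ux, x, tol=1e-8):
    """Check l^c <= A x <= u^c and l^x <= x <= u^x for problem (9.1)."""
    activities = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    ok_con = all(l - tol <= a <= u + tol
                 for a, l, u in zip(activities, lc, uc))
    ok_var = all(l - tol <= xj <= u + tol
                 for xj, l, u in zip(x, lx, ux))
    return ok_con and ok_var

# Hypothetical data: one constraint 1 <= x1 + x2, and x >= 0.
A = [[1.0, 1.0]]
feasible = is_primal_feasible(A, [1.0], [INF], [0.0, 0.0], [INF, INF],
                              [0.5, 0.7])
print(feasible)
```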
9.1.1 Duality for linear optimization
Corresponding to the primal problem (9.1), there is a dual problem
\[
\begin{array}{ll}
\mbox{maximize}   & (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x + c^f \\
\mbox{subject to} & A^T y + s_l^x - s_u^x = c, \\
                  & -y + s_l^c - s_u^c = 0, \\
                  & s_l^c, s_u^c, s_l^x, s_u^x \geq 0.
\end{array}
\tag{9.2}
\]
If a bound in the primal problem is plus or minus infinity, the corresponding dual variable is fixed at
0, and we use the convention that the product of the bound value and the corresponding dual variable
is 0. E.g.
\[
l_j^x = -\infty \;\Rightarrow\; (s_l^x)_j = 0 \;\mbox{ and }\; l_j^x (s_l^x)_j = 0.
\]
This is equivalent to removing the variable \( (s_l^x)_j \) from the dual problem.
A solution
\[
(y, s_l^c, s_u^c, s_l^x, s_u^x)
\]
to the dual problem is feasible if it satisfies all the constraints in (9.2). If (9.2) has at least one feasible
solution, then (9.2) is (dual) feasible, otherwise the problem is (dual) infeasible.
We will denote a solution
\[
(x, y, s_l^c, s_u^c, s_l^x, s_u^x)
\]
so that x is a solution to the primal problem (9.1), and
\[
(y, s_l^c, s_u^c, s_l^x, s_u^x)
\]
is a solution to the corresponding dual problem (9.2). A solution which is both primal and dual feasible
is denoted a primal-dual feasible solution.
9.1.1.1 A primal-dual feasible solution
Let
\[
(x^*, y^*, (s_l^c)^*, (s_u^c)^*, (s_l^x)^*, (s_u^x)^*)
\]
be a primal-dual feasible solution, and let
\[
(x^c)^* := A x^*.
\]
For a primal-dual feasible solution we define the optimality gap as the difference between the primal
and the dual objective value,
\[
\begin{array}{rl}
 & c^T x^* + c^f - \left( (l^c)^T (s_l^c)^* - (u^c)^T (s_u^c)^* + (l^x)^T (s_l^x)^* - (u^x)^T (s_u^x)^* + c^f \right) \\
=& \displaystyle \sum_{i=1}^{m} \left[ (s_l^c)_i^* \left( (x_i^c)^* - l_i^c \right) + (s_u^c)_i^* \left( u_i^c - (x_i^c)^* \right) \right]
 + \sum_{j=1}^{n} \left[ (s_l^x)_j^* \left( x_j^* - l_j^x \right) + (s_u^x)_j^* \left( u_j^x - x_j^* \right) \right] \\
\geq & 0,
\end{array}
\]
where the first relation can be obtained by multiplying the dual constraints (9.2) by \( x^* \) and \( (x^c)^* \) respectively,
and the second relation comes from the fact that each term in each sum is nonnegative. It
follows that the primal objective will always be greater than or equal to the dual objective.
We then define the duality gap as the difference between the primal objective value and the dual
objective value, i.e.

c^T x* + c^f − ((l^c)^T (s_l^c)* − (u^c)^T (s_u^c)* + (l^x)^T (s_l^x)* − (u^x)^T (s_u^x)* + c^f).

Please note that the duality gap will always be nonnegative.
9.1.1.2 An optimal solution
It is well-known that a linear optimization problem has an optimal solution if and only if there exist
feasible primal and dual solutions so that the duality gap is zero, or, equivalently, that the complementarity
conditions

(s_l^c)*_i ((x_i^c)* − l_i^c) = 0,  i = 1, ..., m,
(s_u^c)*_i (u_i^c − (x_i^c)*) = 0,  i = 1, ..., m,
(s_l^x)*_j (x_j* − l_j^x) = 0,  j = 1, ..., n,
(s_u^x)*_j (u_j^x − x_j*) = 0,  j = 1, ..., n

are satisfied.
If (9.1) has an optimal solution and MOSEK solves the problem successfully, both the primal and
dual solution are reported, including a status indicating the exact state of the solution.
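These optimality conditions are easy to check numerically for a concrete solution. The sketch below is illustrative pure Python, not MOSEK API code; the one-variable problem (minimize x subject to x ≥ 1) and its primal-dual solution are made up for the example:

```python
def duality_gap(c, cf, x, lx, slx):
    # Primal objective c^T x + c^f for a problem with lower bounds on x only;
    # dual objective (l^x)^T s_l^x + c^f (dual terms for infinite bounds are 0).
    primal = sum(ci * xi for ci, xi in zip(c, x)) + cf
    dual = sum(li * si for li, si in zip(lx, slx)) + cf
    return primal - dual

# minimize x subject to x >= 1: optimal primal x* = 1, dual (s_l^x)* = 1.
c, cf = [1.0], 0.0
x, lx, slx = [1.0], [1.0], [1.0]

gap = duality_gap(c, cf, x, lx, slx)
comp = slx[0] * (x[0] - lx[0])  # complementarity term (s_l^x)_j (x_j - l_j^x)

print(gap, comp)  # 0.0 0.0: zero duality gap, so the solution is optimal
```

A nonzero gap for a feasible pair would indicate that at least one of the two solutions is suboptimal.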
9.1.1.3 Primal infeasible problems
If the problem (9.1) is infeasible (has no feasible solution), MOSEK will report a certificate of primal
infeasibility: The dual solution reported is a certificate of infeasibility, and the primal solution is
undefined.
A certificate of primal infeasibility is a feasible solution to the modified dual problem

maximize    (l^c)^T s_l^c − (u^c)^T s_u^c + (l^x)^T s_l^x − (u^x)^T s_u^x
subject to  A^T y + s_l^x − s_u^x = 0,
            −y + s_l^c − s_u^c = 0,
            s_l^c, s_u^c, s_l^x, s_u^x ≥ 0.
(9.3)
so that the objective is strictly positive, i.e. a solution

(y*, (s_l^c)*, (s_u^c)*, (s_l^x)*, (s_u^x)*)

to (9.3) so that

(l^c)^T (s_l^c)* − (u^c)^T (s_u^c)* + (l^x)^T (s_l^x)* − (u^x)^T (s_u^x)* > 0.
Such a solution implies that (9.3) is unbounded, and that its dual is infeasible.
We note that the dual of (9.3) is a problem whose constraints are identical to the constraints of
the original primal problem (9.1): If the dual of (9.3) is infeasible, so is the original primal problem.
9.1.1.4 Dual infeasible problems
If the problem (9.2) is infeasible (has no feasible solution), MOSEK will report a certificate of dual
infeasibility: The primal solution reported is a certificate of infeasibility, and the dual solution is
undefined.
A certificate of dual infeasibility is a feasible solution to the problem

minimize    c^T x
subject to  Ax − x^c = 0,
            l̂^c ≤ x^c ≤ û^c,
            l̂^x ≤ x ≤ û^x
(9.4)

where

l̂_i^c := 0 if l_i^c > −∞, and l̂_i^c := −∞ otherwise; û_i^c := 0 if u_i^c < ∞, and û_i^c := ∞ otherwise

and

l̂_j^x := 0 if l_j^x > −∞, and l̂_j^x := −∞ otherwise; û_j^x := 0 if u_j^x < ∞, and û_j^x := ∞ otherwise

so that the objective value c^T x is negative. Such a solution implies that (9.4) is unbounded, and that
the dual of (9.4) is infeasible.
We note that the dual of (9.4) is a problem whose constraints are identical to the constraints of
the original dual problem (9.2): If the dual of (9.4) is infeasible, so is the original dual problem.
9.1.2 Primal and dual infeasible case
In case both the primal problem (9.1) and the dual problem (9.2) are infeasible, MOSEK will
report only one of the two possible certificates. Which one is reported is not defined (MOSEK returns
the first certificate found).
9.2 Quadratic and quadratically constrained optimization
A convex quadratic optimization problem is an optimization problem of the form
minimize    (1/2) x^T Q^o x + c^T x + c^f
subject to  l_k^c ≤ (1/2) x^T Q^k x + Σ_{j=0}^{n−1} a_{k,j} x_j ≤ u_k^c,  k = 0, ..., m−1,
            l_j^x ≤ x_j ≤ u_j^x,  j = 0, ..., n−1,
(9.5)
where the convexity requirement implies that

Q^o is a symmetric positive semi-definite matrix.

If l_k^c = −∞, then Q^k is a symmetric positive semi-definite matrix.
If u_k^c = ∞, then Q^k is a symmetric negative semi-definite matrix.

If l_k^c > −∞ and u_k^c < ∞, then Q^k is a zero matrix.
The convexity requirement is very important and it is strongly recommended that MOSEK is
applied to convex problems only.
9.2.1 A general recommendation
Any convex quadratic optimization problem can be reformulated as a conic optimization problem.
It is our experience that for the majority of practical applications it is better to cast them as conic
problems because

the resulting problem is convex by construction, and

the conic optimizer is more efficient than the optimizer for general quadratic problems.
See Section 9.3.3.1 for further details.
9.2.2 Reformulating as a separable quadratic problem
The simplest quadratic optimization problem is
minimize    (1/2) x^T Q x + c^T x
subject to  Ax = b,
            x ≥ 0.
(9.6)
The problem (9.6) is said to be a separable problem if Q is a diagonal matrix or, in other words, if the
quadratic terms in the objective all have the form x_j^2 instead of the form x_j x_i.
The separable form has the following advantages:
It is very easy to check the convexity assumption, and
the simpler structure in a separable problem usually makes it easier to solve.
It is well-known that a positive semi-definite matrix Q can always be factorized, i.e. a matrix F
exists so that

Q = F^T F. (9.7)
In many practical applications of quadratic optimization F is known explicitly; e.g. if Q is a covariance
matrix, F is the set of observations producing it.
Using (9.7), the problem (9.6) can be reformulated as
minimize    (1/2) y^T I y + c^T x
subject to  Ax = b,
            Fx − y = 0,
            x ≥ 0.
(9.8)
The problem (9.8) is also a quadratic optimization problem and has more constraints and variables
than (9.6). However, the problem is separable. Normally, if F has fewer rows than columns, it is
worthwhile to reformulate as a separable problem. Indeed consider the extreme case where F has one
dense row and hence Q will be a dense matrix.
The idea presented above is applicable to quadratic constraints too. Now, consider the constraint
(1/2) x^T (F^T F) x ≤ b (9.9)

where F is a matrix and b is a scalar. (9.9) can be reformulated as

(1/2) y^T I y ≤ b,
Fx − y = 0.
It should be obvious how to generalize this idea to make any convex quadratic problem separable.
Next, consider the constraint
(1/2) x^T (D + F^T F) x ≤ b

where D is a positive semi-definite matrix, F is a matrix, and b is a scalar. We assume that D has a
simple structure, e.g. that D is a diagonal or a block diagonal matrix. If this is the case, it may be
worthwhile performing the reformulation

(1/2)(x^T D x + y^T I y) ≤ b,
Fx − y = 0.
Now, the question may arise: When should a quadratic problem be reformulated to make it sepa-
rable or near separable? The simplest rule of thumb is that it should be reformulated if the number
of non-zeros used to represent the problem decreases when reformulating the problem.
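The non-zero counting rule can be made concrete. The following pure-Python sketch (the data are invented: F has a single dense row, the extreme case mentioned above) compares the two representations:

```python
def nnz(rows):
    """Count structural non-zeros of a dense list-of-lists matrix."""
    return sum(1 for row in rows for v in row if v != 0)

n = 10
F = [[1.0] * n]  # one dense row, so Q = F^T F is completely dense
Q = [[sum(Fk[i] * Fk[j] for Fk in F) for j in range(n)] for i in range(n)]

direct = nnz(Q)              # non-zeros needed to store Q directly
separable = nnz(F) + len(F)  # F plus the identity part of F x - y = 0

print(direct, separable)  # 100 11: the separable form is much sparser
```

With many dense rows in F the comparison can go the other way, which is exactly what the rule of thumb captures.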
9.3 Conic optimization
Conic optimization can be seen as a generalization of linear optimization. Indeed a conic optimization
problem is a linear optimization problem plus a constraint of the form

x ∈ C

where C is a convex cone. A complete conic problem has the form
minimize    c^T x + c^f
subject to  l^c ≤ Ax ≤ u^c,
            l^x ≤ x ≤ u^x,
            x ∈ C.
(9.10)
The cone C can be a Cartesian product of p convex cones, i.e.

C = C^1 × ··· × C^p

in which case x ∈ C can be written as

x = (x^1, ..., x^p),  x^1 ∈ C^1, ..., x^p ∈ C^p

where each x^t ∈ R^{n_t}. Please note that the n-dimensional Euclidean space R^n is a cone itself, so simple
linear variables are still allowed.
MOSEK supports only a limited number of cones, specifically

C = C^1 × ··· × C^p

where each C^t has one of the following forms.

R set:

C^t = { x ∈ R^{n_t} }.

Quadratic cone:

C^t = { x ∈ R^{n_t} : x_1 ≥ √( Σ_{j=2}^{n_t} x_j^2 ) }.

Rotated quadratic cone:

C^t = { x ∈ R^{n_t} : 2 x_1 x_2 ≥ Σ_{j=3}^{n_t} x_j^2,  x_1, x_2 ≥ 0 }.
Although these cones may seem to provide only limited expressive power they can be used to model
a large range of problems as demonstrated in Section 9.3.3.
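Membership in the two non-trivial cones is a one-line test, which is handy when debugging a conic model. A small pure-Python sketch (the sample points are made up):

```python
import math

def in_quadratic_cone(x, tol=1e-12):
    # x_1 >= sqrt(x_2^2 + ... + x_n^2)
    return x[0] >= math.sqrt(sum(v * v for v in x[1:])) - tol

def in_rotated_quadratic_cone(x, tol=1e-12):
    # 2 x_1 x_2 >= x_3^2 + ... + x_n^2 with x_1, x_2 >= 0
    return (x[0] >= 0 and x[1] >= 0
            and 2 * x[0] * x[1] >= sum(v * v for v in x[2:]) - tol)

print(in_quadratic_cone([5.0, 3.0, 4.0]))          # True: 5 >= sqrt(9 + 16)
print(in_quadratic_cone([4.9, 3.0, 4.0]))          # False
print(in_rotated_quadratic_cone([1.0, 2.0, 2.0]))  # True: 2*1*2 >= 2^2
print(in_rotated_quadratic_cone([1.0, 1.0, 2.0]))  # False: 2 < 4
```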
9.3.1 Duality for conic optimization
The dual problem corresponding to the conic optimization problem (9.10) is given by
maximize    (l^c)^T s_l^c − (u^c)^T s_u^c + (l^x)^T s_l^x − (u^x)^T s_u^x + c^f
subject to  A^T y + s_l^x − s_u^x + s_n^x = c,
            −y + s_l^c − s_u^c = 0,
            s_l^c, s_u^c, s_l^x, s_u^x ≥ 0,
            s_n^x ∈ C*
(9.11)
where the dual cone

C* = (C^1)* × ··· × (C^p)*

where each (C^t)* is the dual cone of C^t. For the cone types MOSEK can handle, the relation between the
primal and dual cone is given as follows.
R set:

C^t = { x ∈ R^{n_t} }  ⇒  (C^t)* := { s ∈ R^{n_t} : s = 0 }.

Quadratic cone:

C^t := { x ∈ R^{n_t} : x_1 ≥ √( Σ_{j=2}^{n_t} x_j^2 ) }  ⇒  (C^t)* = C^t.

Rotated quadratic cone:

C^t := { x ∈ R^{n_t} : 2 x_1 x_2 ≥ Σ_{j=3}^{n_t} x_j^2,  x_1, x_2 ≥ 0 }  ⇒  (C^t)* = C^t.
Please note that the dual problem of the dual problem is identical to the original primal problem.
9.3.2 Infeasibility
In case MOSEK finds a problem to be infeasible it reports a certificate of the infeasibility. This works
exactly as for linear problems (see Sections 9.1.1.3 and 9.1.1.4).
9.3.3 Examples
This section contains several examples of inequalities and problems that can be cast as conic
optimization problems.
9.3.3.1 Quadratic objective and constraints
From Section 9.2.2 we know that any convex quadratic problem can be stated on the form

minimize    0.5 ‖Fx‖^2 + c^T x,
subject to  0.5 ‖Gx‖^2 + a^T x ≤ b,
(9.12)
where F and G are matrices and c and a are vectors. For simplicity we assume that there is only
one constraint, but it should be obvious how to generalize the methods to an arbitrary number of
constraints.
Problem (9.12) can be reformulated as

minimize    0.5 ‖t‖^2 + c^T x,
subject to  0.5 ‖z‖^2 + a^T x ≤ b,
            Fx − t = 0,
            Gx − z = 0
(9.13)

after the introduction of the new variables t and z. It is easy to convert this problem to a conic
quadratic optimization problem, i.e.

minimize    v + c^T x,
subject to  p + a^T x = b,
            Fx − t = 0,
            Gx − z = 0,
            w = 1,
            q = 1,
            ‖t‖^2 ≤ 2vw,  v, w ≥ 0,
            ‖z‖^2 ≤ 2pq,  p, q ≥ 0.
(9.14)
In this case we can model the last two inequalities using rotated quadratic cones.
If we assume that F is a non-singular matrix, e.g. a diagonal matrix, then

x = F^{−1} t

and hence we can eliminate x from the problem to obtain:

minimize    v + c^T F^{−1} t,
subject to  p + a^T F^{−1} t = b,
            G F^{−1} t − z = 0,
            w = 1,
            q = 1,
            ‖t‖^2 ≤ 2vw,  v, w ≥ 0,
            ‖z‖^2 ≤ 2pq,  p, q ≥ 0.
(9.15)
In most cases MOSEK performs this reduction automatically during the presolve phase before the
optimization is performed.
9.3.3.2 Minimizing a sum of norms
The next example is the problem of minimizing a sum of norms, i.e. the problem

minimize    Σ_{i=1}^k ‖x^i‖
subject to  Ax = b,
(9.16)

where

x := (x^1; ...; x^k).
This problem is equivalent to

minimize    Σ_{i=1}^k z_i
subject to  Ax = b,
            ‖x^i‖ ≤ z_i,  i = 1, ..., k,
(9.17)

which in turn is equivalent to

minimize    Σ_{i=1}^k z_i
subject to  Ax = b,
            (z_i, x^i) ∈ C^i,  i = 1, ..., k
(9.18)

where all C^i are of the quadratic type, i.e.

C^i := { (z_i, x^i) : z_i ≥ ‖x^i‖ }.
The dual problem corresponding to (9.18) is

maximize    b^T y
subject to  A^T y + s = c,
            t_i = 1,  i = 1, ..., k,
            (t_i, s^i) ∈ C^i,  i = 1, ..., k
(9.19)

where

s := (s^1; ...; s^k).
This problem is equivalent to

maximize    b^T y
subject to  A^T y + s = c,
            ‖s^i‖_2^2 ≤ 1,  i = 1, ..., k.
(9.20)
Please note that in this case the dual problem can be reduced to an ordinary convex quadratically
constrained optimization problem due to the special structure of the primal problem. In some cases
it turns out that it is much better to solve the dual problem (9.19) rather than the primal problem
(9.18).
9.3.3.3 Modelling polynomial terms using conic optimization
Generally an arbitrary polynomial term of the form

f x^g

cannot be represented with conic quadratic constraints; however, in the following we will demonstrate
some special cases where it is possible.
A particularly simple polynomial term is the reciprocal, i.e.

1/x.
Now, a constraint of the form

1/x ≤ y

where it is required that x > 0 is equivalent to

1 ≤ xy and x > 0

which in turn is equivalent to

z = √2,
z^2 ≤ 2xy.

The last formulation is a conic constraint plus a simple linear equality.
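The encoding can be sanity-checked numerically. In the pure-Python sketch below (values chosen for illustration), with z fixed to √2 the rotated-cone inequality z^2 ≤ 2xy holds exactly when 1/x ≤ y for x > 0:

```python
import math

def conic_reciprocal_ok(x, y, tol=1e-12):
    """Check z = sqrt(2), z^2 <= 2 x y, the conic encoding of 1/x <= y (x > 0)."""
    z = math.sqrt(2.0)
    return z * z <= 2.0 * x * y + tol

print(conic_reciprocal_ok(0.5, 2.0))  # True:  1/0.5 = 2.0 <= 2.0
print(conic_reciprocal_ok(0.5, 1.9))  # False: 1/0.5 = 2.0 >  1.9
```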
E.g., consider the problem

minimize    c^T x
subject to  Σ_{j=1}^n f_j / x_j ≤ b,
            x ≥ 0,

where it is assumed that f_j > 0 and b > 0. This problem is equivalent to

minimize    c^T x
subject to  Σ_{j=1}^n f_j z_j = b,
            v_j = √2,  j = 1, ..., n,
            v_j^2 ≤ 2 z_j x_j,  j = 1, ..., n,
            x, z ≥ 0,
(9.21)

because

v_j^2 = 2 ≤ 2 z_j x_j

implies that

1/x_j ≤ z_j

and hence

Σ_{j=1}^n f_j / x_j ≤ Σ_{j=1}^n f_j z_j = b.
The problem (9.21) is a conic quadratic optimization problem having n 3-dimensional rotated
quadratic cones.
The next example is the constraint

√x ≥ |t|,
x ≥ 0,

where both t and x are variables. This set is identical to the set

t^2 ≤ 2xz,
z = 0.5,
x, z ≥ 0.
(9.22)
Occasionally, when modeling the market impact term in portfolio optimization, the polynomial
term x^{3/2} occurs. Therefore, consider the set defined by the inequalities

x^{1.5} ≤ t,
0 ≤ x.
(9.23)

We will exploit that

x^{1.5} = x^2/√x;

then we have the desired result since this implies that

x^{1.5} = x^2/√x ≤ x^2/(2s) ≤ t.
Please note that s can be chosen freely and that 2s = √x is a valid choice, i.e. the conditions

x^2 ≤ 2st,
(2s)^2 ≤ x,
s, t ≥ 0
(9.24)

imply x^{1.5} ≤ t. Let

x^2 ≤ 2st,
w^2 ≤ 2vr,
x = v,
s = w,
r = 1/8,
s, t, v, r ≥ 0,
(9.25)

then

s^2 = w^2 ≤ 2vr = v/4 = x/4.

Moreover,

x^2 ≤ 2st ≤ 2 √(x/4) t = √x t

leading to the conclusion that

x^{1.5} ≤ t.

(9.25) is a conic reformulation which is equivalent to (9.23). Please note that the x ≥ 0 constraint
does not appear explicitly in (9.24) and (9.25), but implicitly since x = v ≥ 0.
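The chain of inequalities can be verified numerically. In the pure-Python sketch below (the test values are arbitrary) the slack s is set to √x/2, the choice for which the cone constraints are tight, and the implied lower bound on t equals x^1.5:

```python
import math

def conic_t_bound(x):
    """Smallest t permitted by (9.25) when s = sqrt(x)/2 is chosen."""
    s = math.sqrt(x) / 2.0
    # w = s, v = x, r = 1/8: the constraint w^2 <= 2 v r holds (with equality).
    assert s * s <= 2.0 * x * (1.0 / 8.0) + 1e-12
    # x^2 <= 2 s t forces t >= x^2 / (2 s) = x^1.5.
    return x * x / (2.0 * s)

for x in (0.25, 1.0, 4.0):
    print(conic_t_bound(x), x ** 1.5)  # the two columns agree
```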
Finally, it should be mentioned that any polynomial term of the form x^g where g is a positive
rational number can be represented using conic quadratic constraints [2, pp. 12-13].
9.3.3.4 Further reading
If you want to learn more about what can be modeled as a conic optimization problem we recommend
the references [2, 11, 16].
9.3.4 Potential pitfalls in conic optimization
While a linear optimization problem either has a bounded optimal solution or is infeasible, the conic
case is not as simple as that.
9.3.4.1 Non-attainment in the primal problem
Consider the example

minimize    z
subject to  2yz ≥ x^2,
            x = √2,
            y, z ≥ 0,
(9.26)

which corresponds to the problem

minimize    1/y
subject to  y ≥ 0.
(9.27)

Clearly, the optimal objective value is zero but it is never attained because implicitly we assume that
the optimal y is finite.
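The non-attainment is easy to see numerically; feasible objective values of (9.26) approach but never reach zero (the sample y values below are arbitrary):

```python
# With x = sqrt(2) fixed, 2 y z >= x^2 = 2 reduces to z >= 1/y,
# so the best objective value for a given y is z = 1/y.
objectives = [1.0 / y for y in (1.0, 10.0, 100.0, 1000.0)]

print(objectives)           # strictly decreasing toward zero
print(min(objectives) > 0)  # True: zero is the infimum but is never attained
```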
9.3.4.2 Non-attainment in the dual problem
Next, consider the example

minimize    x_4
subject to  x_3 + x_4 = 1,
            x_1 = 0,
            x_2 = 1,
            2 x_1 x_2 ≥ x_3^2,
            x_1, x_2 ≥ 0,
(9.28)

which has the optimal solution

x_1 = 0, x_2 = 1, x_3 = 0 and x_4 = 1

implying that the optimal primal objective value is 1.
Now, the dual problem corresponding to (9.28) is

maximize    y_1 + y_3
subject to  y_2 + s_1 = 0,
            y_3 + s_2 = 0,
            y_1 + s_3 = 0,
            y_1 = 1,
            2 s_1 s_2 ≥ s_3^2,
            s_1, s_2 ≥ 0.
(9.29)
Therefore,

y_1 = 1

and

s_3 = −1.

This implies that

2 s_1 s_2 ≥ (s_3)^2 = 1

and hence s_2 > 0. Given this fact we can conclude that

y_1 + y_3 = 1 − s_2 < 1

implying that the optimal dual objective value is 1; however, this is never attained. Hence, no primal-dual
bounded optimal solution with zero duality gap exists. Of course it is possible to find a primal-dual
feasible solution such that the duality gap is close to zero, but then s_1 will be similarly large. This is
likely to make the problem (9.28) hard to solve.
An inspection of the problem (9.28) reveals the constraint x_1 = 0, which implies that x_3 = 0. If
we either add the redundant constraint

x_3 = 0

to the problem (9.28) or eliminate x_1 and x_3 from the problem it becomes easy to solve.
9.4 Nonlinear convex optimization
MOSEK is capable of solving smooth (twice differentiable) convex nonlinear optimization problems of
the form

minimize    f(x) + c^T x
subject to  g(x) + Ax − x^c = 0,
            l^c ≤ x^c ≤ u^c,
            l^x ≤ x ≤ u^x,
(9.30)
where

m is the number of constraints.

n is the number of decision variables.

x ∈ R^n is a vector of decision variables.

x^c ∈ R^m is a vector of constraints or slack variables.

c ∈ R^n is the linear part of the objective function.

A ∈ R^{m×n} is the constraint matrix.

l^c ∈ R^m is the lower limit on the activity for the constraints.

u^c ∈ R^m is the upper limit on the activity for the constraints.

l^x ∈ R^n is the lower limit on the activity for the variables.

u^x ∈ R^n is the upper limit on the activity for the variables.

f : R^n → R is a nonlinear function.

g : R^n → R^m is a nonlinear vector function.
This means that the ith constraint has the form

l_i^c ≤ g_i(x) + Σ_{j=1}^n a_{i,j} x_j ≤ u_i^c

when the variable x_i^c has been eliminated.
The linear term Ax is not included in g(x) since it can be handled much more efficiently as a
separate entity when optimizing.
The nonlinear functions f and g must be smooth in all x ∈ [l^x; u^x]. Moreover, f(x) must be a
convex function and g_i(x) must satisfy

l_i^c = −∞  ⇒  g_i(x) is convex,
u_i^c = ∞   ⇒  g_i(x) is concave,
−∞ < l_i^c ≤ u_i^c < ∞  ⇒  g_i(x) = 0.
9.4.1 Duality
So far, we have not discussed what happens when MOSEK is used to solve a primal or dual infeasible
problem. In the following section these issues are addressed.
Similar to the linear case, MOSEK reports dual information in the general nonlinear case. Indeed
in this case the Lagrange function is defined by

L(x^c, x, y, s_l^c, s_u^c, s_l^x, s_u^x) := f(x) + c^T x + c^f − y^T (Ax + g(x) − x^c)
    − (s_l^c)^T (x^c − l^c) − (s_u^c)^T (u^c − x^c)
    − (s_l^x)^T (x − l^x) − (s_u^x)^T (u^x − x)
and the dual problem is given by

maximize    L(x^c, x, y, s_l^c, s_u^c, s_l^x, s_u^x)
subject to  ∇_{(x^c,x)} L(x^c, x, y, s_l^c, s_u^c, s_l^x, s_u^x) = 0,
            s_l^c, s_u^c, s_l^x, s_u^x ≥ 0,
which is equivalent to

maximize    f(x) − y^T g(x) − x^T (∇f(x)^T − ∇g(x)^T y)
            + ((l^c)^T s_l^c − (u^c)^T s_u^c + (l^x)^T s_l^x − (u^x)^T s_u^x) + c^f
subject to  −∇f(x)^T + A^T y + ∇g(x)^T y + s_l^x − s_u^x = c,
            −y + s_l^c − s_u^c = 0,
            s_l^c, s_u^c, s_l^x, s_u^x ≥ 0.
(9.31)
9.5 Recommendations
Often an optimization problem can be formulated in several different ways, and the exact formulation
used may have a significant impact on the solution time and the quality of the solution. In some cases
the difference between a good and a bad formulation determines whether the problem can be solved
or not.
Below is a list of several issues that you should be aware of when developing a good formulation.
1. Sparsity is very important. The constraint matrix A is assumed to be a sparse matrix, where
sparse means that it contains many zeros (typically less than 10% non-zeros). Normally, when
A is sparser, less memory is required to store the problem and it can be solved faster.
2. Avoid large bounds as these can introduce all sorts of numerical problems. Assume that a variable
x_j has the bounds

0.0 ≤ x_j ≤ 1.0e16.

The number 1.0e16 is large and it is very likely that the constraint x_j ≤ 1.0e16 is non-binding
at optimum, and therefore that the bound 1.0e16 will not cause problems. Unfortunately, this is
a naïve assumption because the bound 1.0e16 may actually affect the presolve, the scaling, the
computation of the dual objective value, etc. In this case the constraint x_j ≥ 0 is likely to be
sufficient, i.e. 1.0e16 is just a way of representing infinity.
3. Avoid large penalty terms in the objective, i.e. do not have large terms in the linear part of the
objective function. They will most likely cause numerical problems.
4. On a computer all computations are performed in finite precision, which implies that

1 = 1 + ε

where ε is about 10^{−16}. This means that the results of all computations are truncated,
therefore causing rounding errors. The upshot is that very small numbers and very large numbers
should be avoided, e.g. it is recommended that all elements in A either are zero or belong to the
interval [10^{−6}, 10^{6}]. The same holds for the bounds and the linear objective.
5. Decreasing the number of variables or constraints does not necessarily make it easier to solve
a problem. In certain cases, i.e. in nonlinear optimization, it may be a good idea to introduce
more constraints and variables if it makes the model separable. Furthermore, a big but sparse
problem may be advantageous compared to a smaller but denser problem.
6. Try to avoid linearly dependent rows among the linear constraints. Network flow problems
and multi-commodity network flow problems, for example, often contain one or more linearly
dependent rows.

7. Finally, it is recommended to consult some of the papers about preprocessing to get some ideas
about efficient formulations. See e.g. [3, 4, 14, 15].
9.5.1 Avoid near infeasible models
Consider the linear optimization problem

minimize
subject to  x + y ≤ 10^{−10} + ε,
            1.0e4 x + 2.0e4 y ≥ 10^{−6},
            x, y ≥ 0.
(9.32)

Clearly, the problem is feasible for ε = 0. However, for ε = −1.0e−10 the problem is infeasible.
This implies that an insignificant change in the right-hand side of the constraints makes the problem
status switch from feasible to infeasible. Such a model should be avoided.
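Assuming (9.32) reads x + y ≤ 10^{−10} + ε with 1.0e4 x + 2.0e4 y ≥ 10^{−6} and x, y ≥ 0 (an assumption of this sketch), the knife-edge behavior can be verified by hand in pure Python:

```python
def feasible(eps):
    # Assumed reading of (9.32): x + y <= 1e-10 + eps,
    # 1e4*x + 2e4*y >= 1e-6, x, y >= 0.
    # Given the budget x + y <= cap, the second constraint is easiest to
    # satisfy by putting everything into y (largest coefficient).
    cap = 1e-10 + eps
    if cap < 0:
        return False
    return 2.0e4 * cap >= 1.0e-6

print(feasible(0.0))       # True:  2e4 * 1e-10 = 2e-6 >= 1e-6
print(feasible(-1.0e-10))  # False: the feasible box collapses to the origin
```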
9.6 Examples continued
9.6.1 The absolute value
Assume that we have a constraint of the form

|f^T x + g| ≤ b (9.33)

where x ∈ R^n is a vector of variables, and f ∈ R^n and g, b ∈ R are constants.
It is easy to verify that the constraint (9.33) is equivalent to

−b ≤ f^T x + g ≤ b (9.34)

which is a set of ordinary linear inequality constraints.
Please note that equalities involving an absolute value such as

|x| = 1

cannot be formulated as a linear or even as a convex nonlinear optimization problem. It requires
integer constraints.
9.6.2 The Markowitz portfolio model
In this section we will show how to model several versions of the Markowitz portfolio model using conic
optimization.
The Markowitz portfolio model deals with the problem of selecting a portfolio of assets, i.e. stocks,
bonds, etc. The goal is to find a portfolio such that for a given return the risk is minimized. The
assumptions are:

A portfolio can consist of n traded assets numbered 1, 2, ..., n held over a period of time.

w_j^0 is the initial holding of asset j where Σ_j w_j^0 > 0.

r_j is the return on asset j and is assumed to be a random variable. r has a known mean r̄ and
covariance Σ.
The variable x_j denotes the amount of asset j traded in the given period of time and has the following
meaning:

If x_j > 0, then the amount of asset j is increased (by purchasing).

If x_j < 0, then the amount of asset j is decreased (by selling).
The model deals with two central quantities:

Expected return:

E[r^T (w^0 + x)] = r̄^T (w^0 + x).

Variance (risk):

V[r^T (w^0 + x)] = (w^0 + x)^T Σ (w^0 + x).
By definition Σ is positive semi-definite and

Std. dev. = ‖Σ^{1/2} (w^0 + x)‖ = ‖L^T (w^0 + x)‖

where L is any matrix such that

Σ = L L^T.

A low rank of Σ is advantageous from a computational point of view. A valid L can always be computed
as the Cholesky factorization of Σ.
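A Cholesky factor can be computed in a few lines of code. The sketch below is a minimal pure-Python implementation (the 2×2 covariance matrix is made up; real models would use a library routine such as LAPACK's potrf):

```python
import math

def cholesky(sigma):
    """Return lower-triangular L with sigma = L L^T (sigma must be SPD)."""
    n = len(sigma)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(sigma[i][i] - s)
            else:
                L[i][j] = (sigma[i][j] - s) / L[j][j]
    return L

sigma = [[4.0, 2.0], [2.0, 3.0]]  # a small made-up covariance matrix
L = cholesky(sigma)
rebuilt = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(L)        # [[2.0, 0.0], [1.0, sqrt(2)]]
print(rebuilt)  # recovers sigma
```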
9.6.2.1 Minimizing variance for a given return
In our first model we want to minimize the variance while selecting a portfolio with a specified expected
target return t. Additionally, the portfolio must satisfy the budget (self-financing) constraint asserting
that the total amount of assets sold must equal the total amount of assets purchased. This is expressed
in the model

minimize    V[r^T (w^0 + x)]
subject to  E[r^T (w^0 + x)] = t,
            e^T x = 0,
(9.35)

where e := (1, ..., 1)^T. Using the definitions above this may be formulated as a quadratic optimization
problem:

minimize    (w^0 + x)^T Σ (w^0 + x)
subject to  r̄^T (w^0 + x) = t,
            e^T x = 0.
(9.36)
9.6.2.2 Conic quadratic reformulation
An equivalent conic quadratic reformulation is given by:

minimize    f
subject to  Σ^{1/2} (w^0 + x) − g = 0,
            r̄^T (w^0 + x) = t,
            e^T x = 0,
            f ≥ ‖g‖.
(9.37)

Here we minimize the standard deviation instead of the variance. Please note that Σ^{1/2} can be replaced
by any matrix L^T where Σ = L L^T. A low rank L is computationally advantageous.
9.6.2.3 Transaction costs with market impact term
We will now expand our model to include transaction costs as a fraction of the traded volume. [1, pp.
445-475] argues that transaction costs can be modeled as follows

commission + bid–ask spread + α √(trade volume / daily volume), (9.38)

and that it is important to incorporate these into the model.
In the following we deal with the last of these terms, denoted the market impact term. If you sell
(buy) a lot of assets the price is likely to go down (up). This can be captured in the market impact
term

α √(trade volume / daily volume) = m_j √|x_j|.

The α and the daily volume have to be estimated in some way, i.e.

m_j = α / √(daily volume)
has to be estimated. The market impact term gives the cost as a fraction of the daily traded volume
(|x_j|). Therefore, the total cost when trading an amount x_j of asset j is given by

|x_j| (m_j |x_j|^{1/2}) = m_j |x_j|^{3/2}.
This leads us to the model:

minimize    f
subject to  Σ^{1/2} (w^0 + x) − g = 0,
            r̄^T (w^0 + x) = t,
            e^T x + e^T y = 0,
            |x_j| (m_j |x_j|^{1/2}) ≤ y_j,
            f ≥ ‖g‖.
(9.39)
Now, defining the variable transformation

y_j = m_j ȳ_j

we obtain

minimize    f
subject to  Σ^{1/2} (w^0 + x) − g = 0,
            r̄^T (w^0 + x) = t,
            e^T x + m^T ȳ = 0,
            |x_j|^{3/2} ≤ ȳ_j,
            f ≥ ‖g‖.
(9.40)
As shown in Section 9.3.3.3 the set

|x_j|^{3/2} ≤ ȳ_j

can be modeled by

x_j ≤ z_j,
−x_j ≤ z_j,
z_j^2 ≤ 2 s_j ȳ_j,
u_j^2 ≤ 2 v_j q_j,
z_j = v_j,
s_j = u_j,
q_j = 1/8,
q_j, s_j, ȳ_j, v_j ≥ 0.
(9.41)
9.6.2.4 Further reading
For further reading please see [17] in particular, and [20] and [1], which also contain relevant material.
Chapter 10
The optimizers for continuous problems
The most essential part of MOSEK is the optimizers. Each optimizer is designed to solve a particular
class of problems, i.e. linear, conic, or general nonlinear problems. The purpose of the present chapter
is to discuss which optimizers are available for the continuous problem classes and how the performance
of an optimizer can be tuned, if needed.
This chapter deals with the optimizers for continuous problems with no integer variables.
10.1 How an optimizer works
When the optimizer is called, it roughly performs the following steps:
Presolve: Preprocessing to reduce the size of the problem.
Dualizer: Choosing whether to solve the primal or the dual form of the problem.
Scaling: Scaling the problem for better numerical stability.
Optimize: Solve the problem using selected method.
The first three preprocessing steps are transparent to the user, but useful to know about for tuning
purposes. In general, the purpose of the preprocessing steps is to make the actual optimization more
efficient and robust.
10.1.1 Presolve
Before an optimizer actually performs the optimization the problem is preprocessed using the so-called
presolve. The purpose of the presolve is to

remove redundant constraints,

eliminate fixed variables,

remove linear dependencies,
substitute out free variables, and
reduce the size of the optimization problem in general.
After the presolved problem has been optimized the solution is automatically postsolved so that the
returned solution is valid for the original problem. Hence, the presolve is completely transparent. For
further details about the presolve phase, please see [3, 4].
It is possible to fine-tune the behavior of the presolve or to turn it off entirely. If presolve consumes
too much time or memory compared to the reduction in problem size gained it may be disabled. This
is done by setting the parameter MSK_IPAR_PRESOLVE_USE to MSK_PRESOLVE_MODE_OFF.
The two most time-consuming steps of the presolve are
the eliminator, and
the linear dependency check.
Therefore, in some cases it is worthwhile to disable one or both of these.
10.1.1.1 Eliminator
The purpose of the eliminator is to eliminate free and implied free variables from the problem using
substitution. For instance, given the constraints

y = Σ_j x_j,
y, x ≥ 0,

y is an implied free variable that can be substituted out of the problem, if deemed worthwhile.
If the eliminator consumes too much time or memory compared to the reduction in problem size
gained it may be disabled. This can be done by setting the parameter
MSK_IPAR_PRESOLVE_ELIMINATOR_USE to MSK_OFF.
10.1.1.2 Linear dependency checker
The purpose of the linear dependency check is to remove linear dependencies among the linear
equalities. For instance, the three linear equalities

x_1 + x_2 + x_3 = 1,
x_1 + 0.5 x_2 = 0.5,
0.5 x_2 + x_3 = 0.5
contain exactly one linear dependency. This implies that one of the constraints can be dropped without
changing the set of feasible solutions. Removing linear dependencies is in general a good idea since it
reduces the size of the problem. Moreover, the linear dependencies are likely to introduce numerical
problems in the optimization phase.
It is best practice to build models without linear dependencies. If the linear dependencies are
removed at the modeling stage, the linear dependency check can safely be disabled by setting the
parameter MSK_IPAR_PRESOLVE_LINDEP_USE to MSK_OFF.
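Linear dependencies can be detected at the modeling stage with a rank computation. A minimal pure-Python sketch (Gaussian elimination; the matrix below is the coefficient matrix of the three equalities in this example):

```python
def rank(rows, tol=1e-9):
    """Rank of a dense matrix via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0])):
        if r == len(m):
            break
        pivot = max(range(r, len(m)), key=lambda i: abs(m[i][col]))
        if abs(m[pivot][col]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Coefficient matrix of the three equalities: row3 = row1 - row2.
A = [[1.0, 1.0, 1.0],
     [1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
print(rank(A))  # 2: one of the three rows is redundant
```

A rank smaller than the number of rows signals that at least one equality can be dropped.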
10.1.2 Dualizer
All linear, conic, and convex optimization problems have an equivalent dual problem associated with
them. MOSEK has built-in heuristics to determine if it is most efficient to solve the primal or dual
problem. The form (primal or dual) solved is displayed in the MOSEK log. Should the internal
heuristics not choose the most efficient form of the problem it may be worthwhile to set the dualizer
manually by setting the parameters:

MSK_IPAR_INTPNT_SOLVE_FORM: In case of the interior-point optimizer.

MSK_IPAR_SIM_SOLVE_FORM: In case of the simplex optimizer.
Note that currently only linear problems may be dualized.
10.1.3 Scaling
Problems containing data with large and/or small coefficients, say 1.0e+9 or 1.0e−7, are often hard
to solve. Significant digits may be truncated in calculations with finite precision, which can result in
the optimizer relying on inaccurate calculations. Since computers work in finite precision, extreme
coefficients should be avoided. In general, data around the same order of magnitude is preferred,
and we will refer to a problem, satisfying this loose property, as being well-scaled. If the problem is
not well scaled, MOSEK will try to scale (multiply) constraints and variables by suitable constants.
MOSEK then solves the scaled problem to improve the numerical properties.
The scaling process is transparent, i.e. the solution to the original problem is reported. It is
important to be aware that the optimizer terminates when the termination criterion is met on the
scaled problem; therefore significant primal or dual infeasibilities may occur after unscaling for badly
scaled problems. The best solution to this problem is to reformulate the problem, making it better
scaled.
By default MOSEK heuristically chooses a suitable scaling. The scaling for interior-point and
simplex optimizers can be controlled with the parameters

MSK_IPAR_INTPNT_SCALING and MSK_IPAR_SIM_SCALING

respectively.
10.1.4 Using multiple CPUs
The interior-point optimizers in MOSEK have been parallelized. This means that if you solve linear,
quadratic, conic, or general convex optimization problem using the interior-point optimizer, you can
take advantage of multiple CPUs.
By default MOSEK uses one thread to solve the problem, but the number of threads (and thereby
CPUs) employed can be changed by setting the parameter MSK_IPAR_INTPNT_NUM_THREADS. This should
never exceed the number of CPUs on the machine.
The speed-up obtained when using multiple CPUs is highly problem and hardware dependent, and
consequently, it is advisable to compare single threaded and multi threaded performance for the given
problem type to determine the optimal settings.
For small problems, using multiple threads will probably not be worthwhile.
10.2 Linear optimization
10.2.1 Optimizer selection
Two different types of optimizers are available for linear problems: The default is an interior-point
method, and the alternatives are simplex methods. The optimizer can be selected using the parameter
MSK_IPAR_OPTIMIZER.
10.2.2 The interior-point optimizer
The purpose of this section is to provide information about the algorithm employed in the MOSEK
interior-point optimizer.
In order to keep the discussion simple it is assumed that MOSEK solves linear optimization prob-
lems on standard form

    minimize    c^T x
    subject to  Ax = b,
                x ≥ 0.          (10.1)

This is in fact what happens inside MOSEK; for efficiency reasons MOSEK converts the problem to
standard form before solving, then converts it back to the input form when reporting the solution.
Since it is not known beforehand whether problem (10.1) has an optimal solution, is primal infeasible
or is dual infeasible, the optimization algorithm must deal with all three situations. This is the reason
that MOSEK solves the so-called homogeneous model

    Ax − bτ = 0,
    A^T y + s − cτ = 0,
    −c^T x + b^T y − κ = 0,
    x, s, τ, κ ≥ 0,          (10.2)

where y and s correspond to the dual variables in (10.1), and τ and κ are two additional scalar variables.
Note that the homogeneous model (10.2) always has a solution since

    (x, y, s, τ, κ) = (0, 0, 0, 0, 0)

is a solution, although not a very interesting one.
Any solution

    (x*, y*, s*, τ*, κ*)

to the homogeneous model (10.2) satisfies

    x*_j s*_j = 0  and  τ* κ* = 0.

Moreover, there is always a solution that has the property

    τ* + κ* > 0.
First, assume that τ* > 0. It follows that

    A(x*/τ*) = b,
    A^T(y*/τ*) + s*/τ* = c,
    −c^T(x*/τ*) + b^T(y*/τ*) = 0,
    x*, s* ≥ 0.          (10.3)

This shows that

    (x*/τ*, y*/τ*, s*/τ*)

is a primal-dual optimal solution.
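The τ* > 0 case can be verified numerically. The tiny LP and the scaled solution below are invented for illustration: dividing by τ recovers an optimal primal-dual pair.

```python
# Check a solution of the homogeneous model (10.2) for a tiny LP
#   minimize x1 + 2*x2  subject to  x1 + x2 = 1, x >= 0,
# and recover the primal-dual optimal solution by dividing by tau.
# The data and the candidate solution below are made up for illustration.

A = [[1.0, 1.0]]
b = [1.0]
c = [1.0, 2.0]

# A solution of (10.2) with tau > 0 and kappa = 0 (a scaled optimal solution).
x, y, s, tau, kappa = [2.0, 0.0], [2.0], [0.0, 2.0], 2.0, 0.0

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Residuals of the three equation blocks in (10.2).
r1 = [dot(A[i], x) - b[i] * tau for i in range(len(b))]        # Ax - b*tau
r2 = [A[0][j] * y[0] + s[j] - c[j] * tau for j in range(2)]    # A^T y + s - c*tau
r3 = -dot(c, x) + dot(b, y) - kappa                            # -c^T x + b^T y - kappa

assert all(abs(v) < 1e-12 for v in r1 + r2) and abs(r3) < 1e-12
assert all(xj * sj == 0 for xj, sj in zip(x, s)) and tau * kappa == 0

# Since tau > 0, dividing by tau gives a primal-dual optimal pair for (10.1):
print([xj / tau for xj in x], [yi / tau for yi in y])  # [1.0, 0.0] [1.0]
```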
On the other hand, if κ* > 0 then

    Ax* = 0,
    A^T y* + s* = 0,
    −c^T x* + b^T y* = κ*,
    x*, s* ≥ 0.          (10.4)

This implies that at least one of

    c^T x* < 0          (10.5)

or

    b^T y* > 0          (10.6)
is satisfied. If (10.5) is satisfied then x* is a certificate of dual infeasibility, whereas if (10.6) is
satisfied then y* is a certificate of primal infeasibility. Hence, by computing an appropriate solution
to the homogeneous model, all information required for a solution to the original problem is obtained.
10.2.2.1 Interior-point termination criterion
Since computations are performed in finite precision, the homogeneous model can only be solved
approximately. In each iteration k the interior-point optimizer therefore generates a trial solution

    (x^k, y^k, s^k, τ^k, κ^k)

to the homogeneous model, where x^k, s^k, τ^k, κ^k > 0. Whenever the trial solution satisfies the
criterion

    ‖A(x^k/τ^k) − b‖ ≤ ε_p (1 + ‖b‖),
    ‖A^T(y^k/τ^k) + s^k/τ^k − c‖ ≤ ε_d (1 + ‖c‖), and
    min( ((x^k)^T s^k + τ^k κ^k) / (τ^k)^2 , |c^T x^k/τ^k − b^T y^k/τ^k| ) ≤ ε_g max(1, |c^T x^k/τ^k|),          (10.7)

the interior-point optimizer is terminated and

    (x^k, y^k, s^k) / τ^k

is reported as the primal-dual optimal solution. The interpretation of (10.7) is that the optimizer is
terminated if
Tolerance  Parameter name
ε_p        MSK_DPAR_INTPNT_TOL_PFEAS
ε_d        MSK_DPAR_INTPNT_TOL_DFEAS
ε_g        MSK_DPAR_INTPNT_TOL_REL_GAP
ε_i        MSK_DPAR_INTPNT_TOL_INFEAS
Table 10.1: Parameters employed in the termination criterion.
x^k/τ^k is approximately primal feasible,
(y^k/τ^k, s^k/τ^k) is approximately dual feasible, and
the duality gap is almost zero.
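The three conditions in (10.7) can be sketched directly in code. The helper functions, their names and the tolerance defaults below are ours; only the formulas come from (10.7) and Table 10.1.

```python
# Sketch of the termination test (10.7) for a trial point (x, y, s, tau, kappa).
# Tolerances mirror the parameters of Table 10.1; defaults are illustrative.
from math import sqrt

def norm(v):
    return sqrt(sum(vi * vi for vi in v))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def satisfies_10_7(A, b, c, x, y, s, tau, kappa,
                   eps_p=1e-8, eps_d=1e-8, eps_g=1e-8):
    At = list(map(list, zip(*A)))                        # A transposed
    pobj = dot(c, x) / tau                               # primal objective estimate
    dobj = dot(b, y) / tau                               # dual objective estimate
    pres = norm([r - bi for r, bi in
                 zip(matvec(A, [xi / tau for xi in x]), b)])
    dres = norm([r + si / tau - ci for r, si, ci in
                 zip(matvec(At, [yi / tau for yi in y]), s, c)])
    gap = min((dot(x, s) + tau * kappa) / tau ** 2, abs(pobj - dobj))
    return (pres <= eps_p * (1 + norm(b)) and
            dres <= eps_d * (1 + norm(c)) and
            gap <= eps_g * max(1.0, abs(pobj)))

A, b, c = [[1.0, 1.0]], [1.0], [1.0, 2.0]
# An exact scaled optimal point: the criterion holds even for tight tolerances.
print(satisfies_10_7(A, b, c, [2.0, 0.0], [2.0], [0.0, 2.0], 2.0, 0.0))  # True
```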
On the other hand, if the trial solution satisfies

    −ε_i c^T x^k > (‖c‖ / max(1, ‖b‖)) ‖A x^k‖          (10.8)

then the problem is declared dual infeasible and x^k is reported as a certificate of dual infeasibility.
The motivation for this stopping criterion is as follows: First assume that ‖A x^k‖ = 0; then x^k is an
exact certificate of dual infeasibility. Next assume that this is not the case, i.e. ‖A x^k‖ > 0, and define

    x̄ := ε_i (max(1, ‖b‖) / (‖A x^k‖ ‖c‖)) x^k.

It is easy to verify that

    ‖A x̄‖ = ε_i (max(1, ‖b‖) / ‖c‖)  and  −c^T x̄ > 1,

which shows x̄ is an approximate certificate of dual infeasibility, where ε_i controls the quality of the
approximation. A smaller value means a better approximation.
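The normalization argument can be checked numerically. All data below is invented; the point x_k is deliberately chosen close to a ray of the made-up problem.

```python
# Numerical illustration of the normalization argument behind (10.8):
# given x_k with -c^T x_k large relative to ||A x_k||, the rescaled point
# x_bar has a small residual ||A x_bar|| and -c^T x_bar > 1.
from math import sqrt

def norm(v):
    return sqrt(sum(vi * vi for vi in v))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[1.0, -1.0]]                  # made-up data: minimize c^T x is unbounded,
b = [0.5]                          # i.e. the dual problem is infeasible
c = [-1.0, -1.0]
x_k = [1e6, 1e6 + 1e-7]            # nearly a ray: A x_k tiny, c^T x_k << 0
eps_i = 1e-10

Ax = [dot(row, x_k) for row in A]
assert -eps_i * dot(c, x_k) > norm(c) / max(1.0, norm(b)) * norm(Ax)  # (10.8)

scale = eps_i * max(1.0, norm(b)) / (norm(Ax) * norm(c))
x_bar = [scale * xi for xi in x_k]

resid = norm([dot(row, x_bar) for row in A])       # ||A x_bar||
bound = eps_i * max(1.0, norm(b)) / norm(c)        # the claimed value
print(abs(resid - bound) < 1e-11 and -dot(c, x_bar) > 1.0)   # True
```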
Finally, if

    ε_i b^T y^k > (‖b‖ / max(1, ‖c‖)) ‖A^T y^k + s^k‖          (10.9)

then y^k is reported as a certificate of primal infeasibility.
It is possible to adjust the tolerances ε_p, ε_d, ε_g and ε_i using parameters; see Table 10.1 for details.
The default values of the termination tolerances are chosen such that for a majority of problems
appearing in practice it is not possible to achieve much better accuracy. Therefore, tightening the
tolerances is usually not worthwhile. However, an inspection of (10.7) reveals that the quality of the
solution depends on ‖b‖ and ‖c‖; the smaller the norms are, the better the solution accuracy.
The interior-point method as implemented by MOSEK will converge toward optimality and primal
and dual feasibility at the same rate [9]. This means that if the optimizer is stopped prematurely then
it is very unlikely that either the primal or dual solution is feasible. Another consequence is that in
most cases all the tolerances ε_p, ε_d and ε_g have to be relaxed together to achieve an effect.
The basis identification discussed in section 10.2.2.2 requires an optimal solution to work well;
hence, basis identification should be turned off if the termination criterion is relaxed.
To conclude the discussion in this section: relaxing the termination criterion is usually not
worthwhile.
10.2.2.2 Basis identication
An interior-point optimizer does not return an optimal basic solution unless the problem has a
unique primal and dual optimal solution. Therefore, the interior-point optimizer has an optional
post-processing step that computes an optimal basic solution starting from the optimal interior-point
solution. More information about the basis identification procedure may be found in [6].
Please note that a basic solution is often more accurate than an interior-point solution.
By default MOSEK performs a basis identification. However, if a basic solution is not needed, the
basis identification procedure can be turned off. The parameters
MSK_IPAR_INTPNT_BASIS,
MSK_IPAR_BI_IGNORE_MAX_ITER, and
MSK_IPAR_BI_IGNORE_NUM_ERROR
control when basis identification is performed.
10.2.2.3 The interior-point log
Below is a typical log output from the interior-point optimizer:
Optimizer - threads : 1
Optimizer - solved problem : the dual
Optimizer - constraints : 2 variables : 6
Factor - setup time : 0.04 order time : 0.00
Factor - GP order used : no GP order time : 0.00
Factor - nonzeros before factor : 3 after factor : 3
Factor - offending columns : 0 flops : 1.70e+001
ITE PFEAS DFEAS KAP/TAU POBJ DOBJ MU TIME
0 2.0e+002 2.9e+001 2.0e+002 -0.000000000e+000 -1.204741644e+003 2.0e+002 0.44
1 2.2e+001 3.1e+000 7.3e+002 -5.885951891e+003 -5.856764353e+003 2.2e+001 0.57
2 3.8e+000 5.4e-001 9.7e+001 -7.405187479e+003 -7.413054916e+003 3.8e+000 0.58
3 4.0e-002 5.7e-003 2.6e-001 -7.664507945e+003 -7.665313396e+003 4.0e-002 0.58
4 4.2e-006 6.0e-007 2.7e-005 -7.667999629e+003 -7.667999714e+003 4.2e-006 0.59
5 4.2e-010 6.0e-011 2.7e-009 -7.667999994e+003 -7.667999994e+003 4.2e-010 0.59
The first line displays the number of threads used by the optimizer, and the second line tells that the
optimizer chose to solve the dual problem rather than the primal problem. The next line displays the
problem dimensions as seen by the optimizer, and the Factor... lines show various statistics. This
is followed by the iteration log.
Using the same notation as in section 10.2.2, the columns of the iteration log have the following
meaning:
ITE: Iteration index.
PFEAS: ‖A x^k − b τ^k‖. The numbers in this column should converge monotonically toward zero.
DFEAS: ‖A^T y^k + s^k − c τ^k‖. The numbers in this column should converge monotonically toward zero.
KAP/TAU: κ^k/τ^k. If the numbers in this column converge toward zero then the problem has an
optimal solution. Otherwise, if the numbers converge toward infinity, the problem is primal
and/or dual infeasible.
POBJ: c^T x^k/τ^k. An estimate for the primal objective value.
DOBJ: b^T y^k/τ^k. An estimate for the dual objective value.
MU: ((x^k)^T s^k + τ^k κ^k)/(n+1). The numbers in this column should always converge monotonically toward zero.
TIME: Time spent since the optimization started.
10.2.3 The simplex based optimizer
An alternative to the interior-point optimizer is the simplex optimizer.
The simplex optimizer uses a different method that allows exploiting an initial guess for the optimal
solution to reduce the solution time. Depending on the problem it may be faster or slower to use an
initial guess; see section 10.2.4 for a discussion.
MOSEK provides both a primal and a dual variant of the simplex optimizer; we will return to
this later.
10.2.3.1 Simplex termination criterion
The simplex optimizer terminates when it finds an optimal basic solution or an infeasibility certificate.
A basic solution is optimal when it is primal and dual feasible; see (9.1) and (9.2) for a definition
of the primal and dual problem. Due to the fact that computations are performed in finite pre-
cision, MOSEK allows violation of primal and dual feasibility within certain tolerances. The user
can control the allowed primal and dual infeasibility with the parameters MSK_DPAR_BASIS_TOL_X and
MSK_DPAR_BASIS_TOL_S.
10.2.3.2 Starting from an existing solution
When using the simplex optimizer it may be possible to reuse an existing solution and thereby reduce
the solution time significantly. When a simplex optimizer starts from an existing solution it is said to
perform a hot-start. If the user is solving a sequence of optimization problems by solving the problem,
making modifications, and solving again, MOSEK will hot-start automatically.
Setting the parameter MSK_IPAR_OPTIMIZER to MSK_OPTIMIZER_FREE_SIMPLEX instructs MOSEK to
select automatically between the primal and the dual simplex optimizers. Hence, MOSEK tries to
choose the best optimizer for the given problem and the available solution.
By default MOSEK uses presolve when performing a hot-start. If the optimizer only needs very
few iterations to find the optimal solution it may be better to turn off the presolve.
10.2.3.3 Numerical difficulties in the simplex optimizers
Though MOSEK is designed to minimize numerical instability, completely avoiding it is impossible
when working in finite precision. MOSEK counts a numerically unexpected behavior event inside the
optimizer as a set-back. The user can define how many set-backs the optimizer accepts; if that number
is exceeded, the optimization will be aborted. Set-backs are implemented to avoid long sequences
where the optimizer tries to recover from an unstable situation.
Set-backs are, for example, repeated singularities when factorizing the basis matrix, repeated loss
of feasibility, degeneracy problems (no progress in objective) and other events indicating numerical
difficulties. If the simplex optimizer encounters many set-backs the problem is usually badly scaled;
in such a situation try to reformulate it into a better scaled problem. Then, if many set-backs still
occur, trying one or more of the following suggestions may be worthwhile:
Raise tolerances for allowed primal or dual feasibility: Increase the values of
MSK_DPAR_BASIS_TOL_X, and
MSK_DPAR_BASIS_TOL_S.
Raise or lower the pivot tolerance: Change the MSK_DPAR_SIMPLEX_ABS_TOL_PIV parameter.
Switch optimizer: Try another optimizer.
Switch off crash: Set both MSK_IPAR_SIM_PRIMAL_CRASH and MSK_IPAR_SIM_DUAL_CRASH to 0.
Experiment with other pricing strategies: Try different values for the parameters
MSK_IPAR_SIM_PRIMAL_SELECTION and
MSK_IPAR_SIM_DUAL_SELECTION.
If you are using hot-starts, in rare cases switching off this feature may improve stability. This is
controlled by the MSK_IPAR_SIM_HOTSTART parameter.
Increase the maximum number of set-backs allowed, controlled by MSK_IPAR_SIM_MAX_NUM_SETBACKS.
If the problem repeatedly becomes infeasible try switching off the special degeneracy handling.
See the parameter MSK_IPAR_SIM_DEGEN for details.
10.2.4 The interior-point or the simplex optimizer?
Given a linear optimization problem, which optimizer is the best: The primal simplex, the dual simplex
or the interior-point optimizer?
It is impossible to provide a general answer to this question. However, the interior-point optimizer
behaves more predictably: it tends to use between 20 and 100 iterations, almost independently of
problem size, but cannot perform hot-start, while simplex can take advantage of an initial solution
but is less predictable for cold-start. The interior-point optimizer is used by default.
10.2.5 The primal or the dual simplex variant?
MOSEK provides both a primal and a dual simplex optimizer. Predicting which simplex optimizer
is faster is impossible, however, in recent years the dual optimizer has seen several algorithmic and
computational improvements, which, in our experience, makes it faster on average than the primal
simplex optimizer. Still, it depends much on the problem structure and size.
Setting the MSK IPAR OPTIMIZER parameter to MSK OPTIMIZER FREE SIMPLEX instructs MOSEK to
choose which simplex optimizer to use automatically.
To summarize, if you want to know which optimizer is faster for a given problem type, you should
try all the optimizers.
10.3 Linear network optimization
10.3.1 Network flow problems
MOSEK includes a network simplex solver which, on average, solves network problems 10 to 100 times
faster than the standard simplex optimizers.
To use the network simplex optimizer, do the following:
Input the network flow problem as an ordinary linear optimization problem.
Set the parameters
MSK_IPAR_SIM_NETWORK_DETECT to 0, and
MSK_IPAR_OPTIMIZER to MSK_OPTIMIZER_FREE_SIMPLEX.
MOSEK will automatically detect the network structure and apply the specialized simplex optimizer.
10.3.2 Embedded network problems
Often problems contain both large parts with network structure and some non-network constraints
or variables; such problems are said to have embedded network structure.
If the procedure described in section 10.3.1 is applied, MOSEK will attempt to exploit this structure
to speed up the optimization.
This is done heuristically by detecting the largest network embedded in the problem, solving this
subproblem using the network simplex optimizer, and using the solution to hot-start a normal simplex
optimizer.
The MSK_IPAR_SIM_NETWORK_DETECT parameter defines how large a percentage of the problem should
be a network before the specialized solver is applied. In general, it is recommended to use the network
optimizer only on problems containing a substantial embedded network.
If MOSEK only finds limited network structure in a problem, consider switching off presolve
(MSK_IPAR_PRESOLVE_USE) and scaling (MSK_IPAR_SIM_SCALING), since in rare cases they might disturb
the network heuristic.
Parameter name                  Purpose
MSK_DPAR_INTPNT_CO_TOL_PFEAS    Controls primal feasibility
MSK_DPAR_INTPNT_CO_TOL_DFEAS    Controls dual feasibility
MSK_DPAR_INTPNT_CO_TOL_REL_GAP  Controls the relative gap
MSK_DPAR_INTPNT_TOL_INFEAS      Controls when the problem is declared infeasible
MSK_DPAR_INTPNT_CO_TOL_MU_RED   Controls when the complementarity is reduced enough
Table 10.2: Parameters employed in the termination criterion.
10.4 Conic optimization
10.4.1 The interior-point optimizer
For conic optimization problems only an interior-point type optimizer is available. The interior-point
optimizer is an implementation of the so-called homogeneous and self-dual algorithm. For a detailed
description of the algorithm, please see [5].
10.4.1.1 Interior-point termination criteria
The parameters controlling when the conic interior-point optimizer terminates are shown in Table 10.2.
10.5 Nonlinear convex optimization
10.5.1 The interior-point optimizer
For quadratic, quadratically constrained, and general convex optimization problems an interior-point
type optimizer is available. The interior-point optimizer is an implementation of the homogeneous and
self-dual algorithm. For a detailed description of the algorithm, please see [7, 8].
10.5.1.1 The convexity requirement
Continuous nonlinear problems are required to be convex. For quadratic problems MOSEK tests this
requirement before optimizing. Specifying a non-convex problem results in an error message.
The following parameters are available to control the convexity check:
MSK_IPAR_CHECK_CONVEXITY: Turns the convexity check on/off.
MSK_DPAR_CHECK_CONVEXITY_REL_TOL: Tolerance for the convexity check.
MSK_IPAR_LOG_CHECK_CONVEXITY: Turns on more log information for debugging.
10.5.1.2 The differentiability requirement
The nonlinear optimizer in MOSEK requires both first order and second order derivatives. This of
course implies that care should be taken when solving problems involving non-differentiable functions.
For instance, the function

    f(x) = x^2
Parameter name                  Purpose
MSK_DPAR_INTPNT_NL_TOL_PFEAS    Controls primal feasibility
MSK_DPAR_INTPNT_NL_TOL_DFEAS    Controls dual feasibility
MSK_DPAR_INTPNT_NL_TOL_REL_GAP  Controls the relative gap
MSK_DPAR_INTPNT_TOL_INFEAS      Controls when the problem is declared infeasible
MSK_DPAR_INTPNT_NL_TOL_MU_RED   Controls when the complementarity is reduced enough
Table 10.3: Parameters employed in the termination criteria.
is differentiable everywhere, whereas the function

    f(x) = √x

is only differentiable for x > 0. In order to make sure that MOSEK evaluates the functions at points
where they are differentiable, the function domains must be defined by setting appropriate variable
bounds.
In general, if a variable is not ranged MOSEK will only evaluate that variable at points strictly
within the bounds. Hence, imposing the bound

    x ≥ 0

in the case of √x is sufficient to guarantee that the function will only be evaluated at points where it
is differentiable.
However, if a function is differentiable on a closed range, specifying the variable bounds is not
sufficient. Consider the function

    f(x) = 1/x + 1/(1−x).          (10.10)

In this case the bounds

    0 ≤ x ≤ 1

will not guarantee that MOSEK only evaluates the function for x strictly between 0 and 1. To force
MOSEK to strictly satisfy both bounds on ranged variables, set the parameter MSK_IPAR_INTPNT_STARTING_POINT
to MSK_STARTING_POINT_SATISFY_BOUNDS.
For efficiency reasons it may be better to reformulate the problem than to force MOSEK to observe
ranged bounds strictly. For instance, (10.10) can be reformulated as follows:

    f(x) = 1/x + 1/y,
    0 = 1 − x − y,
    0 ≤ x,
    0 ≤ y.
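The equivalence of (10.10) and its reformulation, and the failure of the original form at the bound, can be checked numerically; the helper names below are ours.

```python
# The reformulation of (10.10) replaces the composite term 1/(1-x) by 1/y
# with the linear constraint y = 1 - x, so each nonlinear term depends on a
# single bounded variable. A quick numerical check of the equivalence:

def f_original(x):
    return 1.0 / x + 1.0 / (1.0 - x)

def f_reformulated(x, y):
    assert abs(1.0 - x - y) < 1e-12   # the linear constraint 0 = 1 - x - y
    return 1.0 / x + 1.0 / y

for x in (0.25, 0.5, 0.9):
    assert abs(f_original(x) - f_reformulated(x, 1.0 - x)) < 1e-9

# At the bound x = 1 the original form is undefined (division by zero),
# which is why the bounds alone do not protect the evaluation:
try:
    f_original(1.0)
except ZeroDivisionError:
    print("undefined at the bound")   # prints "undefined at the bound"
```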
10.5.1.3 Interior-point termination criteria
The parameters controlling when the general convex interior-point optimizer terminates are shown in
Table 10.3.
10.6 Solving problems in parallel
If a computer has multiple CPUs, or has a CPU with multiple cores, it is possible for MOSEK to take
advantage of this to speed up solution times.
10.6.1 Thread safety
The MOSEK API is thread-safe provided that a task is only modified or accessed from one thread at
any given time; accessing two separate tasks from two separate threads at the same time is safe.
Sharing an environment between threads is safe.
10.6.2 The parallelized interior-point optimizer
The interior-point optimizer is capable of using multiple CPUs or cores. This implies that whenever
the MOSEK interior-point optimizer solves an optimization problem, it will try to divide the work so
that each CPU gets a share of the work. The user decides how many CPUs MOSEK should exploit.
It is not always possible to divide the work equally, and often parts of the computations and the
coordination of the work is processed sequentially, even if several CPUs are present. Therefore, the
speed-up obtained when using multiple CPUs is highly problem dependent. However, as a rule of
thumb, if the problem solves very quickly, i.e. in less than 60 seconds, it is not advantageous to use
the parallel option.
The MSK_IPAR_INTPNT_NUM_THREADS parameter sets the number of threads (and therefore the num-
ber of CPUs) that the interior-point optimizer will use.
10.6.3 The concurrent optimizer
An alternative to the parallel interior-point optimizer is the concurrent optimizer. The idea of the
concurrent optimizer is to run multiple optimizers on the same problem concurrently; for instance,
it allows you to apply the interior-point and the dual simplex optimizers to a linear optimization
problem concurrently. The concurrent optimizer terminates when the first of the applied optimizers
has terminated successfully, and it reports the solution of the fastest optimizer. In that way a new
optimizer has been created which essentially performs as the fastest of the interior-point and the
dual simplex optimizers. Hence, the concurrent optimizer is the best one to use if there are multiple
optimizers available in MOSEK for the problem and you cannot say beforehand which one will be
faster.
Note in particular that any solution present in the task will also be used for hot-starting the simplex
algorithms. One possible scenario would therefore be running a hot-start dual simplex in parallel with
interior point, taking advantage of both the stability of the interior-point method and the ability of
the simplex method to use an initial solution.
By setting the
MSK_IPAR_OPTIMIZER
parameter to
MSK_OPTIMIZER_CONCURRENT
the concurrent optimizer is chosen.
The number of optimizers used in parallel is determined by the
MSK_IPAR_CONCURRENT_NUM_OPTIMIZERS
parameter. Moreover, the optimizers are selected according to a preassigned priority, with optimizers
having the highest priority being selected first. The default priority for each optimizer is shown in
Table 10.4. For example, setting the MSK_IPAR_CONCURRENT_NUM_OPTIMIZERS parameter to 2 tells the
concurrent optimizer to apply the two optimizers with highest priorities: in the default case that
means the interior-point optimizer and one of the simplex optimizers.

Optimizer                     Associated parameter                         Default priority
MSK_OPTIMIZER_INTPNT          MSK_IPAR_CONCURRENT_PRIORITY_INTPNT          4
MSK_OPTIMIZER_FREE_SIMPLEX    MSK_IPAR_CONCURRENT_PRIORITY_FREE_SIMPLEX    3
MSK_OPTIMIZER_PRIMAL_SIMPLEX  MSK_IPAR_CONCURRENT_PRIORITY_PRIMAL_SIMPLEX  2
MSK_OPTIMIZER_DUAL_SIMPLEX    MSK_IPAR_CONCURRENT_PRIORITY_DUAL_SIMPLEX    1
Table 10.4: Default priorities for optimizer selection in concurrent optimization.
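The first-finisher selection can be emulated with standard Python threading. The sketch below races two dummy "solvers" and keeps the first result, purely to illustrate the mechanism; it does not use MOSEK, and all names and timings are invented.

```python
# A sketch of the concurrent-optimizer idea: several workers race on the
# same problem and the first result returned wins. The dummy solvers only
# sleep; a real setup would run actual optimizers.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import time

def solver(name, seconds, answer):
    time.sleep(seconds)               # stand-in for the actual optimization
    return name, answer

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(solver, "interior-point", 0.3, 42.0),
               pool.submit(solver, "free simplex", 0.02, 42.0)]
    # Block until the first optimizer terminates, then take its result.
    done, not_done = wait(futures, return_when=FIRST_COMPLETED)
    winner, objective = next(iter(done)).result()

print(winner)   # free simplex
```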
10.6.3.1 Concurrent optimization from the command line
The command line
mosek afiro.mps -d MSK_IPAR_OPTIMIZER MSK_OPTIMIZER_CONCURRENT \
-d MSK_IPAR_CONCURRENT_NUM_OPTIMIZERS 2
produces the following (edited) output:
...
Number of concurrent optimizers : 2
Optimizer selected for thread number 0 : interior-point (threads = 1)
Optimizer selected for thread number 1 : free simplex
Total number of threads required : 2
...
Thread number 1 (free simplex) terminated first.
...
Concurrent optimizer terminated. CPU Time: 0.03. Real Time: 0.00.
As indicated in the log information, the interior-point and the free simplex optimizers are employed
concurrently. However, only the output from the optimizer having the highest priority is printed to
the screen. In the example this is the interior-point optimizer.
The line
Total number of threads required : 2
indicates the number of threads used. If the concurrent optimizer is to be effective, this should be
lower than the number of CPUs.
In the above example the simplex optimizer finishes first, as indicated in the log information.
10.7 Understanding solution quality
MOSEK will, in general, not produce an exact optimal solution; for efficiency reasons computations are
performed in finite precision. This means that it is important to evaluate the quality of the reported
solution. To evaluate the solution quality, inspect the following properties:
The solution status reported by MOSEK.
Primal feasibility: How much the solution violates the original constraints of the problem.
Dual feasibility: How much the dual solution violates the constraints of the dual problem.
Duality gap: The difference between the primal and dual objective values.
Ideally, the primal and dual solutions should only violate the constraints of their respective problems
slightly, and the primal and dual objective values should be close. This should be evaluated in the
context of the problem: How good is the data precision in the problem, and how exact a solution is
required?
10.7.1 The solution summary
The solution summary is a small display generated by MOSEK that makes it easy to check the quality
of the solution.
10.7.1.1 The optimal case
The solution summary has the format
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal - objective: 5.5018458883e+03 eq. infeas.: 1.20e-12 max bound infeas.: 2.31e-14
Dual - objective: 5.5018458883e+03 eq. infeas.: 1.15e-14 max bound infeas.: 7.11e-15
i.e. it shows status information, objective values and quality measures for the primal and dual solutions.
Assuming that we are solving a linear optimization problem and referring to the problems (9.1)
and (9.2), the interpretation of the solution summary is as follows:
Problem status: The status of the problem.
Solution status: The status of the solution.
Primal objective: The primal objective value.
Primal eq. infeas.: ‖A x^x − x^c‖_∞.
Primal max bound infeas.: max(l^c − x^c; x^c − u^c; l^x − x^x; x^x − u^x; 0).
Dual objective: The dual objective value.
Dual eq. infeas.: ‖−y + s^c_l − s^c_u; A^T y + s^x_l − s^x_u − c‖_∞.
Dual max bound infeas.: max(−s^c_l; −s^c_u; −s^x_l; −s^x_u; 0).
In the solution summary above the solution is classified as OPTIMAL, meaning that the solution
should be a good approximation to the true optimal solution. This seems very reasonable since the
primal and dual solutions only violate their respective constraints slightly. Moreover, the duality gap
is small, i.e. the primal and dual objective values are almost identical.
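As an illustration, the primal quality measures can be recomputed from a reported solution. The notation follows (9.1), with x^c = A x^x the constraint activities; the function name and all data below are invented.

```python
# Recomputing the primal quality measures of the solution summary, using
# the notation of (9.1): x_c = A x_x is the constraint activity vector.
def max_bound_infeas(lc, xc, uc, lx, xx, ux):
    viol = ([li - v for li, v in zip(lc, xc)] +
            [v - ui for v, ui in zip(xc, uc)] +
            [li - v for li, v in zip(lx, xx)] +
            [v - ui for v, ui in zip(xx, ux)])
    return max(viol + [0.0])          # 0 if no bound is violated

A = [[1.0, 1.0], [1.0, -1.0]]
xx = [0.6, 0.4]                       # reported variable values x_x
xc = [sum(a * v for a, v in zip(row, xx)) for row in A]   # activities x_c

# || A x_x - x_c ||_inf: zero here because x_c was computed from x_x.
eq_infeas = max(abs(sum(a * v for a, v in zip(row, xx)) - ci)
                for row, ci in zip(A, xc))
print(eq_infeas)                      # 0.0

# Bound infeasibility: the lower bound l_c[0] = 1.1 exceeds the activity 1.0.
print(max_bound_infeas([1.1, 0.0], xc, [1.2, 0.5],
                       [0.0, 0.0], xx, [1.0, 1.0]))   # approximately 0.1
```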
10.7.1.2 The primal infeasible case
For an infeasible problem the solution summary might look like this:
Problem status : PRIMAL_INFEASIBLE
Solution status : PRIMAL_INFEASIBLE_CER
Primal - objective: 0.0000000000e+00 eq. infeas.: 0.00e+00 max bound infeas.: 0.00e+00
Dual - objective: 1.0000000000e+02 eq. infeas.: 0.00e+00 max bound infeas.: 0.00e+00
It is known that if the problem is primal infeasible then an infeasibility certificate exists, which is
a solution to the problem (9.3) having a positive objective value. Note that the primal solution plays
no role and only the dual solution is used to specify the certificate.
Therefore, in the primal infeasible case the solution summary should report how good a solution
the dual solution is to the problem (9.3). The interpretation of the solution summary is as follows:
Problem status: The status of the problem.
Solution status: The status of the solution.
Primal objective: Should be ignored.
Primal eq. infeas.: Should be ignored.
Primal max bound infeas.: Should be ignored.
Dual objective: (l^c)^T s^c_l − (u^c)^T s^c_u + (l^x)^T s^x_l − (u^x)^T s^x_u.
Dual eq. infeas.: ‖−y + s^c_l − s^c_u; A^T y + s^x_l − s^x_u − 0‖_∞.
Dual max bound infeas.: max(−s^c_l; −s^c_u; −s^x_l; −s^x_u; 0).
Please note that
any information about the primal solution should be ignored;
the dual objective value should be strictly positive if the primal problem is a minimization problem,
otherwise it should be strictly negative;
the bigger the ratio

    ((l^c)^T s^c_l − (u^c)^T s^c_u + (l^x)^T s^x_l − (u^x)^T s^x_u)
    / max(‖−y + s^c_l − s^c_u; A^T y + s^x_l − s^x_u − 0‖_∞, max(−s^c_l; −s^c_u; −s^x_l; −s^x_u))

is, the better the certificate is. The reason is that a certificate is a ray, and hence only the
direction is important. Therefore, in principle, the certificate should be normalized before using
it.
Please see Section 12.2 for more information about certificates of infeasibility.
Chapter 11
The optimizer for mixed integer
problems
A problem is a mixed-integer optimization problem when one or more of the variables are constrained
to be integers. The integer optimizer available in MOSEK can solve integer optimization problems
involving
linear,
quadratic and
conic
constraints. However, a problem is not allowed to have both conic constraints and a quadratic
objective or quadratic constraints.
Readers unfamiliar with integer optimization are strongly recommended to consult some relevant
literature, e.g. the book [23] by Wolsey is a good introduction to integer optimization.
11.1 Some notation
In general, an integer optimization problem has the form

    z* = minimize    c^T x
         subject to  l^c ≤ Ax ≤ u^c,
                     l^x ≤ x ≤ u^x,
                     x_j ∈ Z, ∀ j ∈ J,          (11.1)

where J is an index set specifying which variables are integer-constrained. Frequently we talk about
the continuous relaxation of an integer optimization problem, defined as

    z̲ = minimize    c^T x
        subject to  l^c ≤ Ax ≤ u^c,
                    l^x ≤ x ≤ u^x,          (11.2)
i.e. we ignore the constraint

    x_j ∈ Z, ∀ j ∈ J.

Moreover, let x̄ be any feasible solution to (11.1) and define

    z̄ := c^T x̄.

It should be obvious that

    z̲ ≤ z* ≤ z̄

holds. This is an important observation since if we assume that it is not possible to solve the mixed-
integer optimization problem within a reasonable time frame, but that a feasible solution can be found,
then the natural question is: how far is the obtained solution from the optimal solution? The answer
is that no feasible solution can have an objective value smaller than z̲, which implies that the obtained
solution is no further away from the optimum than z̄ − z̲.
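The relationship between the relaxation value, the integer optimum and a feasible solution's value can be illustrated on a toy problem. Below we use a small 0/1 knapsack in maximization form, so the relaxation gives an upper bound and the roles of the bounds are flipped relative to the minimization above; all numbers are invented.

```python
# Toy 0/1 knapsack (maximization): the continuous relaxation bounds the
# integer optimum from above, and any feasible packing bounds it from below,
# so the true optimum is sandwiched between z_feas and z_relax.
values, weights, cap = [60.0, 100.0, 120.0], [10.0, 20.0, 30.0], 50.0

# Fractional (continuous) relaxation: greedily fill by value/weight ratio.
order = sorted(range(3), key=lambda j: values[j] / weights[j], reverse=True)
room, z_relax = cap, 0.0
for j in order:
    take = min(1.0, room / weights[j])   # fractional items are allowed here
    z_relax += take * values[j]
    room -= take * weights[j]

# A feasible integer solution: pack items 1 and 2 (weights 20 + 30 <= 50).
z_feas = values[1] + values[2]

# The integer optimum z* satisfies z_feas <= z* <= z_relax, so the gap
# z_relax - z_feas bounds the distance of z_feas from the optimum.
print(z_feas, round(z_relax, 6))   # 220.0 240.0
```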
11.2 An important fact about integer optimization problems
It is important to understand that in a worst-case scenario, the time required to solve integer optimiza-
tion problems grows exponentially with the size of the problem. For instance, assume that a problem
contains n binary variables; then the time required to solve the problem in the worst case may be
proportional to 2^n. It is a simple exercise to verify that 2^n is huge even for moderate values of n.
In practice this implies that the focus should be on computing a near-optimal solution quickly
rather than on locating an optimal solution.
11.3 How the integer optimizer works
The process of solving an integer optimization problem can be split into three phases:
Presolve: In this phase the optimizer tries to reduce the size of the problem using preprocessing
techniques. Moreover, it strengthens the continuous relaxation, if possible.
Heuristic: Using heuristics the optimizer tries to guess a good feasible solution.
Optimization: The optimal solution is located using a variant of the branch-and-cut method.
In some cases the integer optimizer may locate an optimal solution in the preprocessing stage or
conclude that the problem is infeasible. Therefore, the heuristic and optimization stages may never be
performed.
11.3.1 Presolve
In the preprocessing stage redundant variables and constraints are removed. The presolve stage can
be turned off using the MSK_IPAR_MIO_PRESOLVE_USE parameter.
11.3.2 Heuristic
Initially, the integer optimizer tries to guess a good feasible solution using different heuristics:
First a very simple rounding heuristic is employed.
Next, if deemed worthwhile, the feasibility pump heuristic is used.
Finally, if the two previous stages did not produce a good initial solution, more sophisticated
heuristics are used.
The following parameters can be used to control the effort made by the integer optimizer to find
an initial feasible solution:
MSK_IPAR_MIO_HEURISTIC_LEVEL: Controls how sophisticated and computationally expensive a
heuristic to employ.
MSK_DPAR_MIO_HEURISTIC_TIME: The minimum amount of time to spend in the heuristic search.
MSK_IPAR_MIO_FEASPUMP_LEVEL: Controls how aggressively the feasibility pump heuristic is used.
11.3.3 The optimization phase
This phase solves the problem using the branch-and-cut algorithm.
11.4 Termination criterion
In general, it is impossible to find an exact feasible and optimal solution to an integer optimization
problem in a reasonable amount of time, though in many practical cases it may be possible. There-
fore, the integer optimizer employs a relaxed feasibility and optimality criterion to determine when a
satisfactory solution is located.
A candidate solution, i.e. a solution to (11.2), is said to be an integer feasible solution if the
criterion

    min(x_j − ⌊x_j⌋, ⌈x_j⌉ − x_j) ≤ max(δ_1, δ_2 |x_j|), ∀ j ∈ J

is satisfied. Hence, such a solution is defined as a feasible solution to (11.1).
Whenever the integer optimizer locates an integer feasible solution it will check if the criterion

    z̄ − z̲ ≤ max(δ_3, δ_4 max(1, |z̄|))

is satisfied. If this is the case, the integer optimizer terminates and reports the integer feasible solution
as an optimal solution. Please note that z̲ is a valid lower bound determined by the integer optimizer
during the solution process, i.e.

    z̲ ≤ z*.

The lower bound z̲ normally increases during the solution process.
The tolerances are specified using parameters; see Table 11.1. If an optimal solution cannot
be located within a reasonable time, it may be advantageous to employ a relaxed termination criterion
after some time. Whenever the integer optimizer locates an integer feasible solution and has spent at
76 CHAPTER 11. THE OPTIMIZER FOR MIXED INTEGER PROBLEMS
Tolerance Parameter name
1
MSK DPAR MIO TOL ABS RELAX INT
2
MSK DPAR MIO TOL REL RELAX INT
3
MSK DPAR MIO TOL ABS GAP
4
MSK DPAR MIO TOL REL GAP
5
MSK DPAR MIO NEAR TOL ABS GAP
6
MSK DPAR MIO NEAR TOL REL GAP
Table 11.1: Integer optimizer tolerances.
Parameter name Delayed Explanation
MSK IPAR MIO MAX NUM BRANCHES Yes Maximum number of branches allowed.
MSK IPAR MIO MAX NUM RELAXS Yes Maximum number of relaxations allowed.
MSK IPAR MIO MAX NUM SOLUTIONS Yes Maximum number of feasible integer solutions allowed.
Table 11.2: Parameters aecting the termination of the integer optimizer.
least the number of seconds dened by the MSK DPAR MIO DISABLE TERM TIME parameter on solving
the problem, it will check whether the criterion
z z max(
5
,
6
max(1, [z[))
is satised. If it is satised, the optimizer will report that the candidate solution is near optimal and
then terminate. All tolerances can be adjusted using suitable parameters see Table 11.1. In Table
11.2 some other parameters aecting the integer optimizer termination criterion are shown. Please
note that if the eect of a parameter is delayed, the associated termination criterion is applied only
after some time, specied by the MSK DPAR MIO DISABLE TERM TIME parameter.
11.5 How to speed up the solution process
As mentioned previously, in many cases it is not possible to find an optimal solution to an integer optimization problem in a reasonable amount of time. Some suggestions to reduce the solution time are:

- Relax the termination criterion: In case the run time is not acceptable, the first thing to do is to relax the termination criterion; see Section 11.4 for details.
- Specify a good initial solution: In many cases a good feasible solution is either known or easily computed using problem-specific knowledge. If a good feasible solution is known, it is usually worthwhile to use this as a starting point for the integer optimizer.
- Improve the formulation: A mixed-integer optimization problem may be impossible to solve in one form and quite easy in another form. However, it is beyond the scope of this manual to discuss good formulations for mixed-integer problems. For discussions on this topic see for example [23].
11.6 Understanding solution quality
To determine the quality of the solution one should check the following:

- The solution status key returned by MOSEK.
- The optimality gap: a measure of how much the located solution can deviate from the optimal solution to the problem.
- Feasibility: how much the solution violates the constraints of the problem.

The optimality gap is a measure of how close the solution is to the optimal solution. The optimality gap is given by

  ε = |(objective value of feasible solution) − (objective bound)|.

The objective value of the solution is guaranteed to be within ε of the optimal solution. The optimality gap can be retrieved through the solution item MSK_DINF_MIO_OBJ_ABS_GAP. Often it is more meaningful to look at the optimality gap normalized with the magnitude of the solution. The relative optimality gap is available in MSK_DINF_MIO_OBJ_REL_GAP.
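The two gaps can be computed directly from the feasible objective value and the bound. This sketch mirrors the definition above; the normalization used for the relative gap is illustrative and may differ from the one MOSEK applies internally.

```python
def optimality_gaps(obj_feasible, obj_bound):
    """Absolute and relative optimality gaps, in the spirit of
    MSK_DINF_MIO_OBJ_ABS_GAP / MSK_DINF_MIO_OBJ_REL_GAP.
    The relative gap here normalizes by max(1, |objective|)."""
    abs_gap = abs(obj_feasible - obj_bound)
    rel_gap = abs_gap / max(1.0, abs(obj_feasible))
    return abs_gap, rel_gap
```

For the objective 1.2015e+06 shown in the solution summary below and a hypothetical bound of 1.2e+06, the absolute gap is 1500.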
11.6.1 Solution summary
After a call to the optimizer the solution summary might look like this:
Problem status : PRIMAL_FEASIBLE
Solution status : INTEGER_OPTIMAL
Primal - objective: 1.2015000000e+06 eq. infeas.: 0.00e+00 max bound infeas.: 0.00e+00
cone infeas.: 0.00e+00 integer infeas.: 0.00e+00
The second line contains the solution status key, which shows how MOSEK classified the solution. In this case it is INTEGER_OPTIMAL, meaning that the solution is considered to be optimal within the selected tolerances.
The third line contains information relating to the solution. The first number is the primal objective value. The second and third numbers are the maximum infeasibilities in the equality constraints and bounds, respectively. The fourth and fifth numbers are the maximum infeasibilities in the conic and integrality constraints. All the numbers relating to the feasibility of the solution should be small for the solution to be valid.
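When post-processing log files, it can be handy to pull these numbers out of a summary line programmatically. The following sketch parses the sample line shown above; the regular expression is tailored to this particular output format and is not a general MOSEK log parser.

```python
import re

# Sample summary line as printed in the example above.
line = ("Primal - objective: 1.2015000000e+06 eq. infeas.: 0.00e+00 "
        "max bound infeas.: 0.00e+00")

def parse_summary(text):
    """Extract labeled numeric fields from a solution-summary line.

    Returns a dict mapping each label (e.g. 'eq. infeas.') to its
    float value.
    """
    fields = {}
    for label, value in re.findall(r"([\w .]+?):\s*([-+0-9.eE]+)", text):
        fields[label.strip()] = float(value)
    return fields
```

Applied to the line above, this yields the objective 1201500.0 and zero infeasibilities.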
Chapter 12
The analyzers
12.1 The problem analyzer
The problem analyzer prints a detailed survey of the model's

- linear constraints and objective
- quadratic constraints
- conic constraints
- variables

In the initial stages of model formulation the problem analyzer may be used as a quick way of verifying that the model has been built or imported correctly. In later stages it can help reveal special structures within the model that may be used to tune the optimizer's performance or to identify the causes of numerical difficulties.
The problem analyzer is run from the command line using the -anapro argument and produces something similar to the following (this is the problem analyzer's survey of the aflow30a problem from the MIPLIB 2003 collection; see Appendix J for more examples):
Analyzing the problem
Constraints Bounds Variables
upper bd: 421 ranged : all cont: 421
fixed : 58 bin : 421
-------------------------------------------------------------------------------
Objective, min cx
range: min |c|: 0.00000 min |c|>0: 11.0000 max |c|: 500.000
distrib: |c| vars
0 421
[11, 100) 150
[100, 500] 271
-------------------------------------------------------------------------------
Constraint matrix A has
479 rows (constraints)
842 columns (variables)
2091 (0.518449%) nonzero entries (coefficients)
Row nonzeros, A_i
range: min A_i: 2 (0.23753%) max A_i: 34 (4.038%)
distrib: A_i rows rows% acc%
2 421 87.89 87.89
[8, 15] 20 4.18 92.07
[16, 31] 30 6.26 98.33
[32, 34] 8 1.67 100.00
Column nonzeros, A|j
range: min A|j: 2 (0.417537%) max A|j: 3 (0.626305%)
distrib: A|j cols cols% acc%
2 435 51.66 51.66
3 407 48.34 100.00
A nonzeros, A(ij)
range: min |A(ij)|: 1.00000 max |A(ij)|: 100.000
distrib: A(ij) coeffs
[1, 10) 1670
[10, 100] 421
-------------------------------------------------------------------------------
Constraint bounds, lb <= Ax <= ub
distrib: |b| lbs ubs
0 421
[1, 10] 58 58
Variable bounds, lb <= x <= ub
distrib: |b| lbs ubs
0 842
[1, 10) 421
[10, 100] 421
-------------------------------------------------------------------------------
The survey is divided into six different sections, each described below. To keep the presentation short and focused on key elements, the analyzer generally attempts to display information on issues relevant for the current model only: e.g., if the model does not have any conic constraints (as is the case in the example above) or any integer variables, those parts of the analysis will not appear.
12.1.1 General characteristics
The first part of the survey consists of a brief summary of the model's linear and quadratic constraints (indexed by i) and variables (indexed by j). The summary is divided into three subsections:
Constraints

- upper bd: The number of upper bounded constraints, Σ_{j=0}^{n−1} a_ij x_j ≤ u^c_i
- lower bd: The number of lower bounded constraints, l^c_i ≤ Σ_{j=0}^{n−1} a_ij x_j
- ranged  : The number of ranged constraints, l^c_i ≤ Σ_{j=0}^{n−1} a_ij x_j ≤ u^c_i
- fixed   : The number of fixed constraints, l^c_i = Σ_{j=0}^{n−1} a_ij x_j = u^c_i
- free    : The number of free constraints

Bounds

- upper bd: The number of upper bounded variables, x_j ≤ u^x_j
- lower bd: The number of lower bounded variables, l^x_j ≤ x_j
- ranged  : The number of ranged variables, l^x_j ≤ x_j ≤ u^x_j
- fixed   : The number of fixed variables, l^x_j = x_j = u^x_j
- free    : The number of free variables

Variables

- cont: The number of continuous variables, x_j ∈ ℝ
- bin : The number of binary variables, x_j ∈ {0, 1}
- int : The number of general integer variables, x_j ∈ ℤ
Only constraints, bounds and domains actually in the model will be reported on, cf. appendix J; if
all entities in a section turn out to be of the same kind, the number will be replaced by all for brevity.
12.1.2 Objective
The second part of the survey focuses on (the linear part of) the objective, summarizing the optimization sense and the coefficients' absolute value range and distribution. The number of 0 (zero) coefficients is singled out (if any such variables are in the problem).
The range is displayed using three terms:

- min |c|: The minimum absolute value among all coefficients
- min |c|>0: The minimum absolute value among the nonzero coefficients
- max |c|: The maximum absolute value among the coefficients

If some of these extrema turn out to be equal, the display is shortened accordingly:

- If min |c| is greater than zero, the min |c|>0 term is obsolete and will not be displayed
- If only one or two different coefficients occur, this will be displayed using all and an explicit listing of the coefficients

The absolute value distribution is displayed as a table summarizing the numbers by orders of magnitude (with a ratio of 10). Again, the number of variables with a coefficient of 0 (if any) is singled out. Each line of the table is headed by an interval (half-open intervals including their lower bounds), and is followed by the number of variables with their objective coefficient in this interval. Intervals with no elements are skipped.
12.1.3 Linear constraints
The third part of the survey displays information on the nonzero coefficients of the linear constraint matrix.
Following a brief summary of the matrix dimensions and the number of nonzero coefficients in total, three sections provide further details on how the nonzero coefficients are distributed: by row-wise count (A_i), by column-wise count (A|j), and by absolute value (|A(ij)|). Each section is headed by a brief display of the distribution's range (min and max), and for the row/column-wise counts the corresponding densities are displayed too (in parentheses).
The distribution tables single out three particularly interesting counts: zero, one, and two nonzeros per row/column; the remaining row/column nonzeros are displayed by orders of magnitude (ratio 2). For each interval the relative and accumulated relative counts are also displayed.
Note that constraints may have both linear and quadratic terms, but the empty rows and columns reported in this part of the survey relate to the linear terms only. If empty rows and/or columns are found in the linear constraint matrix, the problem is analyzed further in order to determine whether the corresponding constraints have any quadratic terms or the corresponding variables are used in conic or quadratic constraints; cf. the last two examples of appendix J.
The distribution of the absolute values, |A(ij)|, is displayed just as for the objective coefficients described above.
12.1.4 Constraint and variable bounds
The fourth part of the survey displays distributions for the absolute values of the finite lower and upper bounds for both constraints and variables. The number of bounds at 0 is singled out; otherwise the bounds are displayed by orders of magnitude (with a ratio of 10).
12.1.5 Quadratic constraints
The fifth part of the survey displays distributions for the nonzero elements in the gradient of the quadratic constraints, i.e. the nonzero row counts for the column vectors Qx. The table is similar to the tables for the linear constraints' nonzero row and column counts described in the survey's third part.
Note: Quadratic constraints may also have a linear part, but that will be included in the linear constraints survey; this means that if a problem has one or more pure quadratic constraints, part three of the survey will report an equal number of linear constraint rows with 0 (zero) nonzeros, cf. the last example in appendix J. Likewise, variables that appear in quadratic terms only will be reported as empty columns (0 nonzeros) in the linear constraint report.
12.1.6 Conic constraints
The last part of the survey summarizes the model's conic constraints. For each of the two types of cones, quadratic and rotated quadratic, the total number of cones is reported, and the distribution of the cones' dimensions is displayed using intervals. Cone dimensions of 2, 3, and 4 are singled out.
12.2 Analyzing infeasible problems
When developing and implementing a new optimization model, the first attempts will often be either infeasible, due to specification of inconsistent constraints, or unbounded, if important constraints have been left out.
In this chapter we will

- go over an example demonstrating how to locate infeasible constraints using the MOSEK infeasibility report tool,
- discuss in more general terms which properties may cause infeasibilities, and
- present the more formal theory of infeasible and unbounded problems.
12.2.1 Example: Primal infeasibility
A problem is said to be primal infeasible if no solution exists that satisfies all the constraints of the problem.
As an example of a primal infeasible problem, consider the problem of minimizing the cost of transportation between a number of production plants and stores: each plant produces a fixed number of goods, and each store has a fixed demand that must be met. Supply, demand and cost of transportation per unit are given in Figure 12.1.
The problem represented in Figure 12.1 is infeasible, since the total demand

  2300 = 1100 + 200 + 500 + 500    (12.1)

exceeds the total supply

  2200 = 200 + 1000 + 1000.    (12.2)
If we denote the number of transported goods from plant i to store j by x_ij, the problem can be formulated as the LP:

  minimize    x_11 + 2x_12 + 5x_23 + 2x_24 + x_31 + 2x_33 + x_34
  subject to  x_11 + x_12              ≤ 200,
              x_23 + x_24              ≤ 1000,
              x_31 + x_33 + x_34       ≤ 1000,
              x_11 + x_31              = 1100,    (12.3)
              x_12                     = 200,
              x_23 + x_33              = 500,
              x_24 + x_34              = 500,
              x_ij ≥ 0.
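A quick check of the necessary feasibility condition from (12.1) and (12.2), namely that total supply must cover total demand, can be done before ever calling the solver:

```python
# Supply and demand figures from the transportation example.
supply = {"plant1": 200, "plant2": 1000, "plant3": 1000}
demand = {"store1": 1100, "store2": 200, "store3": 500, "store4": 500}

total_supply = sum(supply.values())   # 2200
total_demand = sum(demand.values())   # 2300

# The transportation problem can only be feasible if supply covers demand.
feasible_possible = total_supply >= total_demand
```

Here `feasible_possible` is False, so the LP (12.3) is infeasible regardless of the costs.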
Solving the problem (12.3) using MOSEK will result in a solution, a solution status and a problem
status. Among the log output from the execution of MOSEK on the above problem are the lines:
[Figure 12.1: Supply, demand and cost of transportation. Plants 1, 2 and 3 supply 200, 1000 and 1000 units; stores 1, 2, 3 and 4 demand 1100, 200, 500 and 500 units; the arcs are labeled with the per-unit transportation costs 1, 2, 5, 2, 1, 2 and 1.]
Basic solution
Problem status : PRIMAL_INFEASIBLE
Solution status : PRIMAL_INFEASIBLE_CER
The first line indicates that the problem status is primal infeasible. The second line says that a certificate of the infeasibility was found. The certificate is returned in place of the solution to the problem.
12.2.2 Locating the cause of primal infeasibility
Usually a primal infeasible problem status is caused by a mistake in formulating the problem, and therefore the question arises: What is the cause of the infeasible status? When trying to answer this question, it is often advantageous to follow these steps:

- Remove the objective function. This does not change the infeasible status but simplifies the problem, eliminating any possibility of problems related to the objective function.
- Consider whether your problem has some necessary conditions for feasibility and examine whether these are satisfied, e.g. total supply should be greater than or equal to total demand.
- Verify that coefficients and bounds are reasonably sized in your problem.

If the problem is still primal infeasible, some of the constraints must be relaxed or removed completely. The MOSEK infeasibility report (Section 12.2.4) may assist you in finding the constraints causing the infeasibility.
Possible ways of relaxing your problem include:

- Increasing (decreasing) upper (lower) bounds on variables and constraints.
- Removing suspected constraints from the problem.
Returning to the transportation example, we discover that removing the fifth constraint

  x_12 = 200    (12.4)

makes the problem feasible.
12.2.3 Locating the cause of dual infeasibility
A problem may also be dual infeasible. In this case the primal problem is often unbounded, meaning that feasible solutions exist such that the objective tends towards infinity. An example of a dual infeasible and primal unbounded problem is:

  minimize    x_1
  subject to  x_1 ≤ 5.    (12.5)

To resolve a dual infeasibility the primal problem must be made more restricted by:

- Adding upper or lower bounds on variables or constraints.
- Removing variables.
- Changing the objective.
12.2.3.1 A cautious note
The problem

  minimize    0
  subject to  0 ≤ x_1,
              x_j ≤ x_{j+1}, j = 1, …, n − 1,    (12.6)
              x_n ≤ −1

is clearly infeasible. Moreover, if any one of the constraints is dropped, the problem becomes feasible.
This illustrates the worst-case scenario in which all, or at least a significant portion, of the constraints are involved in the infeasibility. Hence, it may not always be easy or possible to pinpoint a few constraints which are causing the infeasibility.
12.2.4 The infeasibility report
MOSEK includes functionality for diagnosing the cause of a primal or a dual infeasibility. It can be turned on by setting the MSK_IPAR_INFEAS_REPORT_AUTO parameter to MSK_ON. This causes MOSEK to print a report on variables and constraints involved in the infeasibility.
The MSK_IPAR_INFEAS_REPORT_LEVEL parameter controls the amount of information presented in the infeasibility report. The default value is 1.
12.2.4.1 Example: Primal infeasibility
We will reuse the example (12.3), stored in the file infeas.lp:
\
\ An example of an infeasible linear problem.
\
minimize
obj: + 1 x11 + 2 x12 + 1 x13
+ 4 x21 + 2 x22 + 5 x23
+ 4 x31 + 1 x32 + 2 x33
st
s0: + x11 + x12 <= 200
s1: + x23 + x24 <= 1000
s2: + x31 +x33 + x34 <= 1000
d1: + x11 + x31 = 1100
d2: + x12 = 200
d3: + x23 + x33 = 500
d4: + x24 + x34 = 500
bounds
end
Using the command line
mosek -d MSK_IPAR_INFEAS_REPORT_AUTO MSK_ON infeas.lp
MOSEK produces the following infeasibility report
MOSEK PRIMAL INFEASIBILITY REPORT.
Problem status: The problem is primal infeasible
The following constraints are involved in the primal infeasibility.
Index Name Lower bound Upper bound Dual lower Dual upper
0 s0 NONE 2.000000e+002 0.000000e+000 1.000000e+000
2 s2 NONE 1.000000e+003 0.000000e+000 1.000000e+000
3 d1 1.100000e+003 1.100000e+003 1.000000e+000 0.000000e+000
4 d2 2.000000e+002 2.000000e+002 1.000000e+000 0.000000e+000
The following bound constraints are involved in the infeasibility.
Index Name Lower bound Upper bound Dual lower Dual upper
8 x33 0.000000e+000 NONE 1.000000e+000 0.000000e+000
10 x34 0.000000e+000 NONE 1.000000e+000 0.000000e+000
The infeasibility report is divided into two sections, where the first section shows which constraints are important for the infeasibility. In this case the important constraints are the ones named s0, s2, d1, and d2. The values in the columns Dual lower and Dual upper are also useful, since a non-zero dual lower value for a constraint implies that the lower bound on the constraint is important for the infeasibility. Similarly, a non-zero dual upper value implies that the upper bound on the constraint is important for the infeasibility.
It is also possible to obtain the infeasible subproblem. The command line
mosek -d MSK_IPAR_INFEAS_REPORT_AUTO MSK_ON infeas.lp -info rinfeas.lp
produces the file rinfeas.bas.inf.lp. In this case the content of the file rinfeas.bas.inf.lp is
minimize
Obj: + CFIXVAR
st
s0: + x11 + x12 <= 200
s2: + x31 + x33 + x34 <= 1e+003
d1: + x11 + x31 = 1.1e+003
d2: + x12 = 200
bounds
x11 free
x12 free
x13 free
x21 free
x22 free
x23 free
x31 free
x32 free
x24 free
CFIXVAR = 0e+000
end
which is an optimization problem. This problem is identical to (12.3), except that the objective and some of the constraints and bounds have been removed. Executing the command
mosek -d MSK_IPAR_INFEAS_REPORT_AUTO MSK_ON rinfeas.bas.inf.lp
demonstrates that the reduced problem is primal infeasible. Since the reduced problem is usually smaller than the original problem, it should be easier to locate the cause of the infeasibility in the reduced problem rather than in the original problem (12.3).
12.2.4.2 Example: Dual infeasibility
The example problem
maximize - 200 y1 - 1000 y2 - 1000 y3
- 1100 y4 - 200 y5 - 500 y6
- 500 y7
subject to
x11: y1+y4 < 1
x12: y1+y5 < 2
x23: y2+y6 < 5
x24: y2+y7 < 2
x31: y3+y4 < 1
x33: y3+y6 < 2
x44: y3+y7 < 1
bounds
y1 < 0
y2 < 0
y3 < 0
y4 free
y5 free
y6 free
y7 free
end
is dual infeasible. This can be verified by proving that

  y1 = -1, y2 = -1, y3 = 0, y4 = 1, y5 = 1

is a certificate of dual infeasibility. In this example the following infeasibility report is produced (slightly edited):
The following constraints are involved in the infeasibility.
Index Name Activity Objective Lower bound Upper bound
0 x11 -1.000000e+00 NONE 1.000000e+00
4 x31 -1.000000e+00 NONE 1.000000e+00
The following variables are involved in the infeasibility.
Index Name Activity Objective Lower bound Upper bound
3 y4 -1.000000e+00 -1.100000e+03 NONE NONE
Interior-point solution
Problem status : DUAL_INFEASIBLE
Solution status : DUAL_INFEASIBLE_CER
Primal - objective: 1.1000000000e+03 eq. infeas.: 0.00e+00 max bound infeas.: 0.00e+00 cone infeas.: 0.00e+00
Dual - objective: 0.0000000000e+00 eq. infeas.: 0.00e+00 max bound infeas.: 0.00e+00 cone infeas.: 0.00e+00
The reported primal solution x* is the certificate of dual infeasibility, and its maximum infeasibility with respect to the homogenized bounds is approximately zero. Since it was a maximization problem, this implies that

  c^T x* > 0.    (12.7)

For a minimization problem this inequality would have been reversed; see (12.19).
From the infeasibility report we see that the variable y4 and the constraints x11 and x31 are involved in the infeasibility, since these appear with non-zero values in the Activity column.
One possible strategy to fix the infeasibility is to modify the problem so that the certificate of infeasibility becomes invalid. In this case we may do one of the following things:

- Put a lower bound on y3. This will directly invalidate the certificate of dual infeasibility.
- Increase the objective coefficient of y3. Changing the coefficient sufficiently will invalidate the inequality (12.7) and thus the certificate.
- Put lower bounds on x11 or x31. This will directly invalidate the certificate of infeasibility.

Please note that modifying the problem to invalidate the reported certificate does not imply that the problem becomes dual feasible; the infeasibility may simply move, resulting in a new infeasibility. More often, the reported certificate can be used to give a hint about errors or inconsistencies in the model that produced the problem.
12.2.5 Theory concerning infeasible problems
This section discusses the theory of infeasibility certificates and how MOSEK uses a certificate to produce an infeasibility report. In general, MOSEK solves the problem

  minimize    c^T x + c^f
  subject to  l^c ≤ Ax ≤ u^c,    (12.8)
              l^x ≤ x ≤ u^x,

where the corresponding dual problem is

  maximize    (l^c)^T s^c_l − (u^c)^T s^c_u + (l^x)^T s^x_l − (u^x)^T s^x_u + c^f
  subject to  A^T y + s^x_l − s^x_u = c,    (12.9)
              −y + s^c_l − s^c_u = 0,
              s^c_l, s^c_u, s^x_l, s^x_u ≥ 0.

We use the convention that for any bound that is not finite, the corresponding dual variable is fixed at zero (and thus has no influence on the dual problem). For example,

  l^x_j = −∞  ⇒  (s^x_l)_j = 0.    (12.10)
12.2.6 The certicate of primal infeasibility
A certificate of primal infeasibility is any solution to the homogenized dual problem

  maximize    (l^c)^T s^c_l − (u^c)^T s^c_u + (l^x)^T s^x_l − (u^x)^T s^x_u
  subject to  A^T y + s^x_l − s^x_u = 0,    (12.11)
              −y + s^c_l − s^c_u = 0,
              s^c_l, s^c_u, s^x_l, s^x_u ≥ 0

with a positive objective value. That is, (s^c_l, s^c_u, s^x_l, s^x_u) is a certificate of primal infeasibility if

  (l^c)^T s^c_l − (u^c)^T s^c_u + (l^x)^T s^x_l − (u^x)^T s^x_u > 0    (12.12)

and

  A^T y + s^x_l − s^x_u = 0,
  −y + s^c_l − s^c_u = 0,    (12.13)
  s^c_l, s^c_u, s^x_l, s^x_u ≥ 0.
The well-known Farkas lemma tells us that (12.8) is infeasible if and only if a certificate of primal infeasibility exists.
Let (s^c_l, s^c_u, s^x_l, s^x_u) be a certificate of primal infeasibility. Then

  (s^c_l)_i > 0  ((s^c_u)_i > 0)    (12.14)

implies that the lower (upper) bound on the ith constraint is important for the infeasibility. Furthermore,

  (s^x_l)_j > 0  ((s^x_u)_j > 0)    (12.15)

implies that the lower (upper) bound on the jth variable is important for the infeasibility.
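Conditions (12.12)-(12.13) can be verified numerically. The sketch below checks the dual values printed in the infeasibility report for the transportation example infeas.lp, assuming every dual value not listed in the report is zero.

```python
# Constraints in row order s0, s1, s2, d1, d2, d3, d4; variable columns below.
cols = ["x11", "x12", "x23", "x24", "x31", "x33", "x34"]
A = [
    [1, 1, 0, 0, 0, 0, 0],  # s0
    [0, 0, 1, 1, 0, 0, 0],  # s1
    [0, 0, 0, 0, 1, 1, 1],  # s2
    [1, 0, 0, 0, 1, 0, 0],  # d1
    [0, 1, 0, 0, 0, 0, 0],  # d2
    [0, 0, 1, 0, 0, 1, 0],  # d3
    [0, 0, 0, 1, 0, 0, 1],  # d4
]
# Dual values from the report (all omitted entries are zero).
s_c_l = [0, 0, 0, 1, 1, 0, 0]   # lower-bound duals on d1, d2
s_c_u = [1, 0, 1, 0, 0, 0, 0]   # upper-bound duals on s0, s2
s_x_l = {"x33": 1, "x34": 1}    # lower-bound duals on x33, x34

# From -y + s_c_l - s_c_u = 0 we get y_i = (s_c_l)_i - (s_c_u)_i.
y = [l - u for l, u in zip(s_c_l, s_c_u)]

# Check A^T y + s_x_l - s_x_u = 0 for every variable (s_x_u = 0 here).
residual = [
    sum(A[i][j] * y[i] for i in range(len(A))) + s_x_l.get(v, 0)
    for j, v in enumerate(cols)
]

# Farkas objective (12.12), using only the finite bounds of infeas.lp:
# lower bounds 1100 (d1) and 200 (d2); upper bounds 200 (s0) and 1000 (s2).
objective = (1100 * s_c_l[3] + 200 * s_c_l[4]) - (200 * s_c_u[0] + 1000 * s_c_u[2])
```

The residual vector is zero and the Farkas objective equals 100 > 0, confirming the certificate.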
12.2.7 The certicate of dual infeasibility
A certificate of dual infeasibility is any solution to the problem

  minimize    c^T x
  subject to  l̄^c ≤ Ax ≤ ū^c,    (12.16)
              l̄^x ≤ x ≤ ū^x

with negative objective value, where we use the definitions

  l̄^c_i := 0 if l^c_i > −∞, and −∞ otherwise;   ū^c_i := 0 if u^c_i < ∞, and ∞ otherwise,    (12.17)

and

  l̄^x_j := 0 if l^x_j > −∞, and −∞ otherwise;   ū^x_j := 0 if u^x_j < ∞, and ∞ otherwise.    (12.18)

Stated differently, a certificate of dual infeasibility is any x* such that

  c^T x* < 0,
  l̄^c ≤ Ax* ≤ ū^c,    (12.19)
  l̄^x ≤ x* ≤ ū^x.

The well-known Farkas lemma tells us that (12.9) is infeasible if and only if a certificate of dual infeasibility exists.
Note that if

  x*_j ≠ 0,    (12.20)

then variable j is involved in the dual infeasibility.
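For the small problem (12.5), condition (12.19) is easy to check in code; the candidate ray below is illustrative.

```python
# Problem (12.5): minimize x1 subject to x1 <= 5, with x1 free.
c = 1.0          # objective coefficient
x_star = -1.0    # candidate certificate of dual infeasibility (a ray)

# (12.19): c^T x* < 0 and Ax* must respect the homogenized bounds.
# The only bound is the finite upper bound 5, which homogenizes to 0.
is_certificate = (c * x_star < 0) and (1.0 * x_star <= 0)

# Moving along the ray from a feasible point (x1 = 0) keeps feasibility
# and drives the objective towards minus infinity.
objective_at = lambda t: c * (0.0 + t * x_star)
```

Here `is_certificate` is True, and `objective_at(t)` decreases without bound as t grows, which is exactly the primal unboundedness described above.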
Chapter 13
Feasibility repair
Section 12.2.2 discusses how MOSEK treats infeasible problems. In particular, it is discussed which information MOSEK returns when a problem is infeasible and how this information can be used to pinpoint the elements causing the infeasibility.
In this chapter we discuss a method for repairing a primal infeasible problem by relaxing the constraints in a controlled way. For the sake of simplicity we discuss the method in the context of linear optimization. MOSEK can also repair infeasibilities in quadratic and conic optimization problems, possibly having integer-constrained variables. Please note that infeasibilities in nonlinear optimization problems cannot be repaired using the method described below.
13.1 The main idea
Consider the linear optimization problem with m constraints and n variables

  minimize    c^T x + c^f
  subject to  l^c ≤ Ax ≤ u^c,    (13.1)
              l^x ≤ x ≤ u^x,

which we assume is infeasible. Moreover, we assume that

  (l^c)_i ≤ (u^c)_i for all i    (13.2)

and

  (l^x)_j ≤ (u^x)_j for all j,    (13.3)

because otherwise the problem (13.1) is trivially infeasible.
One way of making the problem feasible is to reduce the lower bounds and increase the upper bounds. If the change is sufficiently large, the problem becomes feasible.
One obvious question is: What is the smallest change to the bounds that will make the problem feasible?
We associate a weight with each bound:

  w^c_l ∈ ℝ^m (associated with l^c),
  w^c_u ∈ ℝ^m (associated with u^c),
  w^x_l ∈ ℝ^n (associated with l^x),
  w^x_u ∈ ℝ^n (associated with u^x).
Now, the problem

  minimize    p
  subject to  l^c ≤ Ax + v^c_l − v^c_u ≤ u^c,
              l^x ≤ x + v^x_l − v^x_u ≤ u^x,    (13.4)
              (w^c_l)^T v^c_l + (w^c_u)^T v^c_u + (w^x_l)^T v^x_l + (w^x_u)^T v^x_u − p ≤ 0,
              v^c_l, v^c_u, v^x_l, v^x_u ≥ 0
minimizes the weighted sum of changes to the bounds that make the problem feasible. The variables (v^c_l)_i, (v^c_u)_i, (v^x_l)_j and (v^x_u)_j are called elasticity variables, because they allow a constraint to be violated and hence add some elasticity to the problem. For instance, the elasticity variable (v^c_l)_i shows how much the lower bound (l^c)_i should be relaxed to make the problem feasible. Since p is minimized and

  (w^c_l)^T v^c_l + (w^c_u)^T v^c_u + (w^x_l)^T v^x_l + (w^x_u)^T v^x_u − p ≤ 0,    (13.5)

a large (w^c_l)_i tends to imply that the elasticity variable (v^c_l)_i will be small in an optimal solution.
The reader may want to verify that the problem (13.4) is always feasible given the assumptions (13.2) and (13.3).
Please note that if a weight is negative, then the resulting problem (13.4) is unbounded.
The weights w^c_l, w^c_u, w^x_l, and w^x_u can be regarded as costs (penalties) for violating the associated constraints. Thus a higher weight implies that a higher priority is given to the satisfaction of the associated constraint.
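The effect of the weights in (13.4) can be illustrated on a deliberately tiny infeasible instance with one constraint, one variable bound, and illustrative weights; a real repair would of course be carried out by the LP solver rather than by the brute-force grid search used here.

```python
# Tiny instance of (13.4): constraint 6 <= x conflicts with bound x <= 4.
# v_c_l relaxes the constraint's lower bound; v_x_u relaxes the
# variable's upper bound. The weights are illustrative, not defaults.
w_c_l, w_x_u = 1.0, 3.0

def repair_cost(v_c_l, v_x_u):
    """Weighted violation if the bounds are relaxed by (v_c_l, v_x_u);
    returns None when the relaxed problem is still infeasible."""
    lo, hi = 6.0 - v_c_l, 4.0 + v_x_u
    if lo > hi:
        return None
    return w_c_l * v_c_l + w_x_u * v_x_u

# Brute-force search over a coarse grid of relaxations.
best = min(
    c for c in (repair_cost(a / 10.0, b / 10.0)
                for a in range(31) for b in range(31))
    if c is not None
)
```

Since the constraint's weight (1.0) is smaller than the bound's weight (3.0), the cheapest repair relaxes only the constraint's lower bound, by 2, for a total cost of 2.0.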
The main idea can now be presented as follows. If you have an infeasible problem, then form the problem (13.4) and optimize it. Next, inspect the optimal solution (v^c_l)*, (v^c_u)*, (v^x_l)*, and (v^x_u)* to problem (13.4). This solution provides a suggested relaxation of the bounds that will make the problem feasible.
Assume that p* denotes the optimal objective value of (13.4). The problem

  minimize    c^T x + c^f
  subject to  l^c ≤ Ax + v^c_l − v^c_u ≤ u^c,
              l^x ≤ x + v^x_l − v^x_u ≤ u^x,    (13.6)
              (w^c_l)^T v^c_l + (w^c_u)^T v^c_u + (w^x_l)^T v^x_l + (w^x_u)^T v^x_u ≤ p*,
              v^c_l, v^c_u, v^x_l, v^x_u ≥ 0

minimizes the true objective while making sure that the total weighted violation of the bounds is minimal, i.e. equal to p*.
13.2 Feasibility repair in MOSEK
MOSEK includes functionality that helps you construct the problem (13.4) simply by passing a set of weights to MOSEK. This can be used for linear, quadratic, and conic optimization problems, possibly having integer-constrained variables.
13.2.1 Usage of negative weights
As the problem (13.4) is presented, it does not make sense to use negative weights, since that makes the problem unbounded. Therefore, if the value of a weight is negative, MOSEK fixes the associated elasticity variable to zero. For example, if

  (w^c_l)_i < 0,

then MOSEK imposes the bound

  (v^c_l)_i ≤ 0.

This implies that the lower bound on the ith constraint will not be violated. (Clearly, this could also imply that the problem remains infeasible, so negative weights should be used with care.) In other words, associating a negative weight with a constraint tells MOSEK that the constraint should not be relaxed.
13.2.2 Automatic naming
MOSEK can automatically create a new problem of the form (13.4), starting from an existing problem, by adding the elasticity variables and the extra constraints. Specifically, the variables v^c_l, v^c_u, v^x_l, v^x_u, and p are appended to the existing variable vector x in their natural order, and the constraint (13.5) is appended to the constraints.
The new variables and constraints are automatically given names as follows:

- The names of the variables (v^c_l)_i and (v^c_u)_i are constructed from the name of the ith constraint. For instance, if the 9th original constraint is named c9, then by default (v^c_l)_9 and (v^c_u)_9 are given the names LO*c9 and UP*c9 respectively. If necessary, the character * can be replaced by a different string by changing the MSK_SPAR_FEASREPAIR_NAME_SEPARATOR parameter.
- The additional constraints

    l^x ≤ x + v^x_l − v^x_u ≤ u^x

  are given names as follows. There is exactly one such constraint per variable in the original problem, and thus the ith of these constraints is named after the ith variable in the original problem. For instance, if the first original variable is named x0, then the first of the above constraints is named MSK-x0. If necessary, the prefix MSK- can be replaced by a different string by changing the MSK_SPAR_FEASREPAIR_NAME_PREFIX parameter.
- The variable p is by default given the name WSUMVIOLVAR, and the constraint (13.5) is given the name WSUMVIOLCON. The substring WSUMVIOL can be replaced by a different string by changing the MSK_SPAR_FEASREPAIR_NAME_WSUMVIOL parameter.
13.2.3 An example
Consider the example linear optimization problem

  minimize    −10x_1 − 9x_2
  subject to  7/10 x_1 + 1 x_2   ≤ 630,
              1/2 x_1 + 5/6 x_2  ≤ 600,
              1 x_1 + 2/3 x_2    ≤ 708,    (13.7)
              1/10 x_1 + 1/4 x_2 ≤ 135,
              x_2 ≥ 650,
              x_1, x_2 ≥ 0.
This is an infeasible problem. Now suppose we wish to use MOSEK to suggest a modication to the
bounds that makes the problem feasible.
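That the problem is infeasible follows from a two-line calculation:

```python
# Why the problem is infeasible: with x1 >= 0, the fourth constraint
# (1/10)x1 + (1/4)x2 <= 135 implies x2 <= 540, contradicting x2 >= 650.
max_x2 = 135 / (1 / 4)        # = 540, attained at x1 = 0
infeasible = max_x2 < 650
```

A feasibility repair therefore has to relax either the fourth constraint or the lower bound on x_2.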
The command
mosek -d MSK_IPAR_FEASREPAIR_OPTIMIZE MSK_FEASREPAIR_OPTIMIZE_PENALTY -d MSK_IPAR_OPF_WRITE_SOLUTIONS MSK_ON feasrepair.lp -infrepo minv.opf
writes the problem (13.4) and its solution to an OPF-formatted file, in this case the file minv.opf.
The parameter MSK_IPAR_FEASREPAIR_OPTIMIZE controls whether the function returns the problem (13.4) or the problem (13.6). If MSK_IPAR_FEASREPAIR_OPTIMIZE is equal to MSK_FEASREPAIR_OPTIMIZE_NONE, then (13.4) is returned but the problem is not solved. For MSK_FEASREPAIR_OPTIMIZE_PENALTY the problem (13.4) is returned and solved. Finally, for MSK_FEASREPAIR_OPTIMIZE_COMBINED, (13.6) is returned and solved.
Chapter 14
Sensitivity analysis
14.1 Introduction
Given an optimization problem, it is often useful to obtain information about how the optimal objective value changes when the problem parameters are perturbed. For instance, assume that a bound represents the capacity of a machine. It may be possible to expand the capacity for a certain cost, and hence it is worthwhile to know what the value of additional capacity is. This is precisely the type of question sensitivity analysis deals with.
Analyzing how the optimal objective value changes when the problem data is changed is called sensitivity analysis.
14.2 Restrictions
Currently, sensitivity analysis is only available for continuous linear optimization problems. Moreover,
MOSEK can only deal with perturbations in bounds or objective coefficients.
14.3 References
The book [12] discusses the classical sensitivity analysis in Chapter 10 whereas the book [19, Chapter
19] presents a modern introduction to sensitivity analysis. Finally, it is recommended to read the short
paper [21] to avoid some of the pitfalls associated with sensitivity analysis.
14.4 Sensitivity analysis for linear problems
14.4.1 The optimal objective value function
Assume that we are given the problem

z(l^c, u^c, l^x, u^x, c) = minimize    c^T x
                           subject to  l^c ≤ Ax ≤ u^c,               (14.1)
                                       l^x ≤ x ≤ u^x,
and we want to know how the optimal objective value changes as l_i^c is perturbed. To answer
this question we define the perturbed problem for l_i^c as follows:

f_{l_i^c}(β) = minimize    c^T x
               subject to  l^c + βe_i ≤ Ax ≤ u^c,                    (14.2)
                           l^x ≤ x ≤ u^x,

where e_i is the ith column of the identity matrix. The function

f_{l_i^c}(β)                                                          (14.3)

shows the optimal objective value as a function of β. Note that a change in β corresponds to a
perturbation in l_i^c, and hence (14.3) shows the optimal objective value as a function of l_i^c.
It is possible to prove that the function (14.3) is a piecewise linear and convex function, i.e. the
function may look like the illustration in Figure 14.1.
Figure 14.1: The optimal value function f_{l_i^c}(β). Left: β = 0 is in the interior of a linearity interval.
Right: β = 0 is a breakpoint.
Clearly, if the function f_{l_i^c}(β) does not change much when β is changed, then we can conclude that
the optimal objective value is insensitive to changes in l_i^c. Therefore, we are interested in how f_{l_i^c}(β)
changes for small changes in β. Now define

f'_{l_i^c}(0)                                                         (14.4)

to be the so-called shadow price related to l_i^c. The shadow price specifies how the objective value
changes for small changes in β around zero. Moreover, we are interested in the so-called linearity
interval

[β_1, β_2]                                                            (14.5)

for which

f'_{l_i^c}(β) = f'_{l_i^c}(0).                                        (14.6)

To summarize, the sensitivity analysis provides a shadow price and the linearity interval in which
the shadow price is constant.
The reader may have noticed that we are sloppy in the definition of the shadow price. The reason
is that the shadow price is not defined in the right example in Figure 14.1, because the function f_{l_i^c}(β)
is not differentiable at β = 0. However, in that case we can define a left and a right shadow price and
a left and a right linearity interval.
In the above discussion we only discussed changes in l_i^c. We define the other optimal objective
value functions as follows:

f_{u_i^c}(β) = z(l^c, u^c + βe_i, l^x, u^x, c),   i = 1, ..., m,
f_{l_j^x}(β) = z(l^c, u^c, l^x + βe_j, u^x, c),   j = 1, ..., n,
f_{u_j^x}(β) = z(l^c, u^c, l^x, u^x + βe_j, c),   j = 1, ..., n,      (14.7)
f_{c_j}(β)   = z(l^c, u^c, l^x, u^x, c + βe_j),   j = 1, ..., n.

Given these definitions it should be clear how linearity intervals and shadow prices are defined for the
parameters u_i^c etc.
14.4.1.1 Equality constraints

In MOSEK a constraint can be specified as either an equality constraint or a ranged constraint.
Suppose constraint i is an equality constraint. We then define the optimal value function for constraint
i by

f_{e_i^c}(β) = z(l^c + βe_i, u^c + βe_i, l^x, u^x, c).                (14.8)

Thus for an equality constraint the upper and the lower bound (which are equal) are perturbed simultaneously.
From the point of view of MOSEK sensitivity analysis, a ranged constraint with l_i^c = u_i^c therefore
differs from an equality constraint.
14.4.2 The basis type sensitivity analysis
The classical sensitivity analysis discussed in most textbooks about linear optimization, e.g. [12,
Chapter 10], is based on an optimal basic solution or, equivalently, on an optimal basis. This method
may produce misleading results [19, Chapter 19] but is computationally cheap. Therefore, and for
historical reasons, this method is available in MOSEK.
We will now briefly discuss the basis type sensitivity analysis. Given an optimal basic solution,
which provides a partition of variables into basic and non-basic variables, the basis type sensitivity
analysis computes the linearity interval [β_1, β_2] such that the basis remains optimal for the perturbed
problem. A shadow price associated with the linearity interval is also computed. However, it is well
known that an optimal basic solution may not be unique, and therefore the result depends on the
optimal basic solution employed in the sensitivity analysis. This implies that the computed interval is
only a subset of the largest interval for which the shadow price is constant. Furthermore, the optimal
objective value function might have a breakpoint at β = 0. In this case the basis type sensitivity
method will only provide a subset of either the left or the right linearity interval.
In summary, the basis type sensitivity analysis is computationally cheap but does not provide
complete information. Hence, the results of the basis type sensitivity analysis should be used with
care.
14.4.3 The optimal partition type sensitivity analysis
Another method for computing the complete linearity interval is called the optimal partition type
sensitivity analysis. The main drawback of the optimal partition type sensitivity analysis is that it is
computationally expensive. This type of sensitivity analysis is currently provided as an experimental
feature in MOSEK.
Given optimal primal and dual solutions to (14.1), i.e. x* and ((s_l^c)*, (s_u^c)*, (s_l^x)*, (s_u^x)*), the
optimal objective value is given by

z* := c^T x*.                                                         (14.9)
The left and right shadow prices σ_1 and σ_2 for l_i^c are given by the pair of optimization problems

σ_1 = minimize    e_i^T s_l^c
      subject to  A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c,
                  (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x = z*,     (14.10)
                  s_l^c, s_u^c, s_l^x, s_u^x ≥ 0
and

σ_2 = maximize    e_i^T s_l^c
      subject to  A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c,
                  (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x = z*,     (14.11)
                  s_l^c, s_u^c, s_l^x, s_u^x ≥ 0.
The above two optimization problems make it easy to interpret the shadow price. Indeed, assume
that ((s_l^c)*, (s_u^c)*, (s_l^x)*, (s_u^x)*) is an arbitrary optimal dual solution; then

(s_l^c)*_i ∈ [σ_1, σ_2].                                              (14.12)
Next, the linearity interval [β_1, β_2] for l_i^c is computed by solving the two optimization problems

β_1 = minimize    β
      subject to  l^c + βe_i ≤ Ax ≤ u^c,
                  c^T x - σ_1 β = z*,                                 (14.13)
                  l^x ≤ x ≤ u^x,

and

β_2 = maximize    β
      subject to  l^c + βe_i ≤ Ax ≤ u^c,
                  c^T x - σ_2 β = z*,                                 (14.14)
                  l^x ≤ x ≤ u^x.
The linearity intervals and shadow prices for u_i^c, l_j^x, and u_j^x can be computed in a similar way to
how they are computed for l_i^c.
The left and right shadow prices for c_j, denoted σ_1 and σ_2 respectively, are given by the pair of
optimization problems

σ_1 = minimize    e_j^T x
      subject to  l^c ≤ Ax ≤ u^c,
                  c^T x = z*,                                         (14.15)
                  l^x ≤ x ≤ u^x

and

σ_2 = maximize    e_j^T x
      subject to  l^c ≤ Ax ≤ u^c,
                  c^T x = z*,                                         (14.16)
                  l^x ≤ x ≤ u^x.
Once again, the above two optimization problems make it easy to interpret the shadow prices.
Indeed, assume that x* is an arbitrary primal optimal solution; then

x*_j ∈ [σ_1, σ_2].                                                    (14.17)
The linearity interval [β_1, β_2] for c_j is computed as follows:

β_1 = minimize    β
      subject to  A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c + βe_j,
                  (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x - σ_1 β = z*,   (14.18)
                  s_l^c, s_u^c, s_l^x, s_u^x ≥ 0
and

β_2 = maximize    β
      subject to  A^T (s_l^c - s_u^c) + s_l^x - s_u^x = c + βe_j,
                  (l^c)^T s_l^c - (u^c)^T s_u^c + (l^x)^T s_l^x - (u^x)^T s_u^x - σ_2 β = z*,   (14.19)
                  s_l^c, s_u^c, s_l^x, s_u^x ≥ 0.
14.4.4 An example
As an example we will use the following transportation problem. Consider the problem of minimizing
the transportation cost between a number of production plants and stores. Each plant supplies a
number of goods and each store has a given demand that must be met. Supply, demand and cost of
transportation per unit are shown in Figure 14.2.
If we denote the number of transported goods from location i to location j by x_ij, the problem can
be formulated as the linear optimization problem

minimize  1 x_11 + 2 x_12 + 5 x_23 + 2 x_24 + 1 x_31 + 2 x_33 + 1 x_34    (14.20)

subject to

x_11 + x_12          ≤  400,
x_23 + x_24          ≤ 1200,
x_31 + x_33 + x_34   ≤ 1000,
x_11 + x_31          =  800,
x_12                 =  100,                                          (14.21)
x_23 + x_33          =  500,
x_24 + x_34          =  500,
x_11, x_12, x_23, x_24, x_31, x_33, x_34 ≥ 0.
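Before turning to the sensitivity results, it may help to check a concrete shipping plan against (14.20)-(14.21). The plan below is an assumption chosen for illustration; the script only verifies that it satisfies every constraint and evaluates its transportation cost.

```python
# Unit costs from (14.20) and a candidate plan (assumed, for illustration only).
cost = {"x11": 1, "x12": 2, "x23": 5, "x24": 2, "x31": 1, "x33": 2, "x34": 1}
x = {"x11": 300, "x12": 100, "x23": 0, "x24": 500, "x31": 500, "x33": 500, "x34": 0}

# Supply constraints (<=) and demand constraints (=) from (14.21).
assert x["x11"] + x["x12"] <= 400
assert x["x23"] + x["x24"] <= 1200
assert x["x31"] + x["x33"] + x["x34"] <= 1000
assert x["x11"] + x["x31"] == 800
assert x["x12"] == 100
assert x["x23"] + x["x33"] == 500
assert x["x24"] + x["x34"] == 500
assert all(v >= 0 for v in x.values())

total = sum(cost[k] * x[k] for k in x)
print("transportation cost:", total)  # 3000
```

A solver is still needed to certify optimality; this check only establishes feasibility and the objective value of the candidate plan.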
The basis type and the optimal partition type sensitivity results for the transportation problem are
shown in Tables 14.1 and 14.2 respectively.
Looking at the results from the optimal partition type sensitivity analysis, we see that for
constraint number 1 we have σ_1 ≠ σ_2 and β_1 ≠ β_2. Therefore, we have a left linearity interval of
[-300, 0] and a right interval of [0, 500]. The corresponding left and right shadow prices are 3 and 1
respectively. This implies that if the upper bound on constraint 1 increases by

β ∈ [0, β_2] = [0, 500]                                               (14.22)
Figure 14.2: Supply, demand and cost of transportation. Plants 1, 2 and 3 have supplies 400, 1200
and 1000; stores 1, 2, 3 and 4 have demands 800, 100, 500 and 500; the available routes (1,1), (1,2),
(2,3), (2,4), (3,1), (3,3) and (3,4) have unit transportation costs 1, 2, 5, 2, 1, 2 and 1 respectively.
Basis type

Con.      β_1        β_2     σ_1    σ_2
1      -300.00      0.00    3.00   3.00
2      -700.00        +∞    0.00   0.00
3      -500.00      0.00    3.00   3.00
4        -0.00    500.00    4.00   4.00
5        -0.00    300.00    5.00   5.00
6        -0.00    700.00    5.00   5.00
7      -500.00    700.00    2.00   2.00

Var.      β_1        β_2     σ_1    σ_2
x11         -∞    300.00    0.00   0.00
x12         -∞    100.00    0.00   0.00
x23         -∞      0.00    0.00   0.00
x24         -∞    500.00    0.00   0.00
x31         -∞    500.00    0.00   0.00
x33         -∞    500.00    0.00   0.00
x34      -0.00    500.00    2.00   2.00

Optimal partition type

Con.      β_1        β_2     σ_1    σ_2
1      -300.00    500.00    3.00   1.00
2      -700.00        +∞    0.00   0.00
3      -500.00    500.00    3.00   1.00
4      -500.00    500.00    2.00   4.00
5      -100.00    300.00    3.00   5.00
6      -500.00    700.00    3.00   5.00
7      -500.00    700.00    2.00   2.00

Var.      β_1        β_2     σ_1    σ_2
x11         -∞    300.00    0.00   0.00
x12         -∞    100.00    0.00   0.00
x23         -∞    500.00    0.00   2.00
x24         -∞    500.00    0.00   0.00
x31         -∞    500.00    0.00   0.00
x33         -∞    500.00    0.00   0.00
x34         -∞    500.00    0.00   2.00

Table 14.1: Ranges and shadow prices related to bounds on constraints and variables. Top: results
for the basis type sensitivity analysis. Bottom: results for the optimal partition type sensitivity analysis.
Basis type

Var.      β_1      β_2      σ_1      σ_2
c_1         -∞     3.00   300.00   300.00
c_2         -∞       +∞   100.00   100.00
c_3      -2.00       +∞     0.00     0.00
c_4         -∞     2.00   500.00   500.00
c_5      -3.00       +∞   500.00   500.00
c_6         -∞     2.00   500.00   500.00
c_7      -2.00       +∞     0.00     0.00

Optimal partition type

Var.      β_1      β_2      σ_1      σ_2
c_1         -∞     3.00   300.00   300.00
c_2         -∞       +∞   100.00   100.00
c_3      -2.00       +∞     0.00     0.00
c_4         -∞     2.00   500.00   500.00
c_5      -3.00       +∞   500.00   500.00
c_6         -∞     2.00   500.00   500.00
c_7      -2.00       +∞     0.00     0.00

Table 14.2: Ranges and shadow prices related to the objective coefficients. Top: results for the basis
type sensitivity analysis. Bottom: results for the optimal partition type sensitivity analysis.
then the optimal objective value will decrease by the value

σ_2 β = 1 · β.                                                        (14.23)

Correspondingly, if the upper bound on constraint 1 is decreased by

β ∈ [0, 300]                                                          (14.24)

then the optimal objective value will increase by the value

σ_1 β = 3 · β.                                                        (14.25)
14.5 Sensitivity analysis with the command line tool
A sensitivity analysis can be performed with the MOSEK command line tool using the command

mosek myproblem.mps -sen sensitivity.ssp

where sensitivity.ssp is a file in the format described in the next section. The ssp file describes
on which parts of the problem the sensitivity analysis should be performed.
By default results are written to a file named myproblem.sen. If necessary, this filename can be
changed by setting the
MSK_SPAR_SENSITIVITY_RES_FILE_NAME
parameter. By default a basis type sensitivity analysis is performed. However, the type of sensitivity
analysis (basis or optimal partition) can be changed by setting the parameter
MSK_IPAR_SENSITIVITY_TYPE
appropriately. The following values are accepted for this parameter:
MSK_SENSITIVITY_TYPE_BASIS
MSK_SENSITIVITY_TYPE_OPTIMAL_PARTITION
It is also possible to use the command line
mosek myproblem.mps -d MSK_IPAR_SENSITIVITY_ALL MSK_ON
in which case a sensitivity analysis on all the parameters is performed.
14.5.1 Sensitivity analysis specification file

MOSEK employs an MPS-like file format to specify on which model parameters the sensitivity analysis
should be performed. As the optimal partition type sensitivity analysis can be computationally
expensive, it is important to limit the sensitivity analysis.
The format of the sensitivity specification file is shown in Figure 14.3, where capitalized names are
keywords, and names in brackets are names of the constraints and variables to be included in the
analysis.
The sensitivity specification file has three sections, i.e.
BOUNDS CONSTRAINTS: Specifies on which bounds on constraints the sensitivity analysis should
be performed.
* A comment
BOUNDS CONSTRAINTS
U|L|LU [cname1]
U|L|LU [cname2]-[cname3]
BOUNDS VARIABLES
U|L|LU [vname1]
U|L|LU [vname2]-[vname3]
OBJECTIVE VARIABLES
[vname1]
[vname2]-[vname3]
Figure 14.3: The sensitivity analysis le format.
BOUNDS VARIABLES: Specifies on which bounds on variables the sensitivity analysis should be
performed.
OBJECTIVE VARIABLES: Specifies on which objective coefficients the sensitivity analysis should
be performed.
A line in the body of a section must begin with a whitespace. In the BOUNDS sections one of the keys
L, U, and LU must appear next. These keys specify whether the sensitivity analysis is performed on
the lower bound, on the upper bound, or on both the lower and the upper bound respectively. Next,
a single constraint (variable) or a range of constraints (variables) is specified.
Recall from Section 14.4.1.1 that equality constraints are handled in a special way. Sensitivity
analysis of an equality constraint can be specified with either L, U, or LU, all indicating the same,
namely that the upper and lower bounds (which are equal) are perturbed simultaneously.
As an example consider
BOUNDS CONSTRAINTS
L "cons1"
U "cons2"
LU "cons3"-"cons6"
which requests that sensitivity analysis is performed on the lower bound of the constraint named
cons1, on the upper bound of the constraint named cons2, and on both the lower and the upper
bounds of the constraints named cons3 to cons6.
It is allowed to use indexes instead of names, for instance
BOUNDS CONSTRAINTS
L "cons1"
U 2
LU 3 - 6
The character * indicates that the line contains a comment and is ignored.
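The layout described above (section headers in column 1, indented body lines, '*' comments) is simple enough to read with a few lines of code. The following sketch is a hypothetical helper, not part of MOSEK; it assumes '*' always starts a comment, as in the examples in this chapter.

```python
def parse_ssp(text):
    """Parse a sensitivity specification (.ssp) file into a dict of sections.

    BOUNDS entries become (key, item) pairs where key is 'L', 'U' or 'LU' and
    item is a quoted name, an index, or a 'first - last' range, kept verbatim.
    """
    sections = {"BOUNDS CONSTRAINTS": [], "BOUNDS VARIABLES": [],
                "OBJECTIVE VARIABLES": []}
    current = None
    for raw in text.splitlines():
        line = raw.split("*", 1)[0].rstrip()  # '*' starts a comment
        if not line.strip():
            continue
        if not raw[0].isspace():  # section headers start in column 1
            current = line.strip()
            continue
        tokens = line.split()
        if current in ("BOUNDS CONSTRAINTS", "BOUNDS VARIABLES"):
            sections[current].append((tokens[0], " ".join(tokens[1:])))
        elif current == "OBJECTIVE VARIABLES":
            sections[current].append(" ".join(tokens))
    return sections

example = """\
* A comment
BOUNDS CONSTRAINTS
 L "cons1"
 U 2
 LU 3 - 6
OBJECTIVE VARIABLES
 "x11"
"""
parsed = parse_ssp(example)
print(parsed["BOUNDS CONSTRAINTS"])
```

Such a reader is only useful for tooling around MOSEK (e.g. generating or checking .ssp files); MOSEK itself reads the file directly via the -sen option.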
14.5.2 Example: Sensitivity analysis from command line
As an example consider the sensitivity.ssp le shown in Figure 14.4.
The command
* Comment 1
BOUNDS CONSTRAINTS
U "c1" * Analyze upper bound for constraint named c1
U 2 * Analyze upper bound for the second constraint
U 3-5 * Analyze upper bound for constraint number 3 to number 5
BOUNDS VARIABLES
L 2-4 * This section specifies which bounds on variables should be analyzed
L "x11"
OBJECTIVE VARIABLES
"x11" * This section specifies which objective coefficients should be analyzed
2
Figure 14.4: Example of the sensitivity le format.
mosek transport.lp -sen sensitivity.ssp -d MSK_IPAR_SENSITIVITY_TYPE MSK_SENSITIVITY_TYPE_BASIS
produces the transport.sen file shown below.
BOUNDS CONSTRAINTS
INDEX NAME BOUND LEFTRANGE RIGHTRANGE LEFTPRICE RIGHTPRICE
0 c1 UP -6.574875e-18 5.000000e+02 1.000000e+00 1.000000e+00
2 c3 UP -6.574875e-18 5.000000e+02 1.000000e+00 1.000000e+00
3 c4 FIX -5.000000e+02 6.574875e-18 2.000000e+00 2.000000e+00
4 c5 FIX -1.000000e+02 6.574875e-18 3.000000e+00 3.000000e+00
5 c6 FIX -5.000000e+02 6.574875e-18 3.000000e+00 3.000000e+00
BOUNDS VARIABLES
INDEX NAME BOUND LEFTRANGE RIGHTRANGE LEFTPRICE RIGHTPRICE
2 x23 LO -6.574875e-18 5.000000e+02 2.000000e+00 2.000000e+00
3 x24 LO -inf 5.000000e+02 0.000000e+00 0.000000e+00
4 x31 LO -inf 5.000000e+02 0.000000e+00 0.000000e+00
0 x11 LO -inf 3.000000e+02 0.000000e+00 0.000000e+00
OBJECTIVE VARIABLES
INDEX NAME LEFTRANGE RIGHTRANGE LEFTPRICE RIGHTPRICE
0 x11 -inf 1.000000e+00 3.000000e+02 3.000000e+02
2 x23 -2.000000e+00 +inf 0.000000e+00 0.000000e+00
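For post-processing, the whitespace-separated report above can be loaded back into a program. The sketch below is an illustrative reader written against the sample output shown here (it assumes every section starts with BOUNDS or OBJECTIVE followed by one header line); it is not an official specification of the .sen format.

```python
def parse_sen(text):
    """Read a '-sen' report into {section: [row dict, ...]}.

    Numeric tokens (including -inf/+inf) are converted to float; everything
    else, e.g. names and bound keys, is kept as a string.
    """
    result, section, header = {}, None, None
    for line in text.splitlines():
        if not line.strip():
            continue
        if line.startswith(("BOUNDS", "OBJECTIVE")):   # section title line
            section, header = line.strip(), None
            result[section] = []
        elif header is None:                            # column header line
            header = line.split()
        else:                                           # data row
            row = {}
            for key, tok in zip(header, line.split()):
                try:
                    row[key] = float(tok)
                except ValueError:
                    row[key] = tok
            result[section].append(row)
    return result

report = """\
BOUNDS VARIABLES
INDEX NAME BOUND LEFTRANGE RIGHTRANGE LEFTPRICE RIGHTPRICE
3 x24 LO -inf 5.000000e+02 0.000000e+00 0.000000e+00
"""
rows = parse_sen(report)["BOUNDS VARIABLES"]
print(rows[0]["NAME"], rows[0]["RIGHTRANGE"])
```

Note that float("-inf") handles the -inf tokens that MOSEK writes for unbounded ranges.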
14.5.3 Controlling log output
Setting the parameter
MSK_IPAR_LOG_SENSITIVITY
to 1 or 0 (default) controls whether or not the results from sensitivity calculations are printed to the
message stream.
The parameter
MSK_IPAR_LOG_SENSITIVITY_OPT
controls the amount of debug information on internal calculations from the sensitivity analysis.
Appendix A
MOSEK command line tool
reference
A.1 Introduction
The MOSEK command line tool is used to solve optimization problems from the operating system
command line. It is invoked as follows:

mosek [options] [filename]

where both [options] and [filename] are optional arguments. [filename] is a file describing the
optimization problem and is either an MPS file or an AMPL nl file. [options] consists of command line
arguments that modify the behavior of MOSEK.
A.2 Command line arguments
The following list shows the possible command-line arguments for MOSEK:
-a MOSEK runs in AMPL mode.
-AMPL The input file is an AMPL nl file.
-basi name Input basis solution file name.
-baso name Output basis solution file name.
-brni name name is the filename of a variable branch order file to be read.
-brno name name is the filename of a variable branch order file to be written.
-d name val Assigns the value val to the parameter named name.
-dbgmem name Name of memory debug file. Write memory debug information to the file name.
-f Complete license information is printed.
-h Prints out help information for MOSEK.
-inti name Input integer solution file name.
-into name Output integer solution file name.
-itri name Input interior point solution file name.
-itro name Output interior point solution file name.
-info name Infeasible subproblem output file name.
-infrepo name Feasibility reparation output file name.
-pari name Input parameter file name. Equivalent to -p.
-paro name Output parameter file name.
-L name Name of the license file.
-l name Name of the license file.
-max Forces MOSEK to maximize the objective.
-min Forces MOSEK to minimize the objective.
-n Ignore errors in subsequent parameter settings.
-p name New parameter settings are read from a file named name.
-q name Name of an optional log file.
-r If the option is present, the program returns -1 if an error occurred, otherwise 0.
-rout name If the option is present, the program writes the return code to the file name.
-sen file Perform sensitivity analysis based on file.
-silent As little information as possible is sent to the terminal.
-v The MOSEK version number is printed and no optimization is performed.
-w If this option is included, then MOSEK will wait for a license.
-= Lists the parameter database.
-? Same as the -h option.
A.3 The parameter file
Occasionally system or algorithmic parameters in MOSEK should be changed by the user. One way
of changing parameters is to use a so-called parameter file, which is a plain text file. It can for
example have the format
BEGIN MOSEK
% This is a comment.
% The subsequent line tells MOSEK that an optimal
% basis should be computed by the interior-point optimizer.
MSK_IPAR_INTPNT_BASIS MSK_BI_ALWAYS
MSK_DPAR_INTPNT_TOL_PFEAS 1.0e-9
END MOSEK
Note that the file begins with a BEGIN MOSEK line and is terminated with an END MOSEK line; this is required.
Moreover, everything that appears after a % is considered to be a comment and is ignored. Similarly,
empty lines are ignored. The important lines are those which begin with a valid MOSEK parameter
name such as MSK_IPAR_INTPNT_BASIS. Immediately after the parameter name follows the new value
for the parameter. All the MOSEK parameter names are listed in Appendix H.
A.3.1 Using the parameter file
The parameter file can be given any name, but let us assume it has the name mosek.par. If MOSEK
should use the parameter settings in that file, then -p mosek.par should be on the command line
when MOSEK is invoked. An example of such a command line is
mosek -p mosek.par afiro.mps
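The parameter file syntax just described (BEGIN/END MOSEK delimiters, % comments, name-value pairs) can be sketched as a small reader, for example when a custom tool wants to validate a mosek.par file before invoking MOSEK. This is hypothetical helper code, not part of MOSEK:

```python
def parse_parameter_file(text):
    """Parse a MOSEK-style parameter file into a {name: value} dict.

    Follows the rules above: only lines between BEGIN MOSEK and END MOSEK
    count, '%' starts a comment, and blank lines are ignored.
    """
    params, inside = {}, False
    for raw in text.splitlines():
        line = raw.split("%", 1)[0].strip()  # drop comments
        if not line:
            continue
        if line == "BEGIN MOSEK":
            inside = True
        elif line == "END MOSEK":
            inside = False
        elif inside:
            name, value = line.split(None, 1)
            params[name] = value
    return params

text = """\
BEGIN MOSEK
% This is a comment.
MSK_IPAR_INTPNT_BASIS MSK_BI_ALWAYS
MSK_DPAR_INTPNT_TOL_PFEAS 1.0e-9
END MOSEK
"""
print(parse_parameter_file(text))
```

Values are kept as strings since MOSEK parameters mix symbolic constants, integers and doubles; a real tool would validate each value against the parameter's type.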
Appendix B
The MPS file format
MOSEK supports the standard MPS format with some extensions. For a detailed description of the
MPS format the book by Nazareth [18] is a good reference.
B.1 The MPS file format
The version of the MPS format supported by MOSEK allows specification of an optimization problem
of the form

l^c ≤ Ax + q(x) ≤ u^c,
l^x ≤ x ≤ u^x,                                                        (B.1)
x ∈ C,
x_J integer,

where

x ∈ R^n is the vector of decision variables.
A ∈ R^(m×n) is the constraint matrix.
l^c ∈ R^m is the lower limit on the activity for the constraints.
u^c ∈ R^m is the upper limit on the activity for the constraints.
l^x ∈ R^n is the lower limit on the activity for the variables.
u^x ∈ R^n is the upper limit on the activity for the variables.
q: R^n -> R^m is a vector of quadratic functions. Hence,

q_i(x) = 1/2 x^T Q^i x

where it is assumed that

Q^i = (Q^i)^T.                                                        (B.2)

Please note the explicit 1/2 in the quadratic term and that Q^i is required to be symmetric.
C is a convex cone.
J ⊆ {1, 2, ..., n} is an index set of the integer-constrained variables.
An MPS file with one row and one column can be illustrated like this:
* 1 2 3 4 5 6
*23456789012345678901234567890123456789012345678901234567890
NAME [name]
OBJSENSE
[objsense]
OBJNAME
[objname]
ROWS
? [cname1]
COLUMNS
[vname1] [cname1] [value1] [vname3] [value2]
RHS
[name] [cname1] [value1] [cname2] [value2]
RANGES
[name] [cname1] [value1] [cname2] [value2]
QSECTION [cname1]
[vname1] [vname2] [value1] [vname3] [value2]
BOUNDS
?? [name] [vname1] [value1]
CSECTION [kname1] [value1] [ktype]
[vname1]
ENDATA
Here the names in capitals are keywords of the MPS format and names in brackets are custom-defined
names or values. A couple of notes on the structure:
Fields: All items surrounded by brackets appear in fields. The fields named valueN are numerical
values. Hence, they must have the format
[+|-]XXXXXXX.XXXXXX[[e|E][+|-]XXX]
where
X = [0|1|2|3|4|5|6|7|8|9].
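One possible transcription of that field pattern into a regular expression is sketched below. It is an assumption that the sign, the fractional part and the exponent are each optional; the pattern above does not state this explicitly.

```python
import re

# Regex sketch of [+|-]XXXXXXX.XXXXXX[[e|E][+|-]XXX] with sign, fraction
# and exponent treated as optional parts.
NUMBER = re.compile(r"^[+-]?\d+(\.\d*)?([eE][+-]?\d+)?$")

for token in ("630.0", "-10.0", "+1.5e+03", "0.8333333333", "abc"):
    print(token, bool(NUMBER.match(token)))
```

All the numeric fields appearing in the example files of this appendix match this pattern.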
Sections: The MPS file consists of several sections where the names in capitals indicate the beginning
of a new section. For example, COLUMNS denotes the beginning of the columns section.
Comments: Lines starting with an * are comment lines and are ignored by MOSEK.
Keys: The question marks represent keys to be specified later.
Extensions: The sections QSECTION and CSECTION are MOSEK-specific extensions of the MPS format.
The standard MPS format is a fixed format, i.e. everything in the MPS file must be within certain
fixed positions. MOSEK also supports a free format. See Section B.5 for details.
B.1.1 An example
A concrete example of an MPS file is presented below:
NAME EXAMPLE
OBJSENSE
MIN
ROWS
N obj
L c1
L c2
L c3
L c4
COLUMNS
x1 obj -10.0 c1 0.7
x1 c2 0.5 c3 1.0
x1 c4 0.1
x2 obj -9.0 c1 1.0
x2 c2 0.8333333333 c3 0.66666667
x2 c4 0.25
RHS
rhs c1 630.0 c2 600.0
rhs c3 708.0 c4 135.0
ENDATA
Subsequently each individual section in the MPS format is discussed.
B.1.2 NAME
In this section a name ([name]) is assigned to the problem.
B.1.3 OBJSENSE (optional)
This is an optional section that can be used to specify the sense of the objective function. The OBJSENSE
section contains one line at most which can be one of the following
MIN
MINIMIZE
MAX
MAXIMIZE
It should be obvious what the implication is of each of these four lines.
B.1.4 OBJNAME (optional)
This is an optional section that can be used to specify the name of the row that is used as objective
function. The OBJNAME section contains one line at most which has the form
objname
objname should be a valid row name.
B.1.5 ROWS
A record in the ROWS section has the form
? [cname1]
where the requirements for the fields are as follows:

Field     Starting position  Maximum width  Required  Description
?         2                  1              Yes       Constraint key
[cname1]  5                  8              Yes       Constraint name

Hence, in this section each constraint is assigned a unique name denoted by [cname1]. Please note
that [cname1] starts in position 5 and the field can be at most 8 characters wide. An initial key (?)
must be present to specify the type of the constraint. The key can have the values E, G, L, or N with
the following interpretation:

Constraint type   l_i^c    u_i^c
E                 finite   l_i^c
G                 finite   +∞
L                 -∞       finite
N                 -∞       +∞

In the MPS format an objective vector is not specified explicitly, but one of the constraints having the
key N will be used as the objective vector c. In general, if multiple N type constraints are specified,
then the first will be used as the objective vector c.
B.1.6 COLUMNS
In this section the elements of A are specied using one or more records having the form
[vname1] [cname1] [value1] [cname2] [value2]
where the requirements for each field are as follows:

Field     Starting position  Maximum width  Required  Description
[vname1]  5                  8              Yes       Variable name
[cname1]  15                 8              Yes       Constraint name
[value1]  25                 12             Yes       Numerical value
[cname2]  40                 8              No        Constraint name
[value2]  50                 12             No        Numerical value

Hence, a record specifies one or two elements a_ij of A using the principle that [vname1] and [cname1]
determine j and i respectively. Please note that [cname1] must be a constraint name specified in
the ROWS section. Finally, [value1] denotes the numerical value of a_ij. Another optional element
is specified by [cname2] and [value2] for the variable specified by [vname1]. Some important
comments are:
All elements belonging to one variable must be grouped together.
Zero elements of A should not be specified.
At least one element for each variable should be specified.
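Because the standard format is positional, a COLUMNS record can be read simply by slicing the line at the positions given in the table above. The sketch below is a hypothetical helper illustrating this, not MOSEK code (MOSEK's own reader also supports the free format of Section B.5):

```python
# Field layout (1-based starting position, maximum width) from the table above.
FIELDS = [("vname1", 5, 8), ("cname1", 15, 8), ("value1", 25, 12),
          ("cname2", 40, 8), ("value2", 50, 12)]

def parse_columns_record(line):
    """Slice one fixed-format COLUMNS record into its fields (None if absent)."""
    rec = {}
    for name, start, width in FIELDS:
        field = line[start - 1:start - 1 + width].strip()
        rec[name] = field or None
    return rec

# Build a sample record so that each field starts exactly at its position.
line = ("    " + "x1".ljust(10) + "obj".ljust(10)
        + "-10.0".ljust(15) + "c1".ljust(10) + "0.7")
print(parse_columns_record(line))
```

A record with only one element would simply leave the cname2/value2 fields blank, and the slicer returns None for them.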
B.1.7 RHS (optional)
A record in this section has the format
[name] [cname1] [value1] [cname2] [value2]
where the requirements for each field are as follows:

Field     Starting position  Maximum width  Required  Description
[name]    5                  8              Yes       Name of the RHS vector
[cname1]  15                 8              Yes       Constraint name
[value1]  25                 12             Yes       Numerical value
[cname2]  40                 8              No        Constraint name
[value2]  50                 12             No        Numerical value

The interpretation of a record is that [name] is the name of the RHS vector to be specified. In general,
several vectors can be specified. [cname1] denotes a constraint name previously specified in the ROWS
section. Now, assume that this name has been assigned to the ith constraint and v_1 denotes the value
specified by [value1]; then the interpretation of v_1 is:

Constraint type   l_i^c       u_i^c
E                 v_1         v_1
G                 v_1         unchanged
L                 unchanged   v_1
N                 -           -
An optional second element is specified by [cname2] and [value2] and is interpreted in the same way.
Please note that it is not necessary to specify zero elements, because unspecified elements are assumed to be zero.
B.1.8 RANGES (optional)
A record in this section has the form
[name] [cname1] [value1] [cname2] [value2]
where the requirements for each field are as follows:

Field     Starting position  Maximum width  Required  Description
[name]    5                  8              Yes       Name of the RANGE vector
[cname1]  15                 8              Yes       Constraint name
[value1]  25                 12             Yes       Numerical value
[cname2]  40                 8              No        Constraint name
[value2]  50                 12             No        Numerical value

The records in this section are used to modify the bound vectors for the constraints, i.e. the values
in l^c and u^c. A record has the following interpretation: [name] is the name of the RANGE vector and
[cname1] is a valid constraint name. Assume that [cname1] is assigned to the ith constraint and let
v_1 be the value specified by [value1]; then a record has the interpretation:

Constraint type   Sign of v_1   l_i^c            u_i^c
E                 -             u_i^c + v_1      unchanged
E                 +             unchanged        l_i^c + v_1
G                 - or +        unchanged        l_i^c + |v_1|
L                 - or +        u_i^c - |v_1|    unchanged
N                 -             -                -
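The interpretation table for RANGES records can be written out as a small function. This is an illustrative transcription of the table above (hypothetical helper code, not MOSEK's reader):

```python
import math

def apply_range(ctype, lc, uc, v1):
    """Apply a RANGES record with value v1 to the bounds (lc, uc) of a
    constraint of type ctype, following the interpretation table above."""
    if ctype == "E":
        # For equalities the sign of v1 decides which bound moves.
        return (uc + v1, uc) if v1 < 0 else (lc, lc + v1)
    if ctype == "G":
        return lc, lc + abs(v1)
    if ctype == "L":
        return uc - abs(v1), uc
    return lc, uc  # N rows are unaffected

inf = math.inf
print(apply_range("G", 0.0, inf, -5.0))   # a >=-row becomes ranged: (0.0, 5.0)
print(apply_range("L", -inf, 10.0, 3.0))  # a <=-row becomes ranged: (7.0, 10.0)
print(apply_range("E", 4.0, 4.0, 2.0))    # an =-row becomes ranged: (4.0, 6.0)
```

In each case a one-sided or fixed constraint is turned into a ranged constraint whose width is |v_1|.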
B.1.9 QSECTION (optional)
Within the QSECTION the label [cname1] must be a constraint name previously specied in the ROWS
section. The label [cname1] denotes the constraint to which the quadratic term belongs. A record in
the QSECTION has the form
[vname1] [vname2] [value1] [vname3] [value2]
where the requirements for each field are:

Field     Starting position  Maximum width  Required  Description
[vname1]  5                  8              Yes       Variable name
[vname2]  15                 8              Yes       Variable name
[value1]  25                 12             Yes       Numerical value
[vname3]  40                 8              No        Variable name
[value2]  50                 12             No        Numerical value

A record specifies one or two elements in the lower triangular part of the Q^i matrix, where [cname1]
specifies the i. Hence, if the names [vname1] and [vname2] have been assigned to the kth and jth
variable, then Q^i_kj is assigned the value given by [value1]. An optional second element is specified in
the same way by the fields [vname1], [vname3], and [value2].
The example

minimize    -x_2 + 1/2 (2 x_1^2 - 2 x_1 x_3 + 0.2 x_2^2 + 2 x_3^2)
subject to  x_1 + x_2 + x_3 ≥ 1,
            x ≥ 0

has the following MPS file representation:
NAME qoexp
ROWS
N obj
G c1
COLUMNS
x1 c1 1
x2 obj -1
x2 c1 1
x3 c1 1
RHS
rhs c1 1
QSECTION obj
x1 x1 2
x1 x3 -1
x2 x2 0.2
x3 x3 2
ENDATA
Regarding the QSECTIONs please note that:
Only one QSECTION is allowed for each constraint.
The QSECTIONs can appear in an arbitrary order after the COLUMNS section.
All variable names occurring in a QSECTION must already be specified in the COLUMNS section.
All entries specified in a QSECTION are assumed to belong to the lower triangular part of the
quadratic term Q^i.
B.1.10 BOUNDS (optional)
In the BOUNDS section changes to the default bounds vectors l^x and u^x are specified. The default
bounds vectors are l^x = 0 and u^x = +∞. Moreover, it is possible to specify several sets of bound
vectors. A record in this section has the form

?? [name] [vname1] [value1]

where the requirements for each field are:

Field     Starting position  Maximum width  Required  Description
??        2                  2              Yes       Bound key
[name]    5                  8              Yes       Name of the BOUNDS vector
[vname1]  15                 8              Yes       Variable name
[value1]  25                 12             No        Numerical value

Hence, a record in the BOUNDS section has the following interpretation: [name] is the name of the
bound vector and [vname1] is the name of the variable whose bounds are modified by the record. ??
and [value1] are used to modify the bound vectors according to the following table:

??   l_j^x        u_j^x        Made integer (added to J)
FR   -∞           +∞           No
FX   v_1          v_1          No
LO   v_1          unchanged    No
MI   -∞           unchanged    No
PL   unchanged    +∞           No
UP   unchanged    v_1          No
BV   0            1            Yes
LI   v_1          unchanged    Yes
UI   unchanged    v_1          Yes

v_1 is the value specified by [value1].
B.1.11 CSECTION (optional)
The purpose of the CSECTION is to specify the constraint

x ∈ C

in (B.1). It is assumed that C satisfies the following requirements. Let

x^t ∈ R^(n_t),  t = 1, ..., k,

be vectors comprised of parts of the decision variables x so that each decision variable is a member of
exactly one vector x^t, for example

x^1 = [x_1; x_4; x_7]   and   x^2 = [x_6; x_5; x_3; x_2].

Next define

C := { x ∈ R^n : x^t ∈ C_t, t = 1, ..., k }

where each C_t must have one of the following forms:

R set:
C_t = R^(n_t).

Quadratic cone:
C_t = { x ∈ R^(n_t) : x_1 ≥ sqrt( sum_{j=2}^{n_t} x_j^2 ) }.          (B.3)

Rotated quadratic cone:
C_t = { x ∈ R^(n_t) : 2 x_1 x_2 ≥ sum_{j=3}^{n_t} x_j^2, x_1, x_2 ≥ 0 }.   (B.4)

In general, only quadratic and rotated quadratic cones are specified in the MPS file, whereas membership
of the R set is not. If a variable is not a member of any other cone, then it is assumed to be a member
of an R cone.
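The two cone definitions (B.3) and (B.4) amount to simple membership tests. The following sketch checks them numerically (illustrative code only, not part of MOSEK):

```python
import math

def in_quad_cone(x):
    """Quadratic cone (B.3): x1 >= sqrt(x2^2 + ... + xn^2)."""
    return x[0] >= math.sqrt(sum(v * v for v in x[1:]))

def in_rquad_cone(x):
    """Rotated quadratic cone (B.4): 2*x1*x2 >= x3^2 + ... + xn^2, x1, x2 >= 0."""
    return x[0] >= 0 and x[1] >= 0 and 2 * x[0] * x[1] >= sum(v * v for v in x[2:])

print(in_quad_cone([5.0, 3.0, 4.0]))   # True: 5 >= sqrt(9 + 16)
print(in_quad_cone([1.0, 1.0, 1.0]))   # False: 1 < sqrt(2)
print(in_rquad_cone([1.0, 2.0, 2.0]))  # True: 2*1*2 >= 4
```

Note that the order of the members in a CSECTION matters: the first member (first two members for RQUAD) plays the distinguished role on the left-hand side of the inequality.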
Next, let us study an example. Assume that the quadratic cone

x_4 ≥ sqrt( x_5^2 + x_8^2 )                                           (B.5)

and the rotated quadratic cone

2 x_3 x_7 ≥ x_1^2 + x_0^2,   x_3, x_7 ≥ 0,                            (B.6)

should be specified in the MPS file. One CSECTION is required for each cone and they are specified as
follows:
should be specied in the MPS le. One CSECTION is required for each cone and they are specied as
follows:
* 1 2 3 4 5 6
*23456789012345678901234567890123456789012345678901234567890
CSECTION konea 0.0 QUAD
x4
x5
x8
CSECTION koneb 0.0 RQUAD
x7
x3
x1
x0
The first CSECTION specifies the cone (B.5), which is given the name konea. This is a quadratic cone,
specified by the keyword QUAD in the CSECTION header. The 0.0 value in the CSECTION header
is not used by the QUAD cone.
The second CSECTION specifies the rotated quadratic cone (B.6). Please note the keyword RQUAD in
the CSECTION header, which specifies that the cone is a rotated quadratic cone instead of a quadratic
cone. The 0.0 value in the CSECTION header is not used by the RQUAD cone.
In general, a CSECTION header has the format
CSECTION [kname1] [value1] [ktype]
where the requirements for each field are as follows:

Field      Starting position  Maximum width  Required  Description
[kname1]   5                  8              Yes       Name of the cone
[value1]   15                 12             No        Cone parameter
[ktype]    25                                Yes       Type of the cone
The possible cone type keys are:
Cone type key  Members  Interpretation
QUAD           ≥ 1      Quadratic cone, i.e. (B.3).
RQUAD          ≥ 2      Rotated quadratic cone, i.e. (B.4).

Please note that a quadratic cone must have at least one member whereas a rotated quadratic cone
must have at least two members. A record in the CSECTION has the format
[vname1]
where the requirements for each field are:

Field      Starting position  Maximum width  Required  Description
[vname1]   2                  8              Yes       A valid variable name
The most important restriction with respect to the CSECTION is that a variable must occur in only
one CSECTION.
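For concreteness, CSECTION text like the example above can be generated mechanically. The following Python helper is an illustrative sketch (not a MOSEK API): it emits one header line plus one record per member variable, with records starting in column 2:

```python
def csection(name, ktype, members, value=0.0):
    """Build the lines of one CSECTION: a header naming the cone and its
    type, followed by one member variable per record line."""
    lines = ['CSECTION  %-10s%-12s%s' % (name, value, ktype)]
    for v in members:
        lines.append(' ' + v)          # records start in column 2
    return '\n'.join(lines)

print(csection('konea', 'QUAD', ['x4', 'x5', 'x8']))
```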
B.1.12 ENDATA
This keyword denotes the end of the MPS file.
B.2 Integer variables
Using special bound keys in the BOUNDS section it is possible to specify that some or all of the variables
should be integer-constrained, i.e. be members of J. However, an alternative method is available.
This method is available only for backward compatibility and we recommend that it not be used.
This method requires that markers are placed in the COLUMNS section as in the example:
COLUMNS
x1 obj -10.0 c1 0.7
x1 c2 0.5 c3 1.0
x1 c4 0.1
* Start of integer-constrained variables.
MARK000 'MARKER' 'INTORG'
x2 obj -9.0 c1 1.0
x2 c2 0.8333333333 c3 0.66666667
x2 c4 0.25
x3 obj 1.0 c6 2.0
MARK001 'MARKER' 'INTEND'
* End of integer-constrained variables.
Please note that special marker lines are used to indicate the start and the end of the integer
variables. Furthermore, be aware of the following:

- IMPORTANT: All variables between the markers are assigned a default lower bound of 0 and
  a default upper bound of 1. This may not be what is intended. If it is not intended, the
  correct bounds should be defined in the BOUNDS section of the MPS formatted file.

- MOSEK ignores field 1, i.e. MARK000 and MARK001; however, other optimization systems require
  them.

- Field 2, i.e. 'MARKER', must be specified including the single quotes. This implies that no row
  can be assigned the name 'MARKER'.
- Field 3 is ignored and should be left blank.

- Field 4, i.e. 'INTORG' and 'INTEND', must be specified.
It is possible to specify several such integer marker sections within the COLUMNS section.
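A reader honoring these markers only needs a small state machine. The sketch below is illustrative (not MOSEK code); it collects the names of the variables that fall between 'INTORG' and 'INTEND' markers in a COLUMNS section:

```python
def integer_variables(columns_lines):
    """Return the set of variable names appearing between 'INTORG' and
    'INTEND' marker lines of a COLUMNS section."""
    integers, inside = set(), False
    for line in columns_lines:
        fields = line.split()
        if not fields or fields[0].startswith('*'):
            continue                          # blank line or comment
        if "'MARKER'" in fields:
            inside = "'INTORG'" in fields     # True at INTORG, False at INTEND
            continue
        if inside:
            integers.add(fields[0])           # field 1 is the variable name
    return integers
```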
B.3 General limitations
An MPS file should be an ASCII file.
B.4 Interpretation of the MPS format
Several issues related to the MPS format are not well-defined by the industry standard. However,
MOSEK uses the following interpretation:

- If a matrix element in the COLUMNS section is specified multiple times, then the multiple entries
  are added together.

- If a matrix element in a QSECTION section is specified multiple times, then the multiple entries
  are added together.
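This additive interpretation is easy to state as code. A minimal sketch (not the MOSEK reader itself) that accumulates duplicate coefficient records could look like this:

```python
from collections import defaultdict

def read_coefficients(records):
    """Accumulate (variable, row, value) records; duplicate
    (variable, row) pairs are added together."""
    a = defaultdict(float)
    for var, row, value in records:
        a[var, row] += value
    return dict(a)
```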
B.5 The free MPS format
MOSEK supports a free-format variation of the MPS format. The free format is similar to the MPS file
format but less restrictive, e.g. it allows longer names. However, it imposes two main limitations:

- By default a line in the MPS file must not contain more than 1024 characters. However, by
  modifying the parameter MSK_IPAR_READ_MPS_WIDTH an arbitrarily large line width will be accepted.

- A name must not contain any blanks.

To use the free MPS format instead of the default MPS format the MOSEK parameter MSK_IPAR_READ_MPS_FORMAT
should be changed.
Appendix C
The LP file format
MOSEK supports the LP file format with some extensions, i.e. MOSEK can read and write LP
formatted files.
C.1 A warning
The LP format is not a well-defined standard and hence different optimization packages may interpret
a specific LP formatted file differently.
C.2 The LP le format
The LP file format can specify problems of the form

    minimize/maximize   c^T x + (1/2) q^o(x)
    subject to          l^c ≤ Ax + (1/2) q(x) ≤ u^c,
                        l^x ≤ x ≤ u^x,
                        x_J integer,

where

- x ∈ R^n is the vector of decision variables.

- c ∈ R^n is the linear term in the objective.

- q^o : R^n → R is the quadratic term in the objective, where q^o(x) = x^T Q^o x
  and it is assumed that

      Q^o = (Q^o)^T.   (C.1)

- A ∈ R^(m×n) is the constraint matrix.

- l^c ∈ R^m is the lower limit on the activity for the constraints.

- u^c ∈ R^m is the upper limit on the activity for the constraints.

- l^x ∈ R^n is the lower limit on the activity for the variables.

- u^x ∈ R^n is the upper limit on the activity for the variables.

- q : R^n → R^m is a vector of quadratic functions, i.e. q_i(x) = x^T Q^i x,
  where it is assumed that

      Q^i = (Q^i)^T.   (C.2)

- J ⊆ {1, 2, ..., n} is an index set of the integer-constrained variables.
C.2.1 The sections
An LP formatted file contains a number of sections specifying the objective, constraints, variable
bounds, and variable types. The section keywords may be any mix of upper and lower case letters.
C.2.1.1 The objective
The first section beginning with one of the keywords
max
maximum
maximize
min
minimum
minimize
defines the objective sense and the objective function, i.e.

    c^T x + (1/2) x^T Q^o x.
The objective may be given a name by writing
myname:
before the expressions. If no name is given, then the objective is named obj.
The objective function contains linear and quadratic terms. The linear terms are written as in the
example
4 x1 + x2 - 0.1 x3
and so forth. The quadratic terms are written in square brackets ([ ]) and are either squared or
multiplied as in the examples
x1 ^ 2
and
x1 * x2
There may be zero or more pairs of brackets containing quadratic expressions.
An example of an objective section is:
minimize
myobj: 4 x1 + x2 - 0.1 x3 + [ x1 ^ 2 + 2.1 x1 * x2 ]/2
Please note that the quadratic expressions are multiplied by 1/2, so that the above expression means

    minimize 4 x_1 + x_2 - 0.1 x_3 + (1/2)(x_1^2 + 2.1 x_1 x_2)
If the same variable occurs more than once in the linear part, the coefficients are added, so that 4 x1
+ 2 x1 is equivalent to 6 x1. In the quadratic expressions x1 * x2 is equivalent to x2 * x1 and, as
in the linear part, if the same variables multiplied or squared occur several times their coefficients are
added.
C.2.1.2 The constraints
The second section beginning with one of the keywords
subj to
subject to
s.t.
st
defines the linear constraint matrix A and the quadratic matrices Q^i.
A constraint contains a name (optional), expressions adhering to the same rules as in the objective
and a bound:
subject to
con1: x1 + x2 + [ x3 ^ 2 ]/2 <= 5.1
The bound type (here <=) may be any of <, <=, =, >, >= (< and <= mean the same), and the bound
may be any number.
In the standard LP format it is not possible to define more than one bound per constraint, but MOSEK
supports defining ranged constraints by using a double-colon (::) instead of a single colon (:) after the
constraint name, i.e.

    -5 ≤ x_1 + x_2 ≤ 5   (C.3)
may be written as
con:: -5 < x_1 + x_2 < 5
By default MOSEK writes ranged constraints this way.
If the files must adhere to the LP standard, ranged constraints must either be split into an upper
bounded and a lower bounded constraint or be written as an equality with a slack variable. For example
the expression (C.3) may be written as

    x_1 + x_2 - sl_1 = 0,  -5 ≤ sl_1 ≤ 5.
C.2.1.3 Bounds
Bounds on the variables can be specied in the bound section beginning with one of the keywords
bound
bounds
The bounds section is optional but should, if present, follow the subject to section. All variables
listed in the bounds section must occur in either the objective or a constraint.
The default lower and upper bounds are 0 and +∞. A variable may be declared free with the keyword
free, which means that the lower bound is -∞ and the upper bound is +∞. Furthermore it may
be assigned a finite lower and upper bound. The bound definitions for a given variable may be written in
one or two lines, and bounds can be any number or ±∞ (written as +inf/-inf/+infinity/-infinity)
as in the example
bounds
x1 free
x2 <= 5
0.1 <= x2
x3 = 42
2 <= x4 < +inf
C.2.1.4 Variable types
The final two sections are optional and must begin with one of the keywords
bin
binaries
binary
and
gen
general
Under general all integer variables are listed, and under binary all binary variables (integer variables
with bounds 0 and 1) are listed:
general
x1 x2
binary
x3 x4
Again, all variables listed in the binary or general sections must occur in either the objective or a
constraint.
C.2.1.5 Terminating section
Finally, an LP formatted file must be terminated with the keyword
end
C.2.1.6 An example
A simple example of an LP file with two variables, four constraints and one integer variable is:
minimize
-10 x1 -9 x2
subject to
0.7 x1 + x2 <= 630
0.5 x1 + 0.833 x2 <= 600
x1 + 0.667 x2 <= 708
0.1 x1 + 0.025 x2 <= 135
bounds
10 <= x1
x1 <= +inf
20 <= x2 <= 500
general
x1
end
C.2.2 LP format peculiarities
C.2.2.1 Comments
Anything on a line after a backslash (\) is ignored and treated as a comment.
C.2.2.2 Names
A name for an objective, a constraint or a variable may contain the letters a-z, A-Z, the digits 0-9 and
the characters
!"#$%&()/,.;?@_{}|~
The first character in a name must not be a number, a period or the letter e or E. Keywords must
not be used as names.
It is strongly recommended not to use double quotes (") in names.
C.2.2.3 Variable bounds
Specifying several upper or lower bounds on one variable is possible but MOSEK uses only the tightest
bounds. If a variable is fixed (with =), then it is considered the tightest bound.
C.2.2.4 MOSEK specific extensions to the LP format
Some optimization software packages employ a more strict definition of the LP format than the one
used by MOSEK. The limitations imposed by the strict LP format are the following:

- Quadratic terms in the constraints are not allowed.

- Names can be only 16 characters long.

- Lines must not exceed 255 characters in length.
If an LP formatted file created by MOSEK should satisfy the strict definition, then the parameter

MSK_IPAR_WRITE_LP_STRICT_FORMAT

should be set; note, however, that some problems cannot be written correctly as a strict LP formatted
file. For instance, all names are truncated to 16 characters and hence they may lose their uniqueness
and change the problem.
To get around some of the inconveniences of converting from other problem formats, MOSEK allows
lines to contain up to 1024 characters and names of any length (shorter than the 1024-character line).
Internally MOSEK names may contain any (printable) character, many of which cannot be used
in LP names. Setting the parameters
MSK_IPAR_READ_LP_QUOTED_NAMES
and
MSK_IPAR_WRITE_LP_QUOTED_NAMES
allows MOSEK to use quoted names. The first parameter tells MOSEK to remove quotes from quoted
names, e.g. "x1", when reading LP formatted files. The second parameter tells MOSEK to put quotes
around any semi-illegal name (a name beginning with a number or a period) and fully illegal name
(a name containing illegal characters). As the double quote is a legal character in the LP format, quoting
semi-illegal names makes them legal in the pure LP format as long as they are still shorter than 16
characters. Fully illegal names are still illegal in a pure LP file.
C.2.3 The strict LP format
The LP format is not a formal standard and different vendors have slightly different interpretations of
the LP format. To make MOSEK's definition of the LP format more compatible with the definitions
of other vendors, use the parameter setting
MSK_IPAR_WRITE_LP_STRICT_FORMAT MSK_ON
This setting may lead to truncation of some names and hence to an invalid LP file. The simple solution
to this problem is to use the parameter setting

MSK_IPAR_WRITE_GENERIC_NAMES MSK_ON

which will cause all names to be renamed systematically in the output file.
C.2.4 Formatting of an LP file
A few parameters control the visual formatting of LP files written by MOSEK in order to make the
files easier to read. These parameters are
MSK_IPAR_WRITE_LP_LINE_WIDTH
MSK_IPAR_WRITE_LP_TERMS_PER_LINE
The first parameter sets the maximum number of characters on a single line. The default value is 80,
corresponding roughly to the width of a standard text document.
The second parameter sets the maximum number of terms per line; a term means a sign, a coefficient,
and a name (for example + 42 elephants). The default value is 0, meaning that there is no
maximum.
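The interaction of the two parameters can be mimicked with a small formatting routine. The following Python sketch is illustrative only (it is not MOSEK's writer); it wraps a list of already-formatted terms under both limits:

```python
def format_terms(terms, line_width=80, terms_per_line=0):
    """Wrap signed terms, e.g. '+ 42 elephants', onto lines holding at
    most `terms_per_line` terms (0 = unlimited) and at most
    `line_width` characters each."""
    lines, current, count = [], '', 0
    for term in terms:
        candidate = (current + ' ' + term).strip()
        over_terms = terms_per_line and count >= terms_per_line
        if current and (over_terms or len(candidate) > line_width):
            lines.append(current)           # start a fresh line
            current, count = term, 1
        else:
            current, count = candidate, count + 1
    if current:
        lines.append(current)
    return lines
```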
C.2.4.1 Speeding up file reading
If the input file should be read as fast as possible using the least amount of memory, then it is important
to tell MOSEK how many non-zeros, variables and constraints the problem contains. These values can
be set using the parameters
MSK_IPAR_READ_CON
MSK_IPAR_READ_VAR
MSK_IPAR_READ_ANZ
MSK_IPAR_READ_QNZ
C.2.4.2 Unnamed constraints
Reading and writing an LP file with MOSEK may change it superficially. If an LP file contains
an unnamed constraint or objective, these are given generic names when the file is read (however,
unnamed constraints in MOSEK are written without names).
Appendix D
The OPF format
The Optimization Problem Format (OPF) is an alternative to LP and MPS files for specifying opti-
mization problems. It is row-oriented, inspired by the CPLEX LP format.
Apart from containing objective, constraints, bounds etc. it may contain complete or partial
solutions, comments and extra information relevant for solving the problem. It is designed to be easily
read and modified by hand and to be forward compatible with possible future extensions.
D.1 Intended use
The OPF file format is meant to replace several other files:

- The LP file format. Any problem that can be written as an LP file can be written as an OPF file
  too; furthermore OPF naturally accommodates ranged constraints and variables as well as arbitrary
  characters in names, fixed expressions in the objective, empty constraints, and conic constraints.

- Parameter files. It is possible to specify integer, double and string parameters along with the
  problem (or in a separate OPF file).

- Solution files. It is possible to store a full or a partial solution in an OPF file and later reload it.
D.2 The file format
The format uses tags to structure data. A simple example with the basic sections may look like this:
[comment]
This is a comment. You may write almost anything here...
[/comment]
# This is a single-line comment.
[objective min myobj]
x + 3 y + x^2 + 3 y^2 + z + 1
[/objective]
[constraints]
[con con01] 4 <= x + y [/con]
[/constraints]
[bounds]
[b] -10 <= x,y <= 10 [/b]
[cone quad] x,y,z [/cone]
[/bounds]
A scope is opened by a tag of the form [tag] and closed by a tag of the form [/tag]. An opening tag
may accept a list of unnamed and named arguments, for example

[tag value] tag with one unnamed argument [/tag]
[tag arg=value] tag with one named argument [/tag]

Unnamed arguments are identified by their order, while named arguments may appear in any order,
but never before an unnamed argument. The value can be an unquoted, single-quoted or double-quoted
text string, i.e.

[tag 'value'] single-quoted value [/tag]
[tag arg='value'] single-quoted value [/tag]
[tag "value"] double-quoted value [/tag]
[tag arg="value"] double-quoted value [/tag]
D.2.1 Sections
The recognized tags are:

- [comment] A comment section. This can contain almost any text: between single quotes (') or
  double quotes (") any text may appear. Outside quotes the markup characters ([ and ]) must
  be prefixed by backslashes. Both single and double quotes may appear alone or inside a pair of
  quotes if prefixed by a backslash.

- [objective] The objective function: This accepts one or two parameters, where the first one
  (in the above example min) is either min or max (regardless of case) and defines the objective
  sense, and the second one (above myobj), if present, is the objective name. The section may
  contain linear and quadratic expressions.
  If several objectives are specified, all but the last are ignored.
- [constraints] This does not directly contain any data, but may contain the subsection con
  defining a linear constraint.

- [con] defines a single constraint; if an argument is present ([con NAME]) this is used as the name
  of the constraint, otherwise it is given a null-name. The section contains a constraint definition
  written as linear and quadratic expressions with a lower bound, an upper bound, with both or
  with an equality. Examples:
[constraints]
[con con1] 0 <= x + y [/con]
[con con2] 0 >= x + y [/con]
[con con3] 0 <= x + y <= 10 [/con]
[con con4] x + y = 10 [/con]
[/constraints]
Constraint names are unique. If a constraint is specified which has the same name as a previously
defined constraint, the new constraint replaces the existing one.

- [bounds] This does not directly contain any data, but may contain the subsections b (linear
  bounds on variables) and cone (quadratic cone).

  - [b]. Bound definition on one or several variables separated by commas (,). An upper or
    lower bound on a variable replaces any earlier defined bound on that variable. If only one
    bound (upper or lower) is given, only that bound is replaced. This means that upper and
    lower bounds can be specified separately. So the OPF bound definition:
[b] x,y >= -10 [/b]
[b] x,y <= 10 [/b]
results in the bound

    -10 ≤ x, y ≤ 10.   (D.1)
  - [cone]. Currently, the supported cones are the quadratic cone and the rotated quadratic
    cone. A conic constraint is defined as a set of variables which belongs to a single unique
    cone.

    A quadratic cone of n variables x_1, ..., x_n defines a constraint of the form

        x_1^2 ≥ Σ_{i=2}^n x_i^2.

    A rotated quadratic cone of n variables x_1, ..., x_n defines a constraint of the form

        x_1 x_2 ≥ Σ_{i=3}^n x_i^2.
A [bounds]-section example:
[bounds]
[b] 0 <= x,y <= 10 [/b] # ranged bound
[b] 10 >= x,y >= 0 [/b] # ranged bound
[b] 0 <= x,y <= inf [/b] # using inf
[b] x,y free [/b] # free variables
# Let (x,y,z,w) belong to the cone K
[cone quad] x,y,z,w [/cone] # quadratic cone
[cone rquad] x,y,z,w [/cone] # rotated quadratic cone
[/bounds]
By default all variables are free.

- [variables] This defines an ordering of variables as they should appear in the problem. This
  is simply a space-separated list of variable names.

- [integer] This contains a space-separated list of variables and defines the constraint that the
  listed variables must take integer values.

- [hints] This may contain only non-essential data; for example estimates of the number of
  variables, constraints and non-zeros. Placed before all other sections containing data, this may
  reduce the time spent reading the file.
  In the hints section, any subsection which is not recognized by MOSEK is simply ignored. In
  this section a hint is defined as follows:

  [hint ITEM] value [/hint]

  where ITEM may be replaced by numvar (number of variables), numcon (number of linear/quadratic
  constraints), numanz (number of linear non-zeros in constraints) and numqnz (number of quadratic
  non-zeros in constraints).
[solutions] This section can contain a number of full or partial solutions to a problem, each
inside a [solution]-section. The syntax is
[solution SOLTYPE status=STATUS]...[/solution]
where SOLTYPE is one of the strings
interior, a non-basic solution,
basic, a basic solution,
integer, an integer solution,
and STATUS is one of the strings
UNKNOWN,
OPTIMAL,
INTEGER_OPTIMAL,
PRIM_FEAS,
DUAL_FEAS,
PRIM_AND_DUAL_FEAS,
NEAR_OPTIMAL,
NEAR_PRIM_FEAS,
NEAR_DUAL_FEAS,
NEAR_PRIM_AND_DUAL_FEAS,
PRIM_INFEAS_CER,
DUAL_INFEAS_CER,
NEAR_PRIM_INFEAS_CER,
NEAR_DUAL_INFEAS_CER,
NEAR_INTEGER_OPTIMAL.
Most of these values are irrelevant for input solutions; when constructing a solution for simplex
hot-start or an initial solution for a mixed integer problem the safe setting is UNKNOWN.
A [solution]-section contains [con] and [var] sections. Each [con] and [var] section defines
solution values for a single variable or constraint, each value written as
KEYWORD=value
where KEYWORD denes a solution item and value denes its value. Allowed keywords are as
follows:
- sk. The status of the item, where the value is one of the following strings:
  - LOW, the item is on its lower bound.
  - UPR, the item is on its upper bound.
  - FIX, it is a fixed item.
  - BAS, the item is in the basis.
  - SUPBAS, the item is super basic.
  - UNK, the status is unknown.
  - INF, the item is outside its bounds (infeasible).

- lvl. Defines the level of the item.

- sl. Defines the level of the dual variable associated with its lower bound.

- su. Defines the level of the dual variable associated with its upper bound.

- sn. Defines the level of the dual variable associated with its cone.

- y. Defines the level of the corresponding dual variable (for constraints only).

A [var] section should always contain the items sk and lvl, and optionally sl, su and sn.
A [con] section should always contain sk and lvl, and optionally sl, su and y.
An example of a solution section
[solution basic status=UNKNOWN]
[var x0] sk=LOW lvl=5.0 [/var]
[var x1] sk=UPR lvl=10.0 [/var]
[var x2] sk=SUPBAS lvl=2.0 sl=1.5 su=0.0 [/var]
[con c0] sk=LOW lvl=3.0 y=0.0 [/con]
[con c0] sk=UPR lvl=0.0 y=5.0 [/con]
[/solution]
- [vendor] This contains solver/vendor specific data. It accepts one argument, which is a vendor
  ID; for MOSEK the ID is simply mosek. The section contains the subsection parameters
  defining solver parameters. When reading a vendor section, any unknown vendor can be safely
  ignored. This is described later.
Comments using the # may appear anywhere in the file. Between the # and the following line-break
any text may be written, including markup characters.
D.2.2 Numbers
Numbers, when used for parameter values or coefficients, are written in the way usual for the printf
function. That is, they may be prefixed by a sign (+ or -) and may contain an integer part, decimal
part and an exponent. The decimal point is always . (a dot). Some examples are
1
1.0
.0
1.
1e10
1e+10
1e-10
Some invalid examples are
e10 # invalid, must contain either integer or decimal part
. # invalid
.e10 # invalid
More formally, the following regular expression describes numbers as used (the decimal point is
optional so that the integer examples above are accepted):

[+-]?([0-9]+[.]?[0-9]*|[.][0-9]+)([eE][+-]?[0-9]+)?
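The regular expression can be checked directly against the examples above. A brief Python sketch:

```python
import re

# Number syntax as described above: optional sign, integer and/or
# decimal part, optional exponent.
NUMBER = re.compile(r'^[+-]?([0-9]+[.]?[0-9]*|[.][0-9]+)([eE][+-]?[0-9]+)?$')

def is_valid_number(token):
    """True when `token` is a number in the sense of this section."""
    return NUMBER.match(token) is not None
```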
D.2.3 Names
Variable names, constraint names and the objective name may contain arbitrary characters, which in
some cases must be enclosed by quotes (single or double); quotes inside such a name must in turn be
preceded by a backslash. Unquoted names must begin with a letter (a-z or A-Z) and contain only the
following characters: the letters a-z and A-Z, the digits 0-9, braces ({ and }) and underscore (_).
Some examples of legal names:

an_unquoted_name
another_name{123}
'single quoted name'
"double quoted name"
"name with \"quote\" in it"
"name with []s in it"
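These rules translate into a short validator. The sketch below is illustrative (not part of MOSEK); it leaves legal unquoted names alone and double-quotes everything else, escaping embedded double quotes:

```python
import re

# Unquoted OPF names: a letter followed by letters, digits, braces or underscore.
UNQUOTED = re.compile(r'^[A-Za-z][A-Za-z0-9{}_]*$')

def quote_opf_name(name):
    """Return `name` unchanged when it is a legal unquoted OPF name,
    otherwise a double-quoted version with embedded quotes escaped."""
    if UNQUOTED.match(name):
        return name
    return '"' + name.replace('"', '\\"') + '"'
```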
D.3 Parameters section
In the vendor section solver parameters are defined inside the parameters subsection. Each parameter
is written as
[p PARAMETER_NAME] value [/p]
where PARAMETER_NAME is replaced by a MOSEK parameter name, usually of the form MSK_IPAR_...,
MSK_DPAR_... or MSK_SPAR_..., and the value is replaced by the value of that parameter; both integer
values and named values may be used. Some simple examples are:
[vendor mosek]
[parameters]
[p MSK_IPAR_OPF_MAX_TERMS_PER_LINE] 10 [/p]
[p MSK_IPAR_OPF_WRITE_PARAMETERS] MSK_ON [/p]
[p MSK_DPAR_DATA_TOL_BOUND_INF] 1.0e18 [/p]
[/parameters]
[/vendor]
D.4 Writing OPF files from MOSEK
To write an OPF file set the parameter MSK_IPAR_WRITE_DATA_FORMAT to MSK_DATA_FORMAT_OP as this
ensures that the OPF format is used. Then modify the following parameters to define what the file should
contain:

- MSK_IPAR_OPF_WRITE_HEADER, include a small header with comments.

- MSK_IPAR_OPF_WRITE_HINTS, include hints about the size of the problem.

- MSK_IPAR_OPF_WRITE_PROBLEM, include the problem itself: objective, constraints and bounds.

- MSK_IPAR_OPF_WRITE_SOLUTIONS, include solutions if they are defined. If this is off, no solutions
  are included.

- MSK_IPAR_OPF_WRITE_SOL_BAS, include the basic solution, if defined.

- MSK_IPAR_OPF_WRITE_SOL_ITG, include the integer solution, if defined.

- MSK_IPAR_OPF_WRITE_SOL_ITR, include the interior solution, if defined.

- MSK_IPAR_OPF_WRITE_PARAMETERS, include all parameter settings.
D.5 Examples
This section contains a set of small examples written in OPF and describing how to formulate linear,
quadratic and conic problems.
D.5.1 Linear example lo1.opf
Consider the example:

    minimize     -10 x_1 - 9 x_2,
    subject to   7/10 x_1 +   1 x_2 ≤ 630,
                  1/2 x_1 + 5/6 x_2 ≤ 600,
                    1 x_1 + 2/3 x_2 ≤ 708,
                 1/10 x_1 + 1/4 x_2 ≤ 135,
                 x_1, x_2 ≥ 0.                 (D.2)
In the OPF format the example is displayed as shown below:
[comment]
Example lo1.mps converted to OPF.
[/comment]

[hints]
# Give a hint about the size of the different elements in the problem.
# These need only be estimates, but in this case they are exact.
[hint NUMVAR] 2 [/hint]
[hint NUMCON] 4 [/hint]
[hint NUMANZ] 8 [/hint]
[/hints]

[variables]
# All variables that will appear in the problem
x1 x2
[/variables]

[objective minimize 'obj']
- 10 x1 - 9 x2
[/objective]

[constraints]
[con 'c1'] 0.7 x1 + x2 <= 630 [/con]
[con 'c2'] 0.5 x1 + 0.8333333333 x2 <= 600 [/con]
[con 'c3'] x1 + 0.66666667 x2 <= 708 [/con]
[con 'c4'] 0.1 x1 + 0.25 x2 <= 135 [/con]
[/constraints]

[bounds]
# By default all variables are free. The following line will
# change this to all variables being nonnegative.
[b] 0 <= * [/b]
[/bounds]
D.5.2 Quadratic example qo1.opf
An example of a quadratic optimization problem is

    minimize     x_1^2 + 0.1 x_2^2 + x_3^2 - x_1 x_3 - x_2,
    subject to   1 ≤ x_1 + x_2 + x_3,
                 x ≥ 0.                        (D.3)
This can be formulated in OPF as shown below.

[comment]
Example qo1.mps converted to OPF.
[/comment]

[hints]
[hint NUMVAR] 3 [/hint]
[hint NUMCON] 1 [/hint]
[hint NUMANZ] 3 [/hint]
[/hints]

[variables]
x1 x2 x3
[/variables]

[objective minimize 'obj']
# The quadratic terms are often multiplied by 1/2,
# but this is not required.
- x2 + 0.5 ( 2 x1 ^ 2 - 2 x3 * x1 + 0.2 x2 ^ 2 + 2 x3 ^ 2 )
[/objective]

[constraints]
[con 'c1'] 1 <= x1 + x2 + x3 [/con]
[/constraints]

[bounds]
[b] 0 <= * [/b]
[/bounds]
D.5.3 Conic quadratic example cqo1.opf
Consider the example:

    minimize     x_1 + 2 x_2,
    subject to   2 x_3 + 4 x_4 = 5,
                 x_5^2 ≤ 2 x_1 x_3,
                 x_6^2 ≤ 2 x_2 x_4,
                 x_5 = 1,
                 x_6 = 1,
                 x ≥ 0.                        (D.4)
Please note that the type of the cones is defined by the parameter to [cone ...]; the content of
the cone-section is the names of the variables that belong to the cone.
[comment]
Example cqo1.mps converted to OPF.
[/comment]

[hints]
[hint NUMVAR] 6 [/hint]
[hint NUMCON] 1 [/hint]
[hint NUMANZ] 2 [/hint]
[/hints]

[variables]
x1 x2 x3 x4 x5 x6
[/variables]

[objective minimize 'obj']
x1 + 2 x2
[/objective]

[constraints]
[con 'c1'] 2 x3 + 4 x4 = 5 [/con]
[/constraints]

[bounds]
# We let all variables default to the positive orthant
[b] 0 <= * [/b]
# ... and change those that differ from the default.
[b] x5,x6 = 1 [/b]

# We define two rotated quadratic cones
# k1: 2 x1 * x3 >= x5^2
[cone rquad 'k1'] x1, x3, x5 [/cone]
# k2: 2 x2 * x4 >= x6^2
[cone rquad 'k2'] x2, x4, x6 [/cone]
[/bounds]
D.5.4 Mixed integer example milo1.opf
Consider the mixed-integer problem:

    maximize     x_0 + 0.64 x_1,
    subject to   50 x_0 + 31 x_1 ≤ 250,
                  3 x_0 -  2 x_1 ≥ -4,
                 x_0, x_1 ≥ 0 and integer.     (D.5)
This can be implemented in OPF with:
[comment]
Written by MOSEK version 5.0.0.7
Date 20-11-06
Time 14:42:24
[/comment]

[hints]
[hint NUMVAR] 2 [/hint]
[hint NUMCON] 2 [/hint]
[hint NUMANZ] 4 [/hint]
[/hints]

[variables disallow_new_variables]
x1 x2
[/variables]

[objective maximize 'obj']
x1 + 6.4e-1 x2
[/objective]

[constraints]
[con 'c1'] 5e+1 x1 + 3.1e+1 x2 <= 2.5e+2 [/con]
[con 'c2'] -4 <= 3 x1 - 2 x2 [/con]
[/constraints]

[bounds]
[b] 0 <= * [/b]
[/bounds]

[integer]
x1 x2
[/integer]
Appendix E
The XML (OSiL) format
MOSEK can write data in the standard OSiL XML format. For a definition of the OSiL format please see
http://www.optimizationservices.org/. Only linear constraints (possibly with integer variables)
are supported. By default output files with the extension .xml are written in the OSiL format.
The parameter MSK_IPAR_WRITE_XML_MODE controls whether the linear coefficients in the A matrix are
written in row or column order.
Appendix F
The solution file format
MOSEK provides one or two solution files depending on the problem type and the optimizer used.
If a problem is optimized using the interior-point optimizer and no basis identification is required,
then a file named probname.sol is provided, where probname is the name of the problem and .sol is
the file extension. If the problem is optimized using the simplex optimizer or basis identification is
performed, then a file named probname.bas is created presenting the optimal basis solution. Finally,
if the problem contains integer constrained variables, then a file named probname.int is created. It
contains the integer solution.
F.1 The basic and interior solution files
In general both the interior-point and the basis solution files have the format:
NAME : <problem name>
PROBLEM STATUS : <status of the problem>
SOLUTION STATUS : <status of the solution>
OBJECTIVE NAME : <name of the objective function>
PRIMAL OBJECTIVE : <primal objective value corresponding to the solution>
DUAL OBJECTIVE : <dual objective value corresponding to the solution>
CONSTRAINTS
INDEX NAME AT ACTIVITY LOWER LIMIT UPPER LIMIT DUAL LOWER DUAL UPPER
? <name> ?? <a value> <a value> <a value> <a value> <a value>
VARIABLES
INDEX NAME AT ACTIVITY LOWER LIMIT UPPER LIMIT DUAL LOWER DUAL UPPER CONIC DUAL
? <name> ?? <a value> <a value> <a value> <a value> <a value> <a value>
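The header portion of this layout is simple key/value text. A small Python sketch (illustrative, not part of MOSEK) that reads it into a dictionary:

```python
def parse_solution_header(lines):
    """Collect the 'KEY : value' lines that precede the CONSTRAINTS
    (or VARIABLES) section of a solution file."""
    header = {}
    for line in lines:
        if line.strip() in ('CONSTRAINTS', 'VARIABLES'):
            break                       # header ends here
        key, sep, value = line.partition(':')
        if sep:
            header[key.strip()] = value.strip()
    return header
```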
In the example the fields ? and <> will be filled with problem and solution specific information. As
can be observed, a solution report consists of three sections:

HEADER In this section, first the name of the problem is listed and afterwards the problem and solution
statuses are shown, e.g. that the problem is primal and dual feasible and the solution is optimal.
Next the primal and dual objective values are displayed.
CONSTRAINTS Subsequently, in the constraint section the following information is listed for each con-
straint:

- INDEX. A sequential index assigned to the constraint by MOSEK.

- NAME. The name of the constraint assigned by the user.

- AT. The status of the constraint. In Table F.1 the possible values of the status keys and their
  interpretation are shown.
Status key  Interpretation
UN          Unknown status
BS          Is basic
SB          Is superbasic
LL          Is at the lower limit (bound)
UL          Is at the upper limit (bound)
EQ          Lower limit is identical to upper limit
**          Is infeasible, i.e. the lower limit is greater than the upper limit

Table F.1: Status keys.
- ACTIVITY. Given the i-th constraint of the form

      l_i^c ≤ Σ_{j=1}^n a_ij x_j ≤ u_i^c,   (F.1)

  then activity denotes the quantity Σ_{j=1}^n a_ij x_j, where x