14. Sensitivity analysis


14.1. Introduction

Given an optimization problem it is often useful to obtain information about how the optimal objective value changes when the problem parameters are perturbed. For instance, assume that a bound represents the capacity of a machine. It may be possible to expand the capacity for a certain cost, and hence it is worthwhile to know the value of additional capacity. This is precisely the type of question sensitivity analysis deals with.

Analyzing how the optimal objective value changes when the problem data is changed is called sensitivity analysis.

14.2. Restrictions

Currently, sensitivity analysis is only available for continuous linear optimization problems. Moreover, MOSEK can only deal with perturbations in bounds or objective coefficients.

14.3. References

The book [17] discusses the classical sensitivity analysis in Chapter 10 whereas the book [12, Chapter 19] presents a modern introduction to sensitivity analysis. Finally, it is recommended to read the short paper [2] to avoid some of the pitfalls associated with sensitivity analysis.

14.4. Sensitivity analysis for linear problems

14.4.1. The optimal objective value function

Assume that we are given the problem

\begin{math}\nonumber{}\begin{array}{rclccccl}\nonumber{}z(l^{c},u^{c},l^{x},u^{x},c) & = & \mbox{minimize} &  &  & c^{T}x &  & \\\nonumber{} &  & \mbox{subject to} & l^{c} & \leq{} & Ax & \leq{} & u^{c},\\\nonumber{} &  &  &  &  & l^{x}\leq{}x\leq{}u^{x}, &  &\end{array}\end{math} (14.4.1)

and we want to know how the optimal objective value changes as $l^{c}_{i}$ is perturbed. To answer this question, define the perturbed problem for $l^{c}_{i}$ as follows:

\begin{math}\nonumber{}\begin{array}{rclcccl}\nonumber{}f_{{l^{c}_{i}}}(\beta ) & = & \mbox{minimize} &  &  & c^{T}x & \\\nonumber{} &  & \mbox{subject to} & l^{c}+\beta e_{i} & \leq{} & Ax & \leq{}u^{c},\\\nonumber{} &  &  &  &  & l^{x}\leq{}x\leq{}u^{x}, &\end{array}\end{math} (14.4.2)

where $e_{i}$ is the $i$th column of the identity matrix. The function

\begin{math}\nonumber{}f_{{l^{c}_{i}}}(\beta )\end{math} (14.4.3)

shows the optimal objective value as a function of $\beta$. Note that a change in $\beta$ corresponds to a perturbation in $l^{c}_{i}$, and hence (14.4.3) shows the optimal objective value as a function of $l^{c}_{i}$.

It is possible to prove that the function (14.4.3) is piecewise linear and convex, i.e. it may look like the illustration in Figure 14.1.

Figure 14.1: The optimal value function $f_{l^{c}_{i}}(\beta)$. Left: $\beta = 0$ is in the interior of a linearity interval. Right: $\beta = 0$ is a breakpoint.

Clearly, if the function $f_{l^{c}_{i}}(\beta)$ does not change much when $\beta$ is changed, then we can conclude that the optimal objective value is insensitive to changes in $l^{c}_{i}$. Therefore, we are interested in how $f_{l^{c}_{i}}(\beta)$ changes for small changes in $\beta$. Now define

\begin{math}\nonumber{}f'_{{l^{c}_{i}}}(0)\end{math} (14.4.4)

to be the so-called shadow price related to $l^{c}_{i}$. The shadow price specifies how the objective value changes for small changes in $\beta$ around zero. Moreover, we are interested in the so-called linearity interval

\begin{math}\nonumber{}\beta \in{}[\beta _{1},\beta _{2}]\end{math} (14.4.5)

for which

\begin{math}\nonumber{}f'_{{l^{c}_{i}}}(\beta )=f'_{{l^{c}_{i}}}(0).\end{math} (14.4.6)

To summarize, the sensitivity analysis provides a shadow price and the linearity interval in which the shadow price is constant.
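To make the concept concrete, a shadow price can be approximated numerically by re-solving the problem with a perturbed bound and differencing the optimal values. The sketch below is an illustration only: it uses scipy.optimize.linprog as a stand-in solver and a made-up two-variable problem, not MOSEK or any problem from this chapter.

```python
# Minimal sketch: estimate the shadow price f'(0) of a constraint bound
# by central finite differences. scipy.optimize.linprog is a stand-in
# solver and the toy problem is hypothetical.
from scipy.optimize import linprog

def opt_value(beta):
    # minimize x1 + 2*x2  subject to  x1 + x2 >= 10 + beta,
    # 0 <= x1 <= 6, x2 >= 0.  linprog expects <=, hence the sign flip.
    res = linprog([1, 2], A_ub=[[-1, -1]], b_ub=[-(10 + beta)],
                  bounds=[(0, 6), (0, None)])
    return res.fun

eps = 1e-6
shadow = (opt_value(eps) - opt_value(-eps)) / (2 * eps)
# At the optimum x1 = 6 and x2 = 4 + beta, so f(beta) = 14 + 2*beta
# and the shadow price is 2.
print(shadow)
```

Inside a linearity interval the difference quotient is exact, since the optimal value function is linear there.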

The reader may have noticed that we are sloppy in the definition of the shadow price. The reason is that the shadow price is not defined in the right example in Figure 14.1, because the function $f_{l^{c}_{i}}(\beta)$ is not differentiable at $\beta = 0$. However, in that case we can define a left and a right shadow price and a left and a right linearity interval.

The discussion above covered only changes in $l^{c}_{i}$. We define the other optimal objective value functions as follows:

\begin{math}\nonumber{}\begin{array}{rcll}\nonumber{}f_{{u^{c}_{i}}}(\beta ) & = & z(l^{c},u^{c}+\beta e_{i},l^{x},u^{x},c), & i=1,\ldots ,m,\\\nonumber{}f_{{l^{x}_{j}}}(\beta ) & = & z(l^{c},u^{c},l^{x}+\beta e_{j},u^{x},c), & j=1,\ldots ,n,\\\nonumber{}f_{{u^{x}_{j}}}(\beta ) & = & z(l^{c},u^{c},l^{x},u^{x}+\beta e_{j},c), & j=1,\ldots ,n,\\\nonumber{}f_{{c_{j}}}(\beta ) & = & z(l^{c},u^{c},l^{x},u^{x},c+\beta e_{j}), & j=1,\ldots ,n.\end{array}\end{math} (14.4.7)

Given these definitions it should be clear how linearity intervals and shadow prices are defined for the parameters $u^{c}_{i}$, $l^{x}_{j}$, $u^{x}_{j}$, and $c_{j}$.

14.4.1.1. Equality constraints

In MOSEK a constraint can be specified as either an equality constraint or a ranged constraint. Suppose constraint i is an equality constraint. We then define the optimal value function for constraint i by

\begin{math}\nonumber{}f_{{e^{c}_{i}}}(\beta )=z(l^{c}+\beta e_{i},u^{c}+\beta e_{i},l^{x},u^{x},c)\end{math} (14.4.8)

Thus, for an equality constraint the upper and lower bounds (which are equal) are perturbed simultaneously. From the point of view of the MOSEK sensitivity analysis, a ranged constraint with $l^{c}_{i} = u^{c}_{i}$ therefore differs from an equality constraint.

14.4.2. The basis type sensitivity analysis

The classical sensitivity analysis discussed in most textbooks on linear optimization, e.g. [17, Chapter 10], is based on an optimal basic solution or, equivalently, on an optimal basis. This method may produce misleading results [12, Chapter 19] but is computationally cheap. Therefore, and for historical reasons, this method is available in MOSEK.

We will now briefly discuss the basis type sensitivity analysis. Given an optimal basic solution, which provides a partition of the variables into basic and non-basic ones, the basis type sensitivity analysis computes the linearity interval $[\beta_1, \beta_2]$ such that the basis remains optimal for the perturbed problem. A shadow price associated with the linearity interval is also computed. However, it is well known that an optimal basic solution may not be unique, and therefore the result depends on the optimal basic solution employed in the sensitivity analysis. This implies that the computed interval is only a subset of the largest interval in which the shadow price is constant. Furthermore, the optimal objective value function may have a breakpoint at $\beta = 0$. In this case the basis type sensitivity method will only provide a subset of either the left or the right linearity interval.

In summary, the basis type sensitivity analysis is computationally cheap but does not provide complete information. Hence, its results should be used with care.

14.4.3. The optimal partition type sensitivity analysis

Another method, which computes the complete linearity interval, is the optimal partition type sensitivity analysis. Its main drawback is that it is computationally expensive. This type of sensitivity analysis is currently provided as an experimental feature in MOSEK.

Given optimal primal and dual solutions to (14.4.1), say $x^{*}$ and $((s_{l}^{c})^{*}, (s_{u}^{c})^{*}, (s_{l}^{x})^{*}, (s_{u}^{x})^{*})$, the optimal objective value is given by

\begin{math}\nonumber{}z^{*}:=c^{T}x^{*}.\end{math} (14.4.9)

The left and right shadow prices $\sigma_1$ and $\sigma_2$ for $l^{c}_{i}$ are given by the pair of optimization problems

\begin{math}\nonumber{}\begin{array}{rclccl}\nonumber{}\sigma _{1} & = & \mbox{minimize} & e_{i}^{T}s_{l}^{c} &  & \\\nonumber{} &  & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c,\\\nonumber{} &  &  & (l_{c})^{T}(s_{l}^{c})-(u_{c})^{T}(s_{u}^{c})+(l_{x})^{T}(s_{l}^{x})-(u_{x})^{T}(s_{u}^{x}) & = & z^{*},\\\nonumber{} &  &  & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x}\geq{}0 &  &\end{array}\end{math} (14.4.10)

and

\begin{math}\nonumber{}\begin{array}{rclccl}\nonumber{}\sigma _{2} & = & \mbox{maximize} & e_{i}^{T}s_{l}^{c} &  & \\\nonumber{} &  & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c,\\\nonumber{} &  &  & (l_{c})^{T}(s_{l}^{c})-(u_{c})^{T}(s_{u}^{c})+(l_{x})^{T}(s_{l}^{x})-(u_{x})^{T}(s_{u}^{x}) & = & z^{*},\\\nonumber{} &  &  & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x}\geq{}0. &  &\end{array}\end{math} (14.4.11)

The two optimization problems above make the shadow price easy to interpret. Indeed, if $((s_{l}^{c})^{*}, (s_{u}^{c})^{*}, (s_{l}^{x})^{*}, (s_{u}^{x})^{*})$ is an arbitrary dual optimal solution, then it must hold that

\begin{math}\nonumber{}(s_{l}^{c})_{i}^{*}\in{}[\sigma _{1},\sigma _{2}].\end{math} (14.4.12)

Next, the linearity interval $[\beta_1, \beta_2]$ for $l^{c}_{i}$ is computed by solving the two optimization problems

\begin{math}\nonumber{}\begin{array}{lclccccl}\nonumber{}\beta _{1} & = & \mbox{minimize} &  &  & \beta  &  & \\\nonumber{} &  & \mbox{subject to} & l^{c}+\beta e_{i} & \leq{} & Ax & \leq{} & u^{c},\\\nonumber{} &  &  &  &  & c^{T}x-\sigma _{1}\beta  & = & z^{*},\\\nonumber{} &  &  &  &  & l^{x}\leq{}x\leq{}u^{x}, &  &\end{array}\end{math} (14.4.13)

and

\begin{math}\nonumber{}\begin{array}{lclccccl}\nonumber{}\beta _{2} & = & \mbox{maximize} &  &  & \beta  &  & \\\nonumber{} &  & \mbox{subject to} & l^{c}+\beta e_{i} & \leq{} & Ax & \leq{} & u^{c},\\\nonumber{} &  &  &  &  & c^{T}x-\sigma _{2}\beta  & = & z^{*},\\\nonumber{} &  &  &  &  & l^{x}\leq{}x\leq{}u^{x}. &  &\end{array}\end{math} (14.4.14)

The linearity intervals and shadow prices for $u^{c}_{i}$, $l^{x}_{j}$, and $u^{x}_{j}$ are computed similarly to those for $l^{c}_{i}$.

The left and right shadow prices for $c_{j}$, denoted $\sigma_1$ and $\sigma_2$ respectively, are given by the pair of optimization problems

\begin{math}\nonumber{}\begin{array}{lclccccl}\nonumber{}\sigma _{1} & = & \mbox{minimize} &  &  & e_{j}^{T}x &  & \\\nonumber{} &  & \mbox{subject to} & l^{c} & \leq{} & Ax & \leq{} & u^{c},\\\nonumber{} &  &  &  &  & c^{T}x & = & z^{*},\\\nonumber{} &  &  &  &  & l^{x}\leq{}x\leq{}u^{x} &  &\end{array}\end{math} (14.4.15)

and

\begin{math}\nonumber{}\begin{array}{lclccccl}\nonumber{}\sigma _{2} & = & \mbox{maximize} &  &  & e_{j}^{T}x &  & \\\nonumber{} &  & \mbox{subject to} & l^{c} & \leq{} & Ax & \leq{} & u^{c},\\\nonumber{} &  &  &  &  & c^{T}x & = & z^{*},\\\nonumber{} &  &  &  &  & l^{x}\leq{}x\leq{}u^{x}. &  &\end{array}\end{math} (14.4.16)

Once again, the two optimization problems above make the shadow prices easy to interpret. Indeed, if $x^{*}$ is an arbitrary primal optimal solution, then it must hold that

\begin{math}\nonumber{}x_{j}^{*}\in{}[\sigma _{1},\sigma _{2}].\end{math} (14.4.17)

The linearity interval $[\beta_1, \beta_2]$ for $c_{j}$ is computed as follows:

\begin{math}\nonumber{}\begin{array}{rclccl}\nonumber{}\beta _{1} & = & \mbox{minimize} & \beta  &  & \\\nonumber{} &  & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c+\beta e_{j},\\\nonumber{} &  &  & (l_{c})^{T}(s_{l}^{c})-(u_{c})^{T}(s_{u}^{c})+(l_{x})^{T}(s_{l}^{x})-(u_{x})^{T}(s_{u}^{x})-\sigma _{1}\beta  & \leq{} & z^{*},\\\nonumber{} &  &  & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x}\geq{}0 &  &\end{array}\end{math} (14.4.18)

and

\begin{math}\nonumber{}\begin{array}{rclccl}\nonumber{}\beta _{2} & = & \mbox{maximize} & \beta  &  & \\\nonumber{} &  & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c+\beta e_{j},\\\nonumber{} &  &  & (l_{c})^{T}(s_{l}^{c})-(u_{c})^{T}(s_{u}^{c})+(l_{x})^{T}(s_{l}^{x})-(u_{x})^{T}(s_{u}^{x})-\sigma _{2}\beta  & \leq{} & z^{*},\\\nonumber{} &  &  & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x}\geq{}0. &  &\end{array}\end{math} (14.4.19)

14.4.4. An example

As an example we will use the following transportation problem: minimize the cost of transportation between a number of production plants and stores. Each plant supplies a number of goods and each store has a given demand that must be met. Supply, demand, and the cost of transportation per unit are shown in Figure 14.2.

Figure 14.2: Supply, demand and cost of transportation.

If we denote the number of units transported from location i to location j by $x_{ij}$, the problem can be formulated as the linear optimization problem

minimize

\begin{math}\nonumber{}\begin{array}{ccccccccccccccl}\nonumber{}1x_{{11}} & + & 2x_{{12}} & + & 5x_{{23}} & + & 2x_{{24}} & + & 1x_{{31}} & + & 2x_{{33}} & + & 1x_{{34}}\end{array}\end{math} (14.4.20)

subject to

\begin{math}\nonumber{}\begin{array}{ccccccccccccccl}\nonumber{}x_{{11}} & + & x_{{12}} &  &  &  &  &  &  &  &  &  &  & \leq{} & 400,\\\nonumber{} &  &  &  & x_{{23}} & + & x_{{24}} &  &  &  &  &  &  & \leq{} & 1200,\\\nonumber{} &  &  &  &  &  &  &  & x_{{31}} & + & x_{{33}} & + & x_{{34}} & \leq{} & 1000,\\\nonumber{}x_{{11}} &  &  &  &  &  &  & + & x_{{31}} &  &  &  &  & = & 800,\\\nonumber{} &  & x_{{12}} &  &  &  &  &  &  &  &  &  &  & = & 100,\\\nonumber{} &  &  &  & x_{{23}} & + &  &  &  &  & x_{{33}} &  &  & = & 500,\\\nonumber{} &  &  &  &  &  & x_{{24}} & + &  &  &  &  & x_{{34}} & = & 500,\\\nonumber{}x_{{11}}, &  & x_{{12}}, &  & x_{{23}}, &  & x_{{24}}, &  & x_{{31}}, &  & x_{{33}}, &  & x_{{34}} & \geq{} & 0.\end{array}\end{math} (14.4.21)
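For reference, the optimal objective value of (14.4.20)-(14.4.21) can be reproduced with any LP solver. The sketch below is an illustration only, using scipy.optimize.linprog as a stand-in solver rather than MOSEK.

```python
# Sketch: solve the transportation problem (14.4.20)-(14.4.21).
# scipy.optimize.linprog is used as a stand-in for MOSEK.
from scipy.optimize import linprog

# Variables ordered as x11, x12, x23, x24, x31, x33, x34.
c = [1, 2, 5, 2, 1, 2, 1]

# Supply (capacity) constraints, <=.
A_ub = [[1, 1, 0, 0, 0, 0, 0],
        [0, 0, 1, 1, 0, 0, 0],
        [0, 0, 0, 0, 1, 1, 1]]
b_ub = [400, 1200, 1000]

# Demand constraints, =.
A_eq = [[1, 0, 0, 0, 1, 0, 0],
        [0, 1, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 1, 0],
        [0, 0, 0, 1, 0, 0, 1]]
b_eq = [800, 100, 500, 500]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 7)
print(res.fun)  # optimal transportation cost
print(res.x)    # optimal shipment plan
```

The marginal values (`res.marginals` on the constraint objects in newer scipy results) can also be compared against the shadow prices reported in the tables below, keeping in mind the sign conventions of the solver.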

The basis type and the optimal partition type sensitivity results for the transportation problem are shown in Tables 14.1 and 14.2, respectively.

Basis type
Con.      $\beta_1$   $\beta_2$   $\sigma_1$  $\sigma_2$
1         -300.00     0.00        3.00        3.00
2         -700.00     $\infty$    0.00        0.00
3         -500.00     0.00        3.00        3.00
4         -0.00       500.00      4.00        4.00
5         -0.00       300.00      5.00        5.00
6         -0.00       700.00      5.00        5.00
7         -500.00     700.00      2.00        2.00
Var.      $\beta_1$   $\beta_2$   $\sigma_1$  $\sigma_2$
$x_{11}$  $-\infty$   300.00      0.00        0.00
$x_{12}$  $-\infty$   100.00      0.00        0.00
$x_{23}$  $-\infty$   0.00        0.00        0.00
$x_{24}$  $-\infty$   500.00      0.00        0.00
$x_{31}$  $-\infty$   500.00      0.00        0.00
$x_{33}$  $-\infty$   500.00      0.00        0.00
$x_{34}$  -0.00       500.00      2.00        2.00

Optimal partition type
Con.      $\beta_1$   $\beta_2$   $\sigma_1$  $\sigma_2$
1         -300.00     500.00      3.00        1.00
2         -700.00     $\infty$    -0.00       -0.00
3         -500.00     500.00      3.00        1.00
4         -500.00     500.00      2.00        4.00
5         -100.00     300.00      3.00        5.00
6         -500.00     700.00      3.00        5.00
7         -500.00     700.00      2.00        2.00
Var.      $\beta_1$   $\beta_2$   $\sigma_1$  $\sigma_2$
$x_{11}$  $-\infty$   300.00      0.00        0.00
$x_{12}$  $-\infty$   100.00      0.00        0.00
$x_{23}$  $-\infty$   500.00      0.00        2.00
$x_{24}$  $-\infty$   500.00      0.00        0.00
$x_{31}$  $-\infty$   500.00      0.00        0.00
$x_{33}$  $-\infty$   500.00      0.00        0.00
$x_{34}$  $-\infty$   500.00      0.00        2.00
Table 14.1: Ranges and shadow prices related to bounds on constraints and variables. Top: results of the basis type sensitivity analysis. Bottom: results of the optimal partition type sensitivity analysis.

Basis type
Var.      $\beta_1$   $\beta_2$   $\sigma_1$  $\sigma_2$
$x_{11}$  $-\infty$   3.00        300.00      300.00
$x_{12}$  $-\infty$   $\infty$    100.00      100.00
$x_{23}$  -2.00       $\infty$    0.00        0.00
$x_{24}$  $-\infty$   2.00        500.00      500.00
$x_{31}$  -3.00       $\infty$    500.00      500.00
$x_{33}$  $-\infty$   2.00        500.00      500.00
$x_{34}$  -2.00       $\infty$    0.00        0.00

Optimal partition type
Var.      $\beta_1$   $\beta_2$   $\sigma_1$  $\sigma_2$
$x_{11}$  $-\infty$   3.00        300.00      300.00
$x_{12}$  $-\infty$   $\infty$    100.00      100.00
$x_{23}$  -2.00       $\infty$    0.00        0.00
$x_{24}$  $-\infty$   2.00        500.00      500.00
$x_{31}$  -3.00       $\infty$    500.00      500.00
$x_{33}$  $-\infty$   2.00        500.00      500.00
$x_{34}$  -2.00       $\infty$    0.00        0.00
Table 14.2: Ranges and shadow prices related to the objective coefficients. Top: results of the basis type sensitivity analysis. Bottom: results of the optimal partition type sensitivity analysis.

Looking at the results from the optimal partition type sensitivity analysis, we see that for constraint 1 we have $\beta_1 = -300$ and $\beta_2 = 500$. Therefore, the left linearity interval is $[-300, 0]$ and the right interval is $[0, 500]$. The corresponding left and right shadow prices are 3 and 1, respectively. This implies that if the upper bound on constraint 1 increases by

\begin{math}\nonumber{}\beta \in{}[0,\beta _{2}]=[0,500]\end{math} (14.4.22)

then the optimal objective value will decrease by the value

\begin{math}\nonumber{}\sigma _{2}\beta =1\beta .\end{math} (14.4.23)

Correspondingly, if the upper bound on constraint 1 is decreased by

\begin{math}\nonumber{}\beta \in{}[0,300]\end{math} (14.4.24)

then the optimal objective value will increase by the value

\begin{math}\nonumber{}\sigma _{1}\beta =3\beta .\end{math} (14.4.25)
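These left and right shadow prices can be verified numerically by re-solving the transportation problem of Section 14.4.4 with a perturbed capacity. The sketch below again uses scipy.optimize.linprog as a stand-in solver; the chosen perturbations of $\pm 100$ stay inside the linearity intervals $[-300, 0]$ and $[0, 500]$.

```python
# Sketch: verify the left/right shadow prices of constraint 1 by
# re-solving (14.4.20)-(14.4.21) with the capacity of plant 1 perturbed.
# scipy.optimize.linprog is a stand-in solver, not MOSEK.
from scipy.optimize import linprog

def opt_value(beta):
    # Variables ordered as x11, x12, x23, x24, x31, x33, x34.
    c = [1, 2, 5, 2, 1, 2, 1]
    A_ub = [[1, 1, 0, 0, 0, 0, 0],   # constraint 1: capacity 400 + beta
            [0, 0, 1, 1, 0, 0, 0],
            [0, 0, 0, 0, 1, 1, 1]]
    b_ub = [400 + beta, 1200, 1000]
    A_eq = [[1, 0, 0, 0, 1, 0, 0],
            [0, 1, 0, 0, 0, 0, 0],
            [0, 0, 1, 0, 0, 1, 0],
            [0, 0, 0, 1, 0, 0, 1]]
    b_eq = [800, 100, 500, 500]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                   bounds=[(0, None)] * 7).fun

z = opt_value(0)
print(z - opt_value(100))   # decrease for beta = 100: sigma_2 * beta = 1 * 100
print(opt_value(-100) - z)  # increase for beta = 100: sigma_1 * beta = 3 * 100
```

Note that a single re-solve only reveals one of the two one-sided shadow prices; the basis type analysis reports just one of them, which is exactly the caveat discussed in Section 14.4.2.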

14.5. Sensitivity analysis with the command line tool

A sensitivity analysis can be performed with the MOSEK command line tool using the command

mosek myproblem.mps -sen sensitivity.ssp

where sensitivity.ssp is a file in the format described in the next section. The ssp file describes which parts of the problem the sensitivity analysis should be performed on.

By default the results are written to a file named myproblem.sen. If necessary, this filename can be changed by setting the parameter
MSK_SPAR_SENSITIVITY_RES_FILE_NAME
By default a basis type sensitivity analysis is performed. However, the type of sensitivity analysis (basis or optimal partition) can be changed by setting the parameter
MSK_IPAR_SENSITIVITY_TYPE
appropriately.

It is also possible to use the command line

mosek myproblem.mps -d MSK_IPAR_SENSITIVITY_ALL MSK_ON

in which case a sensitivity analysis on all the parameters is performed.

14.5.1. Sensitivity analysis specification file

MOSEK employs an MPS-like file format to specify on which model parameters the sensitivity analysis should be performed. Since the optimal partition type sensitivity analysis can be computationally expensive, it is important to limit the analysis to the parameters of interest.

* A comment
BOUNDS CONSTRAINTS
 U|L|LU [cname1]
 U|L|LU [cname2]-[cname3]
BOUNDS VARIABLES
 U|L|LU [vname1]
 U|L|LU [vname2]-[vname3]
OBJECTIVE VARIABLES
 [vname1]
 [vname2]-[vname3]
Figure 14.3: The sensitivity analysis file format.

The format of the sensitivity specification file is shown in Figure 14.3, where capitalized names are keywords and names in brackets are the names of the constraints and variables to be included in the analysis.

The sensitivity specification file has three sections:

  • BOUNDS CONSTRAINTS: Specifies on which bounds on constraints the sensitivity analysis should be performed.
  • BOUNDS VARIABLES: Specifies on which bounds on variables the sensitivity analysis should be performed.
  • OBJECTIVE VARIABLES: Specifies on which objective coefficients the sensitivity analysis should be performed.

A line in the body of a section must begin with whitespace. In the BOUNDS sections, one of the keys L, U, and LU must appear next. These keys specify whether the sensitivity analysis should be performed on the lower bound, on the upper bound, or on both the lower and the upper bound, respectively. Next, a single constraint (variable) or a range of constraints (variables) is specified.

Recall from Section 14.4.1.1 that equality constraints are handled in a special way. Sensitivity analysis of an equality constraint can be specified with either L, U, or LU, all indicating the same, namely that upper and lower bounds (which are equal) are perturbed simultaneously.

As an example consider

BOUNDS CONSTRAINTS
 L  "cons1"
 U  "cons2"
 LU "cons3"-"cons6"

which requests that sensitivity analysis is performed on the lower bound of the constraint named cons1, on the upper bound of the constraint named cons2, and on both lower and upper bound on the constraints named cons3 to cons6.

It is allowed to use indexes instead of names, for instance

BOUNDS CONSTRAINTS
 L  "cons1"
 U  2
 LU 3 - 6

The character “*” indicates that the line contains a comment and is ignored.

14.5.2. Example: Sensitivity analysis from command line

As an example consider the sensitivity.ssp file shown in Figure 14.4.

* Comment 1

BOUNDS CONSTRAINTS
 U "c1"         * Analyze upper bound for constraint named c1
 U 2            * Analyze upper bound for the second constraint
 U 3-5          * Analyze upper bound for constraint number 3 to number 5

BOUNDS VARIABLES
 L 2-4          * This section specifies which bounds on variables should be analyzed
 L "x11"
OBJECTIVE VARIABLES
 "x11"          * This section specifies which objective coefficients should be analyzed
 2

Figure 14.4: Example of the sensitivity file format.

The command

mosek transport.lp -sen sensitivity.ssp  -d MSK_IPAR_SENSITIVITY_TYPE MSK_SENSITIVITY_TYPE_BASIS

produces the transport.sen file shown below.

BOUNDS CONSTRAINTS
INDEX    NAME             BOUND    LEFTRANGE        RIGHTRANGE       LEFTPRICE        RIGHTPRICE
0        c1               UP       -6.574875e-18    5.000000e+02     1.000000e+00     1.000000e+00
2        c3               UP       -6.574875e-18    5.000000e+02     1.000000e+00     1.000000e+00
3        c4               FIX      -5.000000e+02    6.574875e-18     2.000000e+00     2.000000e+00
4        c5               FIX      -1.000000e+02    6.574875e-18     3.000000e+00     3.000000e+00
5        c6               FIX      -5.000000e+02    6.574875e-18     3.000000e+00     3.000000e+00

BOUNDS VARIABLES
INDEX    NAME             BOUND    LEFTRANGE        RIGHTRANGE       LEFTPRICE        RIGHTPRICE
2        x23              LO       -6.574875e-18    5.000000e+02     2.000000e+00     2.000000e+00
3        x24              LO       -inf             5.000000e+02     0.000000e+00     0.000000e+00
4        x31              LO       -inf             5.000000e+02     0.000000e+00     0.000000e+00
0        x11              LO       -inf             3.000000e+02     0.000000e+00     0.000000e+00

OBJECTIVE VARIABLES
INDEX    NAME                      LEFTRANGE        RIGHTRANGE       LEFTPRICE        RIGHTPRICE
0        x11                       -inf             1.000000e+00     3.000000e+02     3.000000e+02
2        x23                       -2.000000e+00    +inf             0.000000e+00     0.000000e+00

14.5.3. Controlling log output

Setting the parameter

MSK_IPAR_LOG_SENSITIVITY

to 1 or 0 (default) controls whether or not the results from sensitivity calculations are printed to the message stream.

The parameter

MSK_IPAR_LOG_SENSITIVITY_OPT

controls the amount of debug information on internal calculations from the sensitivity analysis.

Wed Feb 29 16:20:50 2012