6. Advanced API tutorial


This chapter provides information about additional problem classes and functionality provided in the C API.

6.1. Separable convex optimization

In this section we will discuss the solution of separable convex nonlinear optimization problems using MOSEK. We allow both nonlinear constraints and a nonlinear objective, but restrict ourselves to separable functions.

6.1.1. The problem

A general separable nonlinear optimization problem can be specified as follows:

\begin{math}\nonumber{}\begin{array}{lccccc}\nonumber{}\mbox{minimize} &  &  & f(x)+c^{T}x &  & \\\nonumber{}\mbox{subject to } & l^{c} & \leq{} & g(x)+Ax & \leq{} & u^{c},\\\nonumber{} & l^{x} & \leq{} & x & \leq{} & u^{x},\end{array}\end{math} (6.1.1)

where

  • m is the number of constraints.
  • n is the number of decision variables.
  • $x\in\mathbb{R}^{n}$ is a vector of decision variables.
  • $c\in\mathbb{R}^{n}$ is the linear part of the objective function.
  • $A\in\mathbb{R}^{m\times n}$ is the constraint matrix.
  • $l^{c}\in\mathbb{R}^{m}$ is the lower limit on the activity for the constraints.
  • $u^{c}\in\mathbb{R}^{m}$ is the upper limit on the activity for the constraints.
  • $l^{x}\in\mathbb{R}^{n}$ is the lower limit on the activity for the variables.
  • $u^{x}\in\mathbb{R}^{n}$ is the upper limit on the activity for the variables.
  • $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is a nonlinear function.
  • $g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ is a nonlinear vector function.

This implies that the ith constraint essentially has the form

\begin{displaymath}\nonumber{}l_{i}^{c}\leq{}g_{i}(x)+\sum \limits _{{j=1}}^{n}a_{{ij}}x_{j}\leq{}u_{i}^{c}.\end{displaymath}

The problem (6.1.1) must satisfy the three important requirements:

  1. Separability: This requirement implies that all nonlinear functions can be written on the form

    \begin{displaymath}\nonumber{}f(x)=\sum \limits _{{j=1}}^{n}f^{j}(x_{j})\end{displaymath}

    and

    \begin{displaymath}\nonumber{}g_{i}(x)=\sum \limits _{{j=1}}^{n}g_{i}^{j}(x_{j})\end{displaymath}

    where

    \begin{displaymath}\nonumber{}f^{j}:~\mathbb{R}\rightarrow \mathbb{R}\mbox{ and }g_{i}^{j}:~\mathbb{R}\rightarrow \mathbb{R}.\end{displaymath}

    Hence, the nonlinear functions can be written as a sum of functions each of which depends on only one variable.

  2. Differentiability: All functions should be twice differentiable for all $x_{j}$ satisfying

    \begin{displaymath}\nonumber{}l_{j}^{x}<x_{j}<u_{j}^{x}\end{displaymath}

    if $x_{j}$ occurs in at least one nonlinear function.

  3. Convexity: The problem should be a convex optimization problem. See Section 7.4 for a discussion of this requirement.
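
For example, the objective $f(x)=x_{1}\ln(x_{1})+e^{x_{2}}$ satisfies all three requirements provided the bound $x_{1}\geq 0$ is imposed: it is a sum of functions of a single variable, each of which is convex and twice differentiable for $x_{1}>0$. A term such as $\ln(x_{1}+2x_{2})$, on the other hand, violates the separability requirement; the next section shows how such a term can be reformulated.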

6.1.2. A numerical example

Subsequently, we will use the following example to demonstrate the solution of a separable convex optimization problem using MOSEK:

\begin{math}\nonumber{}\begin{array}{lccl}\nonumber{}\mbox{minimize} & x_{1}-\ln (x_{1}+2x_{2}) &  & \\\nonumber{}\mbox{subject to} & x_{1}^{2}+x_{2}^{2} & \leq{} & 1\end{array}\end{math} (6.1.2)

First note that the problem (6.1.2) is not a separable optimization problem because the logarithmic term in the objective is not a function of a single variable. However, by introducing a constraint and a variable the problem can be made separable as follows

\begin{math}\nonumber{}\begin{array}{lccl}\nonumber{}\mbox{minimize} & x_{1}-\ln (x_{3}) &  & \\\nonumber{}\mbox{subject to} & x_{1}^{2}+x_{2}^{2} & \leq{} & 1,\\\nonumber{} & x_{1}+2x_{2}-x_{3} & = & 0,\\\nonumber{} & x_{3}\geq{}0. &  &\end{array}\end{math} (6.1.3)

This problem is obviously separable and equivalent to the previous problem. Moreover, note that all nonlinear functions are well defined for x values satisfying the variable bounds strictly, i.e.

\begin{displaymath}\nonumber{}x_{3}>0.\end{displaymath}

This ensures that function evaluation errors will not occur during the optimization process, because MOSEK will only evaluate $\ln(x_{3})$ for $x_{3}>0$.

Frequently the method employed above can be used to make convex optimization problems separable even if they are not formulated as such initially. The reader might object that this approach is inefficient because additional constraints and variables are introduced to make the problem separable. However, in our experience this drawback is largely offset by the much simpler structure of the nonlinear functions. In particular, the evaluation of the nonlinear functions and of their gradients and Hessians is much easier in the separable case.

6.1.3. scopt: an optimizer for separable convex optimization

scopt is an easy-to-use interface to MOSEK's nonlinear capabilities for solving separable convex problems. As currently implemented, scopt is not capable of handling arbitrary nonlinear expressions. In fact scopt can handle only the nonlinear expressions $f x\ln(x)$, $f e^{gx+h}$, $f\ln(gx+h)$, and $f(x+h)^{g}$. However, in a subsequent section we will demonstrate that it is easy to expand the set of nonlinear expressions that scopt can handle.

6.1.3.1. Design principles of scopt

All the linear data of the problem, such as c and A, is inputted to MOSEK as usual, i.e. using the relevant functions in the MOSEK API.

The nonlinear part of the problem is specified using some arrays which indicate the type of the nonlinear expressions and where these should be added.

For example, given the three int arrays — oprc, opric, and oprjc — and the three double arrays — oprfc, oprgc, and oprhc — the nonlinear expressions in the constraints can be coded in those arrays using the following table:

oprc[k]  opric[k]  oprjc[k]  oprfc[k]  oprgc[k]  oprhc[k]  Expression added to constraint i
0        i         j         f         g         h         $f x_{j}\ln(x_{j})$
1        i         j         f         g         h         $f e^{g x_{j}+h}$
2        i         j         f         g         h         $f\ln(g x_{j}+h)$
3        i         j         f         g         h         $f(x_{j}+h)^{g}$

Hence, oprc[k] specifies the nonlinear expression type, opric[k] indicates to which constraint the expression should be added, and oprjc[k] specifies which variable it involves. oprfc[k], oprgc[k], and oprhc[k] are parameters used when the nonlinear expression is evaluated. Nonlinear expressions can therefore be added to arbitrary constraints, and hence you can create multiple nonlinear constraints.

Using the same method all the nonlinear terms in the objective can be specified using opro[k], oprjo[k], oprfo[k], oprgo[k] and oprho[k] as shown below:

opro[k]  oprjo[k]  oprfo[k]  oprgo[k]  oprho[k]  Expression added to the objective
0        j         f         g         h         $f x_{j}\ln(x_{j})$
1        j         f         g         h         $f e^{g x_{j}+h}$
2        j         f         g         h         $f\ln(g x_{j}+h)$
3        j         f         g         h         $f(x_{j}+h)^{g}$

6.1.3.2. Example

Suppose we want to add the nonlinear expression $-\ln(x_{3})$ to the objective. This is an expression of the form $f\ln(g x_{j}+h)$ where f=-1, g=1, h=0 and j=3. This can be represented by:

opro[0]  = 2 
oprjo[0] = 3
oprfo[0] = -1.0
oprgo[0] = 1.0
oprho[0] = 0.0
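
Similarly, the constraint $x_{1}^{2}+x_{2}^{2}\leq 1$ of (6.1.3) contains two expressions of the form $f(x_{j}+h)^{g}$, both with f=1, g=2 and h=0, added to the first constraint. Using the 0-based indices of the C API, as in the full listing in Section 6.1.3.3, this reads:

oprc[0]  = 3       /* i.e. MSK_OPR_POW     */
opric[0] = 0       /* the first constraint */
oprjc[0] = 0       /* the variable x_1     */
oprfc[0] = 1.0
oprgc[0] = 2.0
oprhc[0] = 0.0

oprc[1]  = 3
opric[1] = 0
oprjc[1] = 1       /* the variable x_2     */
oprfc[1] = 1.0
oprgc[1] = 2.0
oprhc[1] = 0.0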

6.1.3.3. Source code

The source code for scopt consists of the files:

  • scopt.h: An include file that defines the two functions MSK_scbegin and MSK_scend, which are used to initialize and remove the nonlinear function data.
  • scopt.c : This file implements the nonlinear initialization and evaluation functions.
  • tstscopt.c : This file solves the example problem (6.1.2) using scopt.c.

These three files are all available in the directory

mosek\6\tools\examples\c

We will not discuss the implementation of scopt in detail but rather refer the reader to the source code in scopt.c, which is included in the distribution. The driver program tstscopt.c, which solves the example (6.1.2), is shown below.

/* Copyright: Copyright (c) 1998-2011 MOSEK ApS, Denmark. All rights reserved. File : tstscopt.c Purpose : To solve the problem minimize x_1 - log(x_3) subject to x_1^2 + x_2^2 <= 1 x_1 + 2*x_2 - x_3 = 0 x_3 >=0 */ #include "scopt.h" #define NUMOPRO 1 /* Number of nonlinear expressions in the obj. */ #define NUMOPRC 2 /* Number of nonlinear expressions in the con. */ #define NUMVAR 3 /* Number of variables. */ #define NUMCON 2 /* Number of constraints. */ #define NUMANZ 3 /* Number of non-zeros in A. */ static void MSKAPI printstr(void *handle, char str[]) { printf("%s",str); } /* printstr */ int main() { char buffer[MSK_MAX_STR_LEN]; double oprfo[NUMOPRO],oprgo[NUMOPRO],oprho[NUMOPRO], oprfc[NUMOPRC],oprgc[NUMOPRC],oprhc[NUMOPRC], c[NUMVAR],aval[NUMANZ], blc[NUMCON],buc[NUMCON],blx[NUMVAR],bux[NUMVAR]; int numopro,numoprc, numcon=NUMCON,numvar=NUMVAR, opro[NUMOPRO],oprjo[NUMOPRO], oprc[NUMOPRC],opric[NUMOPRC],oprjc[NUMOPRC], aptrb[NUMVAR],aptre[NUMVAR],asub[NUMANZ]; MSKboundkeye bkc[NUMCON],bkx[NUMVAR]; MSKenv_t env; MSKrescodee r; MSKtask_t task; schand_t sch; /* Specify nonlinear terms in the objective. */ numopro = NUMOPRO; opro[0] = MSK_OPR_LOG; /* Defined in scopt.h */ oprjo[0] = 2; oprfo[0] = -1.0; oprgo[0] = 1.0; /* This value is never used. */ oprho[0] = 0.0; /* Specify nonlinear terms in the constraints. */ numoprc = NUMOPRC; oprc[0] = MSK_OPR_POW; opric[0] = 0; oprjc[0] = 0; oprfc[0] = 1.0; oprgc[0] = 2.0; oprhc[0] = 0.0; oprc[1] = MSK_OPR_POW; opric[1] = 0; oprjc[1] = 1; oprfc[1] = 1.0; oprgc[1] = 2.0; oprhc[1] = 0.0; /* Specify c */ c[0] = 1.0; c[1] = 0.0; c[2] = 0.0; /* Specify a. */ aptrb[0] = 0; aptrb[1] = 1; aptrb[2] = 2; aptre[0] = 1; aptre[1] = 2; aptre[2] = 3; asub[0] = 1; asub[1] = 1; asub[2] = 1; aval[0] = 1.0; aval[1] = 2.0; aval[2] = -1.0; /* Specify bounds for constraints. */ bkc[0] = MSK_BK_UP; bkc[1] = MSK_BK_FX; blc[0] = -MSK_INFINITY; blc[1] = 0.0; buc[0] = 1.0; buc[1] = 0.0; /* Specify bounds for variables. */ bkx[0] = MSK_BK_FR; bkx[1] = MSK_BK_FR; bkx[2] = MSK_BK_LO; blx[0] = -MSK_INFINITY; blx[1] = -MSK_INFINITY; blx[2] = 0.0; bux[0] = MSK_INFINITY; bux[1] = MSK_INFINITY; bux[2] = MSK_INFINITY; /* Create the mosek environment. */ r = MSK_makeenv(&env,NULL,NULL,NULL,NULL); /* Check whether the return code is ok. */ if ( r==MSK_RES_OK ) { /* Directs the log stream to the user specified procedure 'printstr'. */ MSK_linkfunctoenvstream(env,MSK_STREAM_LOG,NULL,printstr); } if ( r==MSK_RES_OK ) { /* Initialize the environment. */ r = MSK_initenv(env); } if ( r==MSK_RES_OK ) { /* Make the optimization task. */ r = MSK_makeemptytask(env,&task); if ( r==MSK_RES_OK ) MSK_linkfunctotaskstream(task,MSK_STREAM_LOG,NULL,printstr); if ( r==MSK_RES_OK ) { r = MSK_inputdata(task, numcon,numvar, numcon,numvar, c,0.0, aptrb,aptre, asub,aval, bkc,blc,buc, bkx,blx,bux); } if ( r== MSK_RES_OK ) { /* Set-up of nonlinear expressions. */ r = MSK_scbegin(task, numopro,opro,oprjo,oprfo,oprgo,oprho, numoprc,oprc,opric,oprjc,oprfc,oprgc,oprhc, &sch); if ( r==MSK_RES_OK ) { printf("Start optimizing\n"); r = MSK_optimize(task); printf("Done optimizing\n"); MSK_solutionsummary(task,MSK_STREAM_MSG); } /* The nonlinear expressions are no longer needed. */ MSK_scend(task,&sch); } MSK_deletetask(&task); } MSK_deleteenv(&env); printf("Return code: %d\n",r); if ( r!=MSK_RES_OK ) { MSK_getcodedisc(r,buffer,NULL); printf("Description: %s\n",buffer); } } /* main */

6.1.3.4. Adding more nonlinear expression types

scopt handles only a limited number of nonlinear expression types. However, it is easy to add a new operator such as the square root operator. The first step is to define the new operator in the file scopt.h, which after the modification contains the lines:

#define MSK_OPR_ENT 0
#define MSK_OPR_EXP 1
#define MSK_OPR_LOG 2
#define MSK_OPR_POW 3
#define MSK_OPR_SQRT 4 /* constant for square root operator */

Next the function evalopr in the file scopt.c should be modified. The purpose of evalopr is to compute the function value, the gradient (first derivative), and the Hessian (second derivative) for a nonlinear expression. The modified function has the form:

static int evalopr(int    opr,
                   double f,
                   double g,
                   double h,
                   double xj,
                   double *fxj,
                   double *grdfxj,
                   double *hesfxj)
/* Purpose: To evaluate an operator and its derivatives.
            fxj:    Is the function value.
            grdfxj: Is the first derivative.
            hesfxj: Is the second derivative.
 */
{
  double rtemp;

  switch ( opr )
  {
    case MSK_OPR_ENT:
      if ( fxj )
        fxj[0]    = f*xj*log(xj);
      if ( grdfxj )
        grdfxj[0] = f*(1.0+log(xj));
      if ( hesfxj )
        hesfxj[0] = f/xj;
      break;
    case MSK_OPR_EXP:
      if ( fxj || grdfxj || hesfxj )
      {
        rtemp = exp(g*xj+h);

        if ( fxj )
          fxj[0]    = f*rtemp;
        if ( grdfxj )
          grdfxj[0] = f*g*rtemp;
        if ( hesfxj )
          hesfxj[0] = f*g*g*rtemp;
      }
      break;
    case MSK_OPR_LOG:
      rtemp = g*xj+h;
      if ( rtemp<=0.0 )
        return ( 1 );

      if ( fxj )
        fxj[0]    = f*log(rtemp);
      if ( grdfxj )
        grdfxj[0] = (g*f)/(rtemp);
      if ( hesfxj )
        hesfxj[0] = -(f*g*g)/(rtemp*rtemp);
      break;
    case MSK_OPR_POW:
      if ( fxj )
        fxj[0]    = f*pow(xj+h,g);
      if ( grdfxj )
        grdfxj[0] = f*g*pow(xj+h,g-1.0);
      if ( hesfxj )
        hesfxj[0] = f*g*(g-1.0)*pow(xj+h,g-2.0);
      break;
    case MSK_OPR_SQRT: /* Handle the operator f*sqrt(g*x+h). */
      if ( fxj )
        fxj[0]    = f*sqrt(g*xj+h);               /* The function value.    */
      if ( grdfxj )
        grdfxj[0] = 0.5*f*g/sqrt(g*xj+h);         /* The first derivative.  */
      if ( hesfxj )
        hesfxj[0] = -0.25*f*g*g*pow(g*xj+h,-1.5); /* The second derivative. */
      break;
    default:
      printf("scopt.c: Unknown operator %d\n",opr);
      exit(0);
  }

  return ( 0 );
} /* evalopr */
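
With these two modifications the new operator is used in the same way as the built-in ones. For instance (a hypothetical illustration; the term and its coefficients are chosen only for demonstration), adding the term $2\sqrt{x_{1}+1}$, i.e. $f\sqrt{g x_{j}+h}$ with f=2, g=1 and h=1, to the objective could be coded as:

opro[0]  = 4       /* MSK_OPR_SQRT       */
oprjo[0] = 0       /* the variable x_1   */
oprfo[0] = 2.0
oprgo[0] = 1.0
oprho[0] = 1.0

Remember that the convexity requirement from Section 6.1.1 still applies to the resulting problem.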

6.2. Exponential optimization

6.2.1. The problem

An exponential optimization problem has the form

\begin{math}\nonumber{}\begin{array}{lccll}\nonumber{}\mbox{minimize} & \sum \limits _{{k\in{}J_{0}}}c_{k}e^{{\left(\sum \limits _{{j=0}}^{{n-1}}a_{{k,j}}x_{j}\right)}} &  &  & \\\nonumber{}\mbox{subject to} & \sum \limits _{{k\in{}J_{i}}}c_{k}e^{{\left(\sum \limits _{{j=0}}^{{n-1}}a_{{k,j}}x_{j}\right)}} & \leq{} & 1, & i=1,\ldots ,m,\\\nonumber{} & x\in{}\mathbb{R}^{n} &  &  &\end{array}\end{math} (6.2.1)

where it is assumed that

\begin{displaymath}\nonumber{}\cup _{{i=0}}^{m}J_{i}=\lbrace{}1,\ldots ,T\rbrace{}\end{displaymath}

and

\begin{displaymath}\nonumber{}J_{i}\cap J_{j}=\emptyset\end{displaymath}

if $i\neq j$.

Given

\begin{displaymath}\nonumber{}c_{t}>0,\quad t=1,\ldots ,T\end{displaymath}

the problem (6.2.1) is a convex optimization problem which can be solved using MOSEK. We will call

\begin{displaymath}\nonumber{}c_{t}e^{{\left(\sum \limits _{{j=0}}^{{n-1}}a_{{t,j}}x_{j}\right)}}=e^{{\left(\log (c_{t})+\sum \limits _{{j=0}}^{{n-1}}a_{{t,j}}x_{j}\right)}}\end{displaymath}

a term, and hence the number of terms is T.

As stated the problem (6.2.1) is a nonseparable problem. However, using

\begin{displaymath}\nonumber{}v_{t}=\log (c_{t})+\sum \limits _{{j=0}}^{{n-1}}a_{{tj}}x_{j},\end{displaymath}

so that each term equals $e^{v_{t}}$, we obtain the separable problem

\begin{math}\nonumber{}\begin{array}{lccll}\nonumber{}\mbox{minimize} & \sum \limits _{{t\in{}J_{0}}}e^{{v_{t}}} &  &  & \\\nonumber{}\mbox{subject to} & \sum \limits _{{t\in{}J_{i}}}e^{{v_{t}}} & \leq{} & 1, & i=1,\ldots ,m,\\\nonumber{} & \sum \limits _{{j=0}}^{{n-1}}a_{{t,j}}x_{j}-v_{t} & = & -\log (c_{t}), & t=0,\ldots ,T,\end{array}\end{math} (6.2.2)

which could be solved using the scopt interface discussed in Section 6.1. A warning about this approach is that computing the function

\begin{displaymath}\nonumber{}e^{x}\end{displaymath}

using double-precision floating-point numbers is only possible for small values of x in absolute value; for instance, $e^{x}$ already overflows the double-precision range for x larger than about 709. Indeed, $e^{x}$ grows very rapidly for larger x values, and numerical problems may arise when solving the problem in this form.

It is also possible to reformulate the exponential optimization problem (6.2.1) as a dual geometric optimization problem (6.4.1). This is often the preferred solution approach because it is computationally more efficient and the numerical problems associated with evaluating $e^{x}$ for moderately large x values are avoided.

6.2.2. Source code

The MOSEK distribution includes the source code for a program that enables you to:

  1. Read (and write) a data file stating an exponential optimization problem.
  2. Verify that the input data is reasonable.
  3. Solve the problem via the exponential optimization problem (6.2.2) or the dual geometric optimization problem (6.4.1).
  4. Write a solution file.

6.2.3. Solving from the command line.

In the following we will discuss the program mskexpopt, which is included in the MOSEK distribution in both source and compiled form. Hence, you can solve exponential optimization problems from the operating system command line or directly from your own C program.

6.2.3.1. The input format

First we will define a text input format for specifying an exponential optimization problem. This is as follows:

\begin{displaymath}\nonumber{}\begin{array}{rll}\nonumber{}\mbox{* This is a comment} &  & \\\nonumber{}numcon &  & \\\nonumber{}numvar &  & \\\nonumber{}numter &  & \\\nonumber{}c_{1} &  & \\\nonumber{}c_{2} &  & \\\nonumber{}\vdots  &  & \\\nonumber{}c_{{numter}} &  & \\\nonumber{}i_{1} &  & \\\nonumber{}i_{2} &  & \\\nonumber{}\vdots  &  & \\\nonumber{}i_{{numter}} &  & \\\nonumber{}t_{1} & j_{{1}} & a_{{t_{1},j_{1}}}\\\nonumber{}t_{2} & j_{{2}} & a_{{t_{2},j_{2}}}\\\nonumber{}\vdots  & \vdots  & \vdots\end{array}\end{displaymath}

The first line is an optional comment line. In general everything occurring after a * is considered a comment. Lines 2 to 4 inclusive define the number of constraints (m), the number of variables (n), and the number of terms T in the problem. Then follow three sections containing the problem data.

The first section

\begin{displaymath}\nonumber{}\begin{array}{ll}\nonumber{}c_{1} & \\\nonumber{}c_{2} & \\\nonumber{}\vdots  & \\\nonumber{}c_{{numter}} &\end{array}\end{displaymath}

lists the coefficients $c_{t}$ of each term t in their natural order.

The second section

\begin{displaymath}\nonumber{}\begin{array}{ll}\nonumber{}i_{1} & \\\nonumber{}i_{2} & \\\nonumber{}\vdots  & \\\nonumber{}i_{{numter}} &\end{array}\end{displaymath}

specifies to which constraint each term belongs. Hence, for instance $i_{2}=5$ means that term number 2 belongs to constraint 5, while $i_{t}=0$ means that term number t belongs to the objective.

The third section

\begin{displaymath}\nonumber{}\begin{array}{lll}\nonumber{}t_{1} & j_{{1}} & a_{{t_{1},j_{1}}}\\\nonumber{}t_{2} & j_{{2}} & a_{{t_{2},j_{2}}}\\\nonumber{}\vdots  & \vdots  & \vdots\end{array}\end{displaymath}

defines the non-zero $a_{t,j}$ values. For instance the entry

\begin{displaymath}\nonumber{}\begin{array}{lll}\nonumber{}1 & 3 & 3.3\end{array}\end{displaymath}

implies that $a_{t,j}=3.3$ for t=1 and j=3.

Please note that each $a_{t,j}$ should be specified only once.

6.2.4. Choosing primal or dual form

One can choose to solve the exponential optimization problem directly in the primal form (6.2.2) or in the dual form. By default mskexpopt solves the problem in the dual form, since this is usually more efficient. The command line option

-primal 

chooses the primal form.
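
For example, a call of the form

mskexpopt expopt1.eo -primal

(assuming the option may be given after the input file name) would solve the example data file from Section 6.2.5 using the primal formulation.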

6.2.5. An example

Consider the problem:

\begin{math}\nonumber{}\begin{array}{lccccl}\nonumber{}\mbox{minimize} & 40e^{{-x_{1}-1/2x_{2}-x_{3}}}+20e^{{x_{1}+x_{3}}}+40e^{{x_{1}+x_{2}+x_{3}}} &  & \\\nonumber{}\mbox{subject to} & \frac{1}{3}e^{{-2x_{1}-2x_{2}}}+\frac{4}{3}e^{{1/2x_{2}-x_{3}}} & \leq{} & 1.\end{array}\end{math} (6.2.3)

This small problem can be specified as follows using the input format:

* File : expopt1.eo

1   * numcon
3   * numvar
5   * numter

* Coefficients of terms

40           
20
40
0.3333333
1.3333333

* For each term, the index of the
* constraint to which the term belongs

0     
0
0
1
1

* Section defining a_kj

0 0 -1
0 1 -0.5
0 2 -1
1 0 1.0
1 2 1.0
2 0 1.0
2 1 1.0
2 2 1.0
3 0 -2
3 1 -2
4 1 0.5
4 2 -1.0

Using the program mskexpopt included in the MOSEK distribution the example can be solved. Indeed the command line

mskexpopt expopt1.eo

will produce the solution file expopt1.sol shown below.

PROBLEM STATUS      : PRIMAL_AND_DUAL_FEASIBLE
SOLUTION STATUS     : OPTIMAL
PRIMAL OBJECTIVE    : 1.331371e+02

VARIABLES
INDEX   ACTIVITY
1       6.931471e-01
2       -6.931472e-01
3       3.465736e-01

6.2.6. Solving from your C code

The C source code for solving an exponential optimization problem is included in the MOSEK distribution. The relevant source code consists of the files:

expopt.h:

Defines prototypes for the functions:

MSK_expoptread:

Reads a problem from a file.

MSK_expoptsetup:

Sets up a problem. The function takes the arguments:

  • expopttask: A MOSEK task structure.
  • solveform: If 0, then the optimizer will choose whether the problem is solved on primal or dual form. If -1 the primal form is used and if 1 the dual form.
  • numcon: Number of constraints.
  • numvar: Number of variables.
  • numter: Number of terms T.
  • *subi: Array of length numter defining which constraint a term belongs to or zero for the objective.
  • *c: Array of length numter containing coefficients for the terms.
  • numanz: Length of subk, subj, and akj.
  • *subk: Term indexes.
  • *subj: Variable indexes.
  • *akj: akj[i] is the coefficient of variable subj[i] in term subk[i], i.e.

    \begin{displaymath}\nonumber{}a_{{\mathtt{subk}[i],\mathtt{subj}[i]}}=\mathtt{akj}[i].\end{displaymath}
  • *expopthnd: Data structure containing nonlinear information.
MSK_expoptimize:

Solves the problem and returns the problem status and the optimal primal solution.

MSK_expoptfree:

Frees data structures allocated by MSK_expoptsetup.

expopt.c:

Implements the functions specified in expopt.h.

mskexpopt.c:

A command line interface.

As a demonstration of the interface a C program that solves (6.2.3) is included below.

/* Copyright: Copyright (c) 1998-2011 MOSEK ApS, Denmark. All rights reserved. File : tstexpopt.c Purpose : To demonstrate a simple interface for exponential optimization. */ #include <string.h> #include "expopt.h" void MSKAPI printcb(void* handle,char str[]) { printf("%s",str); } int main (int argc, char **argv) { int r = MSK_RES_OK, numcon = 1, numvar = 3 , numter = 5; int subi[] = {0,0,0,1,1}; int subk[] = {0,0,0,1,1,2,2,2,3,3,4,4}; double c[] = {40.0,20.0,40.0,0.333333,1.333333}; int subj[] = {0,1,2,0,2,0,1,2,0,1,1,2}; double akj[] = {-1,-0.5,-1.0,1.0,1.0,1.0,1.0,1.0,-2.0,-2.0,0.5,-1.0}; int numanz = 12; double objval; double xx[3]; double y[5]; MSKenv_t env; MSKprostae prosta; MSKsolstae solsta; MSKtask_t expopttask; expopthand_t expopthnd = NULL; /* Pointer to data structure that holds nonlinear information */ if (r == MSK_RES_OK) r = MSK_makeenv ( &env, NULL, NULL, NULL, NULL); if (r == MSK_RES_OK) r = MSK_linkfunctoenvstream(env,MSK_STREAM_LOG,NULL,printcb); if (r == MSK_RES_OK) r = MSK_initenv(env); if (r == MSK_RES_OK) MSK_makeemptytask(env,&expopttask); if (r == MSK_RES_OK) r = MSK_linkfunctotaskstream(expopttask,MSK_STREAM_LOG,NULL,printcb); if (r == MSK_RES_OK) { /* Initialize expopttask with problem data */ r = MSK_expoptsetup(expopttask, 1, /* Solve the dual formulation */ numcon, numvar, numter, subi, c, subk, subj, akj, numanz, &expopthnd /* Pointer to data structure holding nonlinear data */ ); } /* Any parameter can now be changed with standard mosek function calls */ if (r == MSK_RES_OK) r = MSK_putintparam(expopttask,MSK_IPAR_INTPNT_MAX_ITERATIONS,200); /* Optimize, xx holds the primal optimal solution, y holds solution to the dual problem if the dual formulation is used */ if (r == MSK_RES_OK) r = MSK_expoptimize(expopttask, &prosta, &solsta, &objval, xx, y, &expopthnd); /* Free data allocated by expoptsetup */ if (expopthnd) MSK_expoptfree(expopttask, &expopthnd); MSK_deletetask(&expopttask); MSK_deleteenv(&env); }
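
In the listing, each triple (subk[i], subj[i], akj[i]) supplies one exponent coefficient $a_{t,j}$. For example, the first term of (6.2.3), $40e^{-x_{1}-1/2x_{2}-x_{3}}$, is described by subi[0]=0 (the term belongs to the objective), c[0]=40.0 and the three triples (0,0,-1.0), (0,1,-0.5) and (0,2,-1.0), where variables are indexed from 0.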

6.2.7. A warning about exponential optimization problems

Exponential optimization problems may in some cases have a finite optimal objective value that is only attained by a solution containing infinite values. Consider the simple example

\begin{displaymath}\nonumber{}\begin{array}{rl}\nonumber{}\mathrm{min} & e^{x}\\\nonumber{}\mathrm{s.t.} & x\in{}\mathbb{R}{},\end{array}\end{displaymath}

which has the optimal objective value 0 at $x=-\infty$. Similar problems can occur in constraints.

Such a solution can not in general be obtained by numerical methods, which means that MOSEK will act unpredictably in these situations — possibly failing to find a meaningful solution or simply stalling.

6.3. General convex optimization

MOSEK provides an interface for general convex optimization which is discussed in this section.

6.3.1. A warning

Using the general convex optimization interface in MOSEK is complicated. It is recommended to use the conic solver, the quadratic solver or the scopt interface whenever possible. Alternatively, GAMS or AMPL with MOSEK as the solver is well suited for general convex optimization problems.

6.3.2. The problem

A general nonlinear convex optimization problem is to minimize or maximize an objective function of the form

\begin{math}\nonumber{}\displaystyle f(x)+\frac{1}{2}\sum _{{i=0}}^{{n-1}}\sum _{{j=0}}^{{n-1}}q_{{i,j}}^{o}x_{i}x_{j}+\sum _{{j=0}}^{{n-1}}c_{j}x_{j}+c^{f}\end{math} (6.3.1)

subject to the functional constraints

\begin{math}\nonumber{}\displaystyle l_{k}^{c}\leq{}g_{k}(x)+\frac{1}{2}\sum _{{i=0}}^{{n-1}}\sum _{{j=0}}^{{n-1}}q_{{i,j}}^{k}x_{i}x_{j}+\sum _{{j=0}}^{{n-1}}a_{{k,j}}x_{j}\leq{}u_{k}^{c},~k=0,\ldots ,m-1,\end{math} (6.3.2)

and the bounds

\begin{math}\nonumber{}\displaystyle l_{j}^{x}\leq{}x_{j}\leq{}u_{j}^{x},~j=0,\ldots ,n-1.\end{math} (6.3.3)

Please note that this problem is a generalization of linear and quadratic optimization. This implies that the parameters c, A, Q, the bounds and so forth denote the same as in the case of linear and quadratic optimization. All linear and quadratic terms should be inputted to MOSEK as described for these problem classes. The general convex part of the problem is defined by the functions f(x) and $g_{k}(x)$, which must be general nonlinear, twice differentiable functions.

6.3.3. Assumptions about a nonlinear optimization problem

MOSEK makes two assumptions about the optimization problem.

The first assumption is that all functions are at least twice differentiable on their domain. More precisely, f(x) and g(x) must be at least twice differentiable for all x so that

\begin{displaymath}\nonumber{}l^{x}<x<u^{x}.\end{displaymath}

The second assumption is that

\begin{math}\nonumber{}f(x)+\frac{1}{2}x^{T}Q^{o}x\end{math} (6.3.4)

must be a convex function if the objective is minimized. Otherwise if the objective is maximized it must be a concave function. Moreover,

\begin{math}\nonumber{}g_{k}(x)+\frac{1}{2}x^{T}Q^{k}x\end{math} (6.3.5)

must be a convex function if

\begin{displaymath}\nonumber{}u_{k}^{c}<\infty\end{displaymath}

and a concave function if

\begin{displaymath}\nonumber{}l_{k}^{c}>-\infty .\end{displaymath}

Note in particular that nonlinear equalities are not allowed.

If these two assumptions are not satisfied, then it cannot be guaranteed that MOSEK produces correct results or works at all.

6.3.4. Specifying general convex terms

MOSEK receives information about the general convex terms via two call-back functions implemented by the user:

  • MSK_nlgetspfunc: For passing structural information about f and g to MOSEK.
  • MSK_nlgetvafunc: For passing numerical information about f and g to MOSEK.

The call-back functions are passed to MOSEK with the function MSK_putnlfunc.

For an example of using the general convex framework see Section 6.4.

6.4. Dual geometric optimization

Dual geometric optimization is a special class of nonlinear optimization problems involving a nonlinear and nonseparable objective function. In this section we will show how to solve dual geometric optimization problems using MOSEK.

6.4.1. The problem

Consider the dual geometric optimization problem

\begin{math}\nonumber{}\begin{array}{lccl}\nonumber{}\mbox{maximize} & f(x) &  & \\\nonumber{}\mbox{subject to} & Ax & = & b,\\\nonumber{} & x\geq{}0, &  &\end{array}\end{math} (6.4.1)

where $A\in\mathbb{R}^{m\times n}$ and all other quantities have conforming dimensions. Let t be an integer and p be a vector of t+1 integers satisfying the conditions

\begin{displaymath}\nonumber{}\begin{array}{rcll}\nonumber{}p_{0} & = & 0, & \\\nonumber{}p_{i} & < & p_{{i+1}}, & i=1,\ldots ,t,\\\nonumber{}p_{{t}} & = & n. &\end{array}\end{displaymath}

Then f can be stated as follows

\begin{displaymath}\nonumber{}f(x)=\sum \limits _{{j=0}}^{{n-1}}x_{j}\ln \left(\frac{v_{j}}{x_{j}}\right)+\sum \limits _{{i=1}}^{t}\left(\sum \limits _{{j=p_{i}}}^{{p_{{i+1}}-1}}x_{j}\right)\ln \left(\sum \limits _{{j=p_{i}}}^{{p_{{i+1}}-1}}x_{j}\right)\end{displaymath}

where $v\in\mathbb{R}^{n}$ is a vector of positive values.

Given these assumptions, it can be proven that f is a concave function and therefore the dual geometric optimization problem can be solved using MOSEK.

For a thorough discussion of geometric optimization see [18, pp. 531-538].

We will introduce the following definitions:

\begin{displaymath}\nonumber{}x^{i}:=\left[\begin{array}{c}\nonumber{}x_{{p_{i}}}\\\nonumber{}x_{{p_{i}+1}}\\\nonumber{}\vdots \\\nonumber{}x_{{p_{{i+1}}-1}}\end{array}\right],\quad{}X^{i}:=\mbox{diag}(x^{i}),\quad{}\mbox{and}\quad{}e^{i}:=\left[\begin{array}{c}\nonumber{}1\\\nonumber{}\vdots \\\nonumber{}1\end{array}\right]\in{}\mathbb{R}^{{p_{{i+1}}-p_{i}}}.\end{displaymath}

which make it possible to state f on the form

\begin{displaymath}\nonumber{}f(x)=\sum \limits _{{j=0}}^{{n-1}}x_{j}\ln \left(\frac{v_{j}}{x_{j}}\right)+\sum \limits _{{i=1}}^{t}((e^{i})^{T}x^{i})\ln ((e^{i})^{T}x^{i}).\end{displaymath}

Furthermore, we have that

\begin{displaymath}\nonumber{}\nabla f(x)=\left[\begin{array}{c}\nonumber{}\ln (v_{0})-1-\ln (x_{0})\\\nonumber{}\vdots \\\nonumber{}\ln (v_{j})-1-\ln (x_{j})\\\nonumber{}\vdots \\\nonumber{}\ln (v_{{n-1}})-1-\ln (x_{{n-1}})\end{array}\right]+\left[\begin{array}{c}\nonumber{}0e^{0}\\\nonumber{}(1+\ln ((e^{1})^{T}x^{1}))e^{1}\\\nonumber{}\vdots \\\nonumber{}(1+\ln ((e^{i})^{T}x^{i}))e^{i}\\\nonumber{}\vdots \\\nonumber{}(1+\ln ((e^{t})^{T}x^{t}))e^{t}\end{array}\right]\end{displaymath}

and

\begin{displaymath}\nonumber{}\begin{array}{c}\nonumber{}\nabla ^{2}f(x)=\\\nonumber{}\left[\begin{array}{ccccc}\nonumber{}-(X^{0})^{{-1}} & 0 & 0 & \cdots  & 0\\\nonumber{}0 & \frac{e^{1} (e^{1})^{T}}{(e^{1})^{T} x^{1}}-(X^{1})^{{-1}} & 0 & \cdots  & 0\\\nonumber{}0 & 0 & \ddots  & \cdots  & 0\\\nonumber{}\vdots  & \vdots  & \vdots  & \ddots  & \vdots \\\nonumber{}0 & 0 & 0 & 0 & \frac{e^{t} (e^{t})^{T}}{(e^{t})^{T} x^{t}}-(X^{t})^{{-1}}\end{array}\right].\end{array}\end{displaymath}
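
These expressions follow from elementary calculus: for a single separable term we have $\frac{d}{dx_{j}}\left[x_{j}\ln(v_{j}/x_{j})\right]=\ln(v_{j})-1-\ln(x_{j})$ and $\frac{d^{2}}{dx_{j}^{2}}\left[x_{j}\ln(v_{j}/x_{j})\right]=-\frac{1}{x_{j}}$, which account for the first vector in $\nabla f(x)$ and for the $-(X^{i})^{-1}$ blocks, while each coupled term $((e^{i})^{T}x^{i})\ln((e^{i})^{T}x^{i})$ contributes the gradient entries $(1+\ln((e^{i})^{T}x^{i}))e^{i}$ and the rank-one Hessian blocks $\frac{e^{i}(e^{i})^{T}}{(e^{i})^{T}x^{i}}$.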

Please note that the Hessian is a block diagonal matrix and, especially if t is large, it is very sparse; MOSEK will automatically exploit these features to speed up computations. Moreover, the Hessian can be computed cheaply, specifically in

\begin{displaymath}\nonumber{}O\left(\sum \limits _{{i=0}}^{t}(p_{{i+1}}-p_{i})^{2}\right)\end{displaymath}

operations.

6.4.2. A numerical example

In the following we will use the data

\begin{displaymath}\nonumber{}A=\left[\begin{array}{ccccc}\nonumber{}-1 & 1 & 1 & -2 & 0\\\nonumber{}-0.5 & 0 & 1 & -2 & 0.5\\\nonumber{}-1 & 1 & 1 & 0 & -1\\\nonumber{}1 & 1 & 1 & 0 & 0\end{array}\right],\quad{}b=\left[\begin{array}{c}\nonumber{}0\\\nonumber{}0\\\nonumber{}0\\\nonumber{}1\end{array}\right],\quad{}v=\left[\begin{array}{c}\nonumber{}40\\\nonumber{}20\\\nonumber{}40\\\nonumber{}\frac{1}{3}\\\nonumber{}\frac{4}{3}\end{array}\right]\end{displaymath}

and the function f given by

\begin{displaymath}\nonumber{}\begin{array}{rcl}\nonumber{}f(x) & = & \sum \limits _{{j=0}}^{4}x_{j}\ln \left(\frac{v_{j}}{x_{j}}\right)\\\nonumber{} &  & +(x_{3}+x_{4})\ln (x_{3}+x_{4})\end{array}\end{displaymath}

for demonstration purposes.

6.4.3. dgopt: A program for dual geometric optimization

The generic dual geometric optimization problem and a numerical example have been presented and we will now develop a program which can solve the dual geometric optimization problem using the MOSEK API.

6.4.3.1. Data input

The first problem is how to feed the problem data into MOSEK. Since the constraints of the optimization problem are linear, they can be specified fully using an MPS file as in the purely linear case. The MPS file for the numerical data above will look as follows:

NAME
ROWS
 N  obj
 E  c1      
 E  c2      
 E  c3      
 E  c4      
COLUMNS
    x1        obj       0
    x1        c1        -1
    x1        c2        -0.5
    x1        c3        -1
    x1        c4        1
    x2        obj       0
    x2        c1        1
    x2        c3        1
    x2        c4        1
    x3        obj       0
    x3        c1        1
    x3        c2        1
    x3        c3        1
    x3        c4        1
    x4        obj       0
    x4        c1        -2
    x4        c2        -2
    x5        obj       0
    x5        c2        0.5
    x5        c3        -1
RHS
    rhs       c4        1
RANGES
BOUNDS
ENDATA

Moreover, a file specifying f is required; for that purpose we define a file with the following format:

\begin{displaymath}\nonumber{}\begin{array}{c}\nonumber{}t\\\nonumber{}v_{0}\\\nonumber{}v_{1}\\\nonumber{}\vdots \\\nonumber{}v_{{n-1}}\\\nonumber{}p_{1}-p_{0}\\\nonumber{}p_{2}-p_{1}\\\nonumber{}\vdots \\\nonumber{}p_{t}-p_{{t-1}}\end{array}\end{displaymath}

Hence, for the numerical example this file has the format:

2
40.0
20.0
40.0
0.33333333333333
1.33333333333333
3
2

6.4.3.2. Solving the numerical example

The example is solved by executing the command line

mskdgopt examples/data/dgo.mps examples/data/dgo.f

6.4.4. The source code: dgopt

The source code for the dgopt consists of the files:

  • dgopt.h and dgopt.c: Functions for reading and solving the dual geometric optimization problem.
  • mskdgopt.c : The command line interface.

These files are available in the MOSEK distribution in the directory:

mosek/6/tools/examples/c

The basic functionality of dgopt can be gathered by studying the function main in mskdgopt.c. This function first loads the linear part of the problem from an MPS file into the task. Next, the nonlinear part of the problem is read from a file with the function MSK_dgoread. Finally, the nonlinear function is created and inputted with MSK_dgosetup and the problem is solved. The solution is written to the file dgopt.sol.

The following functions in dgopt.c are used to set up the information about the evaluation of the nonlinear objective function:

MSK_dgoread

The purpose of this function is to read data from a file which specifies the nonlinear function f in the objective.

MSK_dgosetup

This function creates the problem in the task. The information passed to the function is stored in a data structure called nlhandt, defined in the program. This structure is later passed to the functions gostruc and goeval which are used to compute the gradient and the Hessian of f.

gostruc

This function is a call-back function used by MOSEK. The function reports structural information about f such as the number of non-zeros in the Hessian and the sparsity pattern of the Hessian.

goeval

This function is a call-back function used by MOSEK. It reports numerical information about f such as the objective value and gradient for a particular x value.

6.5. Linear network flow problems

Network flow problems are a special class of linear optimization problems with many applications. A network consists of a set of points connected by a set of lines. Usually the points and lines are called nodes and arcs. Arcs may have a direction on them; the network is directed if all arcs are directed. The class of network flow problems is defined as follows.
Let $G=(\mathcal{N},\mathcal{A})$ be a directed network with nodes $\mathcal{N}$ and arcs $\mathcal{A}$. Associated with every arc $(i,j)\in\mathcal{A}$ is a cost $c_{ij}$ and a capacity $u^{x}_{ij}$. Moreover, associated with each node $i\in\mathcal{N}$ in the network is a lower limit $l^{c}_{i}$ and an upper limit $u^{c}_{i}$ on the demand (supply) of the node. The minimum cost network flow problem can be stated as follows:

\begin{math}\nonumber{}\begin{array}{lcccccl}\nonumber{}\mbox{minimize} &  &  & \sum \limits _{{(i,j)\in{}\mathcal{A}}}c_{{ij}}x_{{ij}} &  &  & \\\nonumber{}\mbox{subject to} & l^{c}_{{i}} & \leq{} & \sum \limits _{{\lbrace{}j:(i,j)\in{}\mathcal{A}\rbrace{}}}x_{{ij}}-\sum \limits _{{\lbrace{}j:(j,i)\in{}\mathcal{A}\rbrace{}}}x_{{ji}} & \leq{} & u^{c}_{{i}} & \forall i\in{}\mathcal{N},\\\nonumber{} & l^{x}_{{ij}} & \leq{} & x_{{ij}} & \leq{} & u^{x}_{{ij}} & \forall (i,j)\in{}\mathcal{A}.\end{array}\end{math} (6.5.1)

A classical example of a network flow problem is the transportation problem where the objective is to distribute goods from warehouses to customers at lowest possible total cost, see [7] for a detailed application reference.

The above graph formulation of the network flow problem implies a special structural property: each variable appears in exactly two constraints, with a coefficient of either $1$ or $-1$.

It is well-known that problems with network flow structure can be solved efficiently with a specialized version of the simplex method. MOSEK includes such a network simplex implementation which can be called either directly using MSK_netoptimize or indirectly by letting the standard simplex optimizer extract the embedded network. This section shows how to solve a network problem by a direct call to MSK_netoptimize. For further details on how to exploit embedded network in the standard simplex optimizer, see Section 8.3.1.

6.5.1. A linear network flow problem example

The following is an example of a linear network optimization problem:

\begin{math}\nonumber{}\begin{array}{lccccccccccccl}\nonumber{}\mbox{maximize} & x_{0} &  &  & + & x_{2} & + &  & - & x_{4} & + & x_{5} &  & \\\nonumber{}\mbox{subject to} & -x_{0} &  &  &  &  & + & x_{3} &  &  &  &  & = & 1,\\\nonumber{} &  &  &  &  & x_{2} & - & x_{3} & + & x_{4} & + & x_{5} & = & -2,\\\nonumber{} & x_{0} & - & x_{1} &  &  &  &  & - & x_{4} & - & x_{5} & = & 0,\\\nonumber{} &  &  & x_{1} & - & x_{2} & + &  &  &  &  &  & = & 0,\end{array}\end{math} (6.5.2)

having the bounds $0\leq x_{j}\leq \infty$ for $j=0,\ldots ,5$.

The corresponding graph $G=(\mathcal{N},\mathcal{A})$ is displayed in Figure 6.1.

Figure 6.1: Simple network.

6.5.1.1. Source code

In this section we will show how to solve (6.5.2) with the network optimizer.

The C program included below, which solves this problem, is distributed with MOSEK and can be found in the directory

  mosek\6\tools\examples\c
#include <stdio.h> #include <stdlib.h> #include <string.h> #include "mosek.h" /* Demonstrates a simple use of the network optimizer. Purpose: 1. Specify data for a network. 2. Solve the network problem with the network optimizer. */ /* Network sizes */ #define NUMCON 4 /* Nodes in network */ #define NUMVAR 6 /* Arcs in network */ void MSKAPI printlog(void *ptr, char s[]) { printf("%s",s); } /* printlog */ /* Main function */ int main(int argc, char *argv[]) { MSKrescodee r; MSKenv_t env; MSKtask_t dummytask=NULL; MSKidxt from[] = {0,2,3,1,1,1}, to[] = {2,3,1,0,2,2}; MSKrealt cc[] = {0.0,0.0}, cx[] = {1.0,0.0,1.0,0.0,-1.0,1.0}, blc[] = {1.0,1.0,-2.0,0.0}, buc[] = {1.0,1.0,-2.0,0.0}, blx[] = {0.0,0.0,0.0,0.0,0.0,0.0}, bux[] = {MSK_INFINITY,MSK_INFINITY,MSK_INFINITY, MSK_INFINITY,MSK_INFINITY,MSK_INFINITY}, xc[NUMCON],xx[NUMVAR],y[NUMCON],slc[NUMCON], suc[NUMCON],slx[NUMVAR],sux[NUMVAR]; MSKboundkeye bkc[] = {MSK_BK_FX,MSK_BK_FX,MSK_BK_FX,MSK_BK_FX}, bkx[] = {MSK_BK_LO,MSK_BK_LO,MSK_BK_LO,MSK_BK_LO, MSK_BK_LO,MSK_BK_LO}; MSKstakeye skc[NUMCON],skx[NUMVAR]; MSKprostae prosta; MSKsolstae solsta; int i,j; r = MSK_makeenv(&env,NULL,NULL,NULL,NULL); if ( r==MSK_RES_OK ) MSK_linkfunctoenvstream(env,MSK_STREAM_LOG,NULL,printlog); if ( r==MSK_RES_OK ) r = MSK_initenv(env); if ( r==MSK_RES_OK ) { /* Create an optimization task. Will be used as a dummy task in MSK_netoptimize, parameters can be set here */ r = MSK_maketask(env,0,0,&dummytask); } if ( r==MSK_RES_OK ) { MSK_linkfunctotaskstream(dummytask, MSK_STREAM_LOG, NULL, printlog); } if ( r==MSK_RES_OK ) { r = MSK_putobjsense(dummytask, MSK_OBJECTIVE_SENSE_MAXIMIZE); } if ( r==MSK_RES_OK ) { /* Solve network problem with a direct call into the network optimizer */ r = MSK_netoptimize(dummytask, NUMCON, NUMVAR, cc, cx, bkc, blc, buc, bkx, blx, bux, from, to, &prosta, &solsta, 0, skc, skx, xc, xx, y, slc, suc, slx, sux); if ( solsta == MSK_SOL_STA_OPTIMAL ) { printf("Network problem is optimal\n"); printf("Primal solution is :\n"); for( i = 0; i < NUMCON; ++i ) printf("xc[%d] = %-16.10e\n",i,xc[i]); for( j = 0; j < NUMVAR; ++j ) printf("Arc(%d,%d) -> xx[%d] = %-16.10e\n",from[j],to[j],j,xx[j]); } else if ( solsta == MSK_SOL_STA_PRIM_INFEAS_CER ) { printf("Network problem is primal infeasible\n"); } else if ( solsta == MSK_SOL_STA_DUAL_INFEAS_CER ) { printf("Network problem is dual infeasible\n"); } else { printf("Network problem solsta : %d\n",solsta); } } MSK_deletetask(&dummytask); MSK_deleteenv(&env); }

6.5.1.2. Example code comments

There are a few important differences between the linear network optimization example in section 6.5.1.1 and the general linear optimization problem in section 5.2.

  • MOSEK allows network problems to be inputted and optimized with a single call to the function MSK_netoptimize. This is more efficient and uses less memory than a call to the standard optimizer.
  • Since we know that each column of the matrix A has exactly two non-zeros, it can be stored in two arrays, from and to, specifying the origin and destination of the arcs (variables); see the graph in Figure 6.1.
  • The solution is written directly to skc, skx, xc, xx, y, slc, suc, slx and sux by MSK_netoptimize.

6.6. Embedded network flow problems

Often problems contain a large part with network structure as well as some non-network constraints or variables; such problems are said to have embedded network structure.

A linear optimization problem with embedded network structure can be written as:

\begin{math}\nonumber{}\begin{array}{lccccl}\nonumber{}\mbox{minimize} &  &  & c^{T}x+c^{f} &  & \\\nonumber{}\mbox{subject to} & l^{c}_{N} & \leq{} & Nx & \leq{} & u^{c}_{N},\\\nonumber{} & l^{c} & \leq{} & Ax & \leq{} & u^{c},\\\nonumber{} & l^{x} & \leq{} & x & \leq{} & u^{x},\end{array}\end{math} (6.6.1)

where the constraints

\begin{math}\nonumber{}\begin{array}{lcccl}\nonumber{}l^{c}_{N} & \leq{} & Nx & \leq{} & u^{c}_{N}\end{array}\end{math} (6.6.2)

define a network as explained in Section 6.5, and the constraints

\begin{math}\nonumber{}\begin{array}{lcccl}\nonumber{}l^{c} & \leq{} & Ax & \leq{} & u^{c}\end{array}\end{math} (6.6.3)

define the general non-network linear constraints. As an example, consider the small linear optimization problem

\begin{math}\nonumber{}\begin{array}{lccccccccccccl}\nonumber{}\mbox{maximize} & -x_{0} &  &  & + & x_{2} &  &  & - & x_{4} & + & x_{5} &  & \\\nonumber{}\mbox{subject to} & 0.50x_{0} &  &  &  &  & + & 0.50x_{3} &  &  &  &  & = & 0.5,\\\nonumber{} &  &  &  &  & 0.50x_{2} & - & 0.50x_{3} & + & 0.50x_{4} & + & 0.50x_{5} & = & -1,\\\nonumber{} & -0.25x_{0} & + & -2.50x_{1} & + &  &  &  & - & 0.25x_{4} & - & 0.25x_{5} & = & 0,\\\nonumber{} &  &  & 2.50x_{1} & - & 0.25x_{2} &  &  &  &  &  &  & = & 0,\\\nonumber{} &  & - & x_{1} & + & x_{2} & + & x_{3} &  &  & + & x_{5} & \geq{} & 6,\end{array}\end{math} (6.6.4)

with the bounds

\begin{displaymath}\nonumber{}-\infty \leq{}x_{0}\leq{}0,0\leq{}x_{j}\leq{}\infty \mbox{ for }j=1\ldots 5.\end{displaymath}

Recalling the structural property of network flow problems from Section 6.5.1, each variable should appear in exactly two constraints with a coefficient of either $1$ or $-1$.

At first glance the problem does not seem to contain any network structure, but if we scale constraints 1-4 by 2.0, 2.0, 4.0 and 4.0, respectively, and columns 1-2 by -1.0 and 0.1, we get the following problem (for instance, scaling the first constraint by 2.0 and the column of $x_{0}$ by -1.0 turns $0.50x_{0}+0.50x_{3}=0.5$ into $-x_{0}+x_{3}=1$):

\begin{math}\nonumber{}\begin{array}{lccccccccccccl}\nonumber{}\mbox{maximize} & x_{0} &  &  & + & x_{2} & + &  & - & x_{4} & + & x_{5} &  & \\\nonumber{}\mbox{subject to} & -x_{0} &  &  &  &  & + & x_{3} &  &  &  &  & = & 1,\\\nonumber{} &  &  &  &  & x_{2} & - & x_{3} & + & x_{4} & + & x_{5} & = & -2,\\\nonumber{} & x_{0} & - & x_{1} &  &  &  &  & - & x_{4} & - & x_{5} & = & 0,\\\nonumber{} &  &  & x_{1} & - & x_{2} & + &  &  &  &  &  & = & 0,\\\nonumber{} &  &  & x_{1} & + & x_{2} & + & x_{3} &  &  & + & x_{5} & \geq{} & 6,\end{array}\end{math} (6.6.5)

with the bounds

\begin{displaymath}\nonumber{}0\leq{}x_{j}\leq{}\infty \mbox{ for }j=0\ldots 5.\end{displaymath}

This corresponds to the network flow problem in Section 6.5.1 plus one extra non-network constraint. We cannot use the network optimizer directly on the above problem since the last constraint destroys the network property. Finding the largest possible network structure in a linear optimization problem is computationally difficult, so MOSEK offers a heuristic, MSK_netextraction, that attempts to find suitable scaling factors maximizing the number of network constraints and variables. Assuming that the embedded network structure is dominant and the problem has few non-network constraints, we can exploit this structure and potentially speed up the optimization. Since the network constraints can be handled efficiently by the specialized network optimizer, the following idea is used: first solve the (scaled) embedded network problem with the network optimizer, and then use the resulting basic solution as a hot-start for the simplex optimizer on the original problem.

An embedded network can be exploited by this scheme in two ways:

  • By extracting the embedded network with MSK_netextraction and solving it with a direct call to MSK_netoptimize, after which the network solution is converted into a hot-start for the simplex optimizer.
  • By letting the standard simplex optimizer extract and exploit the embedded network automatically.

The first method is more difficult than the second, but also offers much more flexibility. The first method is demonstrated by the code example in Section 6.6.1 below. For further details on exploiting embedded network structure in the standard simplex optimizer, see Section 8.3.1.

6.6.1. Example: Exploit embedded network flow structure in the simplex optimizer

MOSEK is distributed with some network examples which can be found in the directory

  mosek\6\tools\examples 

The example given in this section demonstrates how to extract and optimize embedded network structure in an arbitrary linear optimization problem. The following idea is used:

  • Read an arbitrary linear optimization problem into a task.
  • Use the MSK_netextraction function to extract embedded network structure.
  • Optimize the network problem using the MSK_netoptimize function.
#include <stdio.h> #include <stdlib.h> #include <string.h> #include "mosek.h" /* Demonstrates a simple use of network structure in a model. Purpose: 1. Read an optimization problem from an user specified MPS file. 2. Extract the embedded network. 3. Solve the embedded network with the network optimizer. Note that the general simplex optimizer called though MSK_optimize can also extract embedded network and solve it with the network optimizer. The direct call to the network optimizer, which is demonstrated here, is offered as an option to save memory and overhead when solving either many or large network problems. */ /* Helper functions */ void addext(char filename[], char extension[]) { char *cptr; cptr = strrchr(filename,'.'); if ( cptr && cptr[1]!='\\' ) strcpy(cptr+1,extension); else { strcat(filename,"."); strcat(filename,extension); } } /* addext */ void MSKAPI printlog(void *ptr, char s[]) { printf("%s",s); } /* printlog */ /* Main function */ int main(int argc, char *argv[]) { MSKrescodee r; MSKenv_t env; MSKtask_t task=NULL,dummytask=NULL; MSKintt numcon=0,numvar=0,numnetcon,numnetvar; MSKidxt *netcon,*netvar,*from,*to; MSKrealt *scalcon,*scalvar,*cc,*cx,*blc,*buc,*blx,*bux, *xc,*xx,*y,*slc,*suc,*slx,*sux; MSKboundkeye *bkc,*bkx; MSKstakeye *skc,*skx; MSKprostae prosta; MSKsolstae solsta; int i,j,k,*rmap,*cmap,hotstart=0; char filename[1024]; if ( argc<2 ) { printf("No input file specified\n"); exit(0); } else printf("Inputfile: %s\n",argv[1]); r = MSK_makeenv(&env,NULL,NULL,NULL,NULL); if ( r==MSK_RES_OK ) MSK_linkfunctoenvstream(env,MSK_STREAM_LOG,NULL,printlog); if ( r==MSK_RES_OK ) r = MSK_initenv(env); if ( r==MSK_RES_OK ) { strcpy(filename,argv[1]); addext(filename,"log"); /* Create an (empty) optimization task. */ r = MSK_maketask(env,0,0,&task); if ( r==MSK_RES_OK ) { MSK_linkfunctotaskstream(task, MSK_STREAM_LOG, NULL, printlog); MSK_linkfiletotaskstream(task, MSK_STREAM_LOG, filename, 0); } if ( r==MSK_RES_OK ) { r = MSK_readdata(task,argv[1]); } } if ( r==MSK_RES_OK ) { r = MSK_getnumcon(task,&numcon); } if ( r==MSK_RES_OK ) { r = MSK_getnumvar(task,&numvar); } if ( r==MSK_RES_OK ) { /* Create an (empty) optimization task. 
Will be used as a dummy task in MSK_netoptimize, parameters can be set here */ r = MSK_maketask(env,0,0,&dummytask); } if ( r==MSK_RES_OK ) { MSK_linkfunctotaskstream(dummytask, MSK_STREAM_LOG, NULL, printlog); MSK_linkfiletotaskstream(dummytask, MSK_STREAM_LOG, filename, 0); } /* Allocate memory for embedded network (maximum sizez => (numcon and numvar) */ if ( r==MSK_RES_OK ) { /* Sizes of the embedded network structure is unknown use maximum size required by MSK_networkextraction */ rmap = MSK_calloctask(dummytask,numcon,sizeof(int)); cmap = MSK_calloctask(dummytask,numvar,sizeof(int)); netcon = MSK_calloctask(dummytask,numcon,sizeof(MSKidxt)); netvar = MSK_calloctask(dummytask,numvar,sizeof(MSKidxt)); from = MSK_calloctask(dummytask,numvar,sizeof(MSKidxt)); to = MSK_calloctask(dummytask,numvar,sizeof(MSKidxt)); scalcon = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); scalvar = MSK_calloctask(dummytask,numvar,sizeof(MSKrealt)); cc = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); cx = MSK_calloctask(dummytask,numvar,sizeof(MSKrealt)); blc = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); buc = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); blx = MSK_calloctask(dummytask,numvar,sizeof(MSKrealt)); bux = MSK_calloctask(dummytask,numvar,sizeof(MSKrealt)); xx = MSK_calloctask(dummytask,numvar,sizeof(MSKrealt)); xc = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); y = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); slc = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); suc = MSK_calloctask(dummytask,numcon,sizeof(MSKrealt)); slx = MSK_calloctask(dummytask,numvar,sizeof(MSKrealt)); sux = MSK_calloctask(dummytask,numvar,sizeof(MSKrealt)); bkc = MSK_calloctask(dummytask,numcon,sizeof(MSKboundkeye)); bkx = MSK_calloctask(dummytask,numvar,sizeof(MSKboundkeye)); skc = MSK_calloctask(dummytask,numvar,sizeof(MSKstakeye)); skx = MSK_calloctask(dummytask,numvar,sizeof(MSKstakeye)); if( !( rmap && cmap && netcon && netvar && from && to && scalcon && scalvar && cc && cx && blc && buc && blx && bux && xx && xc && y && slc && suc && slx && sux && bkc && bkx && skc && skx ) ) { r = MSK_RES_ERR_SPACE; } else { /* We just use zero cost on slacks */ for( i = 0; i < numcon; ++i ) cc[i] = 0.0; } } if ( r==MSK_RES_OK ) { /* Extract embedded network */ r = MSK_netextraction(task, &numnetcon, &numnetvar, netcon, netvar, scalcon, scalvar, cx, bkc, blc, buc, bkx, blx, bux, from, to); MSK_deletetask(&task); } if ( r==MSK_RES_OK ) { /* Solve embedded network with a direct call into the network optimizer */ r = MSK_netoptimize(dummytask, numnetcon, numnetvar, cc, cx, bkc, blc, buc, bkx, blx, bux, from, to, &prosta, &solsta, hotstart, skc, skx, xc, xx, y, slc, suc, slx, sux); if ( solsta == MSK_SOL_STA_OPTIMAL ) { printf("Embedded network problem is optimal\n"); } else if ( solsta == MSK_SOL_STA_PRIM_INFEAS_CER ) { printf("Embedded network problem is primal infeasible\n"); } else if ( solsta == MSK_SOL_STA_DUAL_INFEAS_CER ) { printf("Embedded network problem is dual infeasible\n"); } else { printf("Embedded network problem solsta : %d\n",solsta); } } /* Free allocated memory */ MSK_freetask(dummytask,rmap); MSK_freetask(dummytask,cmap); MSK_freetask(dummytask,netcon); MSK_freetask(dummytask,netvar); MSK_freetask(dummytask,from); MSK_freetask(dummytask,to); MSK_freetask(dummytask,scalcon); MSK_freetask(dummytask,scalvar); MSK_freetask(dummytask,cc); MSK_freetask(dummytask,cx); MSK_freetask(dummytask,blc); MSK_freetask(dummytask,buc); MSK_freetask(dummytask,blx); MSK_freetask(dummytask,bux); 
MSK_freetask(dummytask,xx); MSK_freetask(dummytask,xc); MSK_freetask(dummytask,y); MSK_freetask(dummytask,slc); MSK_freetask(dummytask,suc); MSK_freetask(dummytask,slx); MSK_freetask(dummytask,sux); MSK_freetask(dummytask,bkc); MSK_freetask(dummytask,bkx); MSK_freetask(dummytask,skc); MSK_freetask(dummytask,skx); MSK_deletetask(&dummytask); MSK_deleteenv(&env); }

In the above example we only optimize the embedded network problem. We still need to use the network solution found as a hot-start for the simplex optimizer and solve the original problem. This involves unscaling the network solution back to the same units as the original problem. In the example

  mosek\6\tools\examples\c\network3.c 

we show how to convert the network solution into a valid hot-start for the simplex optimizer.

6.7. Solving linear systems involving the basis matrix

A linear optimization problem always has an optimal solution which is also a basic solution. In an optimal basic solution there are exactly m basic variables where m is the number of rows in the constraint matrix A. Define

\begin{displaymath}\nonumber{}B\in{}\mathbb{R}^{{m\times m}}\end{displaymath}

as a matrix consisting of the columns of A corresponding to the basic variables.

The basis matrix B is always non-singular, i.e.

\begin{displaymath}\nonumber{}\det (B)\not=0\end{displaymath}

or equivalently that $B^{-1}$ exists. This implies that the linear systems

\begin{math}\nonumber{}B\bar{x}=w\end{math} (6.7.1)

and

\begin{math}\nonumber{}B^{T}\bar{x}=w\end{math} (6.7.2)

each has a unique solution for all w.

MOSEK provides functions for solving the linear systems (6.7.1) and (6.7.2) for an arbitrary w.

6.7.1. Identifying the basis

To use the solutions to (6.7.1) and (6.7.2) it is important to know how the basis matrix B is constructed.

Internally MOSEK employs the linear optimization problem

\begin{math}\nonumber{}\begin{array}{lccccl}\nonumber{}\mbox{maximize} &  &  & c^{T}x &  & \\\nonumber{}\mbox{subject to} &  &  & Ax-x^{c} & = & 0\\\nonumber{} & l^{x} & \leq{} & x & \leq{} & u^{x},\\\nonumber{} & l^{c} & \leq{} & x^{c} & \leq{} & u^{c}.\end{array}\end{math} (6.7.3)

where

\begin{displaymath}\nonumber{}x^{c}\in{}\mathbb{R}^{{m}}\mbox{ and }x\in{}\mathbb{R}^{n}.\end{displaymath}

The basis matrix is constructed of m columns taken from

\begin{displaymath}\nonumber{}[\begin{array}{cc}\nonumber{}A & -I\end{array}].\end{displaymath}

If variable $x_{j}$ is a basis variable, then the j'th column of A, denoted $A_{:,j}$, will appear in B. Similarly, if $x^{c}_{i}$ is a basis variable, then the i'th column of -I will appear in the basis. The ordering of the basis variables, and therefore the ordering of the columns of B, is arbitrary. The ordering of the basis variables may be retrieved by calling the function:

MSK_initbasissolve (MSKtask_t task,
                    MSKidxt   *basis);

This function initializes data structures for later use and returns the indexes of the basic variables in the array basis. The interpretation of the basis is as follows. If

\begin{displaymath}\nonumber{}\mathtt{basis}[i]<\mathtt{numcon},\end{displaymath}

then the i'th basis variable is $x^{c}_{\mathtt{basis}[i]}$. Moreover, the i'th column in B will be the $\mathtt{basis}[i]$'th column of -I. On the other hand if

\begin{displaymath}\nonumber{}\mathtt{basis}[i]\geq{}\mathtt{numcon},\end{displaymath}

then the i'th basis variable is variable

\begin{displaymath}\nonumber{}x_{{\mathtt{basis}[i]-\mathtt{numcon}}}\end{displaymath}

and the i'th column of B is the column

\begin{displaymath}\nonumber{}A_{{:,(\mathtt{basis}[i]-\mathtt{numcon})}}.\end{displaymath}

For instance if $\mathtt{basis}[0]=3$ and $\mathtt{numcon}=5$, then since $\mathtt{basis}[0]<\mathtt{numcon}$, the first basis variable is $x^{c}_{3}$. Therefore, the first column of B is the fourth column of -I. Similarly, if $\mathtt{basis}[1]=7$, then the second variable in the basis is $x_{\mathtt{basis}[1]-\mathtt{numcon}}=x_{2}$. Hence, the second column of B is identical to $A_{:,2}$.
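
This interpretation can be summarized by the following sketch, which mirrors the loop used in the solvebasis.c example in the next subsection (it assumes that the task has been optimized and that basis has been filled by MSK_initbasissolve):

for ( i=0; i<numcon; ++i )
{
  if ( basis[i]<numcon )
    /* The i'th column of B is column basis[i] of -I. */
    printf("Basis variable no %d is xc%d.\n",i,basis[i]);
  else
    /* The i'th column of B is column basis[i]-numcon of A. */
    printf("Basis variable no %d is x%d.\n",i,basis[i]-numcon);
}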

6.7.2. An example

Consider the linear optimization problem:

\begin{math}\nonumber{}\begin{array}{lccl}\nonumber{}\mbox{minimize} & x_{0}+x_{1} &  & \\\nonumber{}\mbox{subject to} & x_{0}+2x_{1} & \leq{} & 2,\\\nonumber{} & x_{0}+x_{1} & \leq{} & 6,\\\nonumber{} & x_{0},x_{1}\geq{}0. &  &\end{array}\end{math} (6.7.4)

Suppose a call to MSK_initbasissolve returns an array basis so that

basis[0] = 1,
basis[1] = 2.

Then the basis variables are $x^{c}_{1}$ and $x_{0}$ and the corresponding basis matrix B is

\begin{math}\nonumber{}\left[\begin{array}{cc}\nonumber{}0 & 1\\\nonumber{}-1 & 1\end{array}\right].\end{math} (6.7.5)

Please note the ordering of the columns in B.

The following program demonstrates the use of MSK_solvewithbasis.

/* Copyright: Copyright (c) 1998-2011 MOSEK ApS, Denmark. All rights reserved. File: solvebasis.c Purpose: To demonstrate the usage of MSK_solvewithbasis on the problem: maximize x0 + x1 st. x0 + 2.0 x1 <= 2 x0 + x1 <= 6 x0 >= 0, x1>= 0 The problem has the slack variables xc0, xc1 on the constraints and the variables x0 and x1. maximize x0 + x1 st. x0 + 2.0 x1 -xc1 = 2 x0 + x1 -xc2 = 6 x0 >= 0, x1>= 0, xc1 <= 0 , xc2 <= 0 problem data is read from basissolve.lp. Syntax: solvebasis basissolve.lp */ #include "mosek.h" static void MSKAPI printstr(void *handle, char str[]) { printf("%s",str); } /* printstr */ int main(int argc,char **argv) { MSKenv_t env; MSKtask_t task; MSKintt NUMCON = 2; MSKintt NUMVAR = 2; double c[] = {1.0, 1.0}; MSKintt ptrb[] = {0, 2}; MSKintt ptre[] = {2, 3}; MSKidxt asub[] = {0, 1, 0, 1}; double aval[] = {1.0, 1.0, 2.0, 1.0}; MSKboundkeye bkc[] = {MSK_BK_UP, MSK_BK_UP}; double blc[] = {-MSK_INFINITY, -MSK_INFINITY}; double buc[] = {2.0, 6.0}; MSKboundkeye bkx[] = {MSK_BK_LO, MSK_BK_LO}; double blx[] = {0.0, 0.0}; double bux[] = {+MSK_INFINITY, +MSK_INFINITY}; MSKrescodee r = MSK_RES_OK; MSKidxt i,nz; double w1[] = {2.0,6.0}; double w2[] = {1.0,0.0}; MSKidxt sub[] = {0,1}; MSKidxt *basis; if (r == MSK_RES_OK) r = MSK_makeenv(&env,NULL,NULL,NULL,NULL); if ( r==MSK_RES_OK ) MSK_linkfunctoenvstream(env,MSK_STREAM_LOG,NULL,printstr); if ( r==MSK_RES_OK ) r = MSK_initenv(env); if ( r==MSK_RES_OK ) r = MSK_makeemptytask(env,&task); if ( r==MSK_RES_OK ) MSK_linkfunctotaskstream(task,MSK_STREAM_LOG,NULL,printstr); if ( r == MSK_RES_OK) r = MSK_inputdata(task, NUMCON,NUMVAR, NUMCON,NUMVAR, c, 0.0, ptrb, ptre, asub, aval, bkc, blc, buc, bkx, blx, bux); if (r == MSK_RES_OK) r = MSK_putobjsense(task,MSK_OBJECTIVE_SENSE_MAXIMIZE); if (r == MSK_RES_OK) r = MSK_optimize(task); if (r == MSK_RES_OK) basis = MSK_calloctask(task,NUMCON,sizeof(MSKidxt)); if (r == MSK_RES_OK) r = MSK_initbasissolve(task,basis); /* List basis variables corresponding to columns of B */ for (i=0;i<NUMCON && r == MSK_RES_OK;++i) { printf("basis[%d] = %d\n",i,basis[i]); if (basis[sub[i]] < NUMCON) printf ("Basis variable no %d is xc%d.\n",i, basis[i]); else printf ("Basis variable no %d is x%d.\n",i,basis[i] - NUMCON); } nz = 2; /* solve Bx = w1 */ /* sub contains index of non-zeros in w1. On return w1 contains the solution x and sub the index of the non-zeros in x. */ if (r == MSK_RES_OK) r = MSK_solvewithbasis(task,0,&nz,sub,w1); if (r == MSK_RES_OK) { printf("\nSolution to Bx = w1:\n\n"); /* Print solution and b. */ for (i=0;i<nz;++i) { if (basis[sub[i]] < NUMCON) printf ("xc%d = %e\n",basis[sub[i]] , w1[sub[i]] ); else printf ("x%d = %e\n",basis[sub[i]] - NUMCON , w1[sub[i]] ); } } /* Solve B^Tx = c */ nz = 2; sub[0] = 0; sub[1] = 1; if (r == MSK_RES_OK) r = MSK_solvewithbasis(task,1,&nz,sub,w2); if (r == MSK_RES_OK) { printf("\nSolution to B^Tx = w2:\n\n"); /* Print solution and y. */ for (i=0;i<nz;++i) { if (basis[sub[i]] < NUMCON) printf ("xc%d = %e\n",basis[sub[i]] , w2[sub[i]] ); else printf ("x%d = %e\n",basis[sub[i]] - NUMCON , w2[sub[i]] ); } } printf("Return code: %d (0 means no error occurred.)\n",r); return ( r ); }/* main */

In the example above the linear system is solved using the optimal basis for (6.7.4) and the original right-hand side of the problem. Thus the solution to the linear system is the optimal solution to the problem. When running the example program the following output is produced.

basis[0] = 1
Basis variable no 0 is xc1.
basis[1] = 2
Basis variable no 1 is x0.

Solution to Bx = w1:

x0 = 2.000000e+00
xc1 = -4.000000e+00

Solution to B^Tx = w2:

x1 = -1.000000e+00
x0 = 1.000000e+00

Please note that the ordering of the basis variables is

\begin{displaymath}\nonumber{}\left[\begin{array}{c}\nonumber{}x^{c}_{1}\\\nonumber{}x_{0}\end{array}\right]\end{displaymath}

and thus the basis is given by:

\begin{math}\nonumber{}B=\left[\begin{array}{cc}\nonumber{}0 & 1\\\nonumber{}-1 & 1\end{array}\right]\end{math} (6.7.6)

It can be verified that

\begin{displaymath}\nonumber{}\left[\begin{array}{c}\nonumber{}x^{c}_{1}\\\nonumber{}x_{0}\end{array}\right]=\left[\begin{array}{c}\nonumber{}-4\\\nonumber{}2\end{array}\right]\end{displaymath}

is a solution to

\begin{displaymath}\nonumber{}\left[\begin{array}{cc}\nonumber{}0 & 1\\\nonumber{}-1 & 1\end{array}\right]\left[\begin{array}{c}\nonumber{}x^{c}_{1}\\\nonumber{}x_{0}\end{array}\right]=\left[\begin{array}{c}\nonumber{}2\\\nonumber{}6\end{array}\right].\end{displaymath}

6.7.3. Solving arbitrary linear systems

MOSEK can be used to solve an arbitrary square, non-singular linear system

\begin{displaymath}\nonumber{}Ax=b\end{displaymath}

using the MSK_solvewithbasis function without first optimizing a problem, as was done in the previous example. This is done by setting up the A matrix in the task, declaring all variables basic, and calling the MSK_solvewithbasis function with the vector b as input. The solution is returned in the input vector.

Below we demonstrate how to solve the linear system

\begin{math}\nonumber{}\left[\begin{array}{cc}\nonumber{}0 & 1\\\nonumber{}-1 & 1\end{array}\right]\left[\begin{array}{c}\nonumber{}x_{0}\\\nonumber{}x_{1}\end{array}\right]=\left[\begin{array}{c}\nonumber{}b_{1}\\\nonumber{}b_{2}\end{array}\right]\end{math} (6.7.7)

with b=(1,-2) and b=(7,0).

/* Copyright: Copyright (c) 1998-2011 MOSEK ApS, Denmark. All rights reserved.

   File:    solvelinear.c

   Purpose: To demonstrate the usage of MSK_solvewithbasis
            to solve the linear system:

              1.0 x1          = b1
             -1.0 x0 + 1.0 x1 = b2

            with two different right hand sides

              b = (1.0, -2.0)

            and

              b = (7.0, 0.0)
 */

#include <stdio.h>
#include <stdlib.h>

#include "mosek.h"

static void MSKAPI printstr(void *handle,
                            char str[])
{
  printf("%s",str);
} /* printstr */

MSKrescodee put_a(MSKtask_t task,
                  double    *aval,
                  MSKidxt   *asub,
                  MSKidxt   *ptrb,
                  MSKidxt   *ptre,
                  int       numvar,
                  MSKidxt   *basis)
{
  MSKrescodee r = MSK_RES_OK;
  int         i;
  MSKstakeye  *skx = NULL, *skc = NULL;

  skx = (MSKstakeye *) calloc(numvar, sizeof(MSKstakeye));
  if (skx == NULL && numvar)
    r = MSK_RES_ERR_SPACE;

  skc = (MSKstakeye *) calloc(numvar, sizeof(MSKstakeye));
  if (skc == NULL && numvar)
    r = MSK_RES_ERR_SPACE;

  for (i = 0; i < numvar && r == MSK_RES_OK; ++i)
  {
    skx[i] = MSK_SK_BAS;
    skc[i] = MSK_SK_FIX;
  }

  /* Create a coefficient matrix with the data from the linear system. */
  if (r == MSK_RES_OK)
    r = MSK_append(task, MSK_ACC_VAR, numvar);

  if (r == MSK_RES_OK)
    r = MSK_append(task, MSK_ACC_CON, numvar);

  for (i = 0; i < numvar && r == MSK_RES_OK; ++i)
    r = MSK_putavec(task, MSK_ACC_VAR, i, ptre[i]-ptrb[i], asub+ptrb[i], aval+ptrb[i]);

  for (i = 0; i < numvar && r == MSK_RES_OK; ++i)
    r = MSK_putbound(task, MSK_ACC_CON, i, MSK_BK_FX, 0, 0);

  for (i = 0; i < numvar && r == MSK_RES_OK; ++i)
    r = MSK_putbound(task, MSK_ACC_VAR, i, MSK_BK_FR, -MSK_INFINITY, MSK_INFINITY);

  /* Allocate space for the solution and set its status to unknown. */
  if (r == MSK_RES_OK)
    r = MSK_makesolutionstatusunknown(task, MSK_SOL_BAS);

  /* Define a basic solution by specifying status keys for variables
     & constraints. */
  for (i = 0; i < numvar && r == MSK_RES_OK; ++i)
    r = MSK_putsolutioni(task, MSK_ACC_VAR, i, MSK_SOL_BAS,
                         skx[i], 0.0, 0.0, 0.0, 0.0);

  for (i = 0; i < numvar && r == MSK_RES_OK; ++i)
    r = MSK_putsolutioni(task, MSK_ACC_CON, i, MSK_SOL_BAS,
                         skc[i], 0.0, 0.0, 0.0, 0.0);

  if (r == MSK_RES_OK)
    r = MSK_initbasissolve(task, basis);

  free(skx);
  free(skc);

  return ( r );
}

#define NUMCON 2
#define NUMVAR 2

int main(int argc, char **argv)
{
  MSKenv_t    env;
  MSKtask_t   task;
  MSKrescodee r = MSK_RES_OK;
  MSKintt     numvar = NUMVAR;
  MSKintt     numcon = NUMCON; /* we must have numvar == numcon */

  int         i, nz;
  double      aval[] = {-1.0, 1.0, 1.0};
  MSKidxt     asub[] = {1, 0, 1};
  MSKidxt     ptrb[] = {0, 1};
  MSKidxt     ptre[] = {1, 3};

  MSKidxt     bsub[NUMCON];
  double      b[NUMCON];
  MSKidxt     *basis = NULL;

  if (r == MSK_RES_OK)
    r = MSK_makeenv(&env, NULL, NULL, NULL, NULL);

  if ( r==MSK_RES_OK )
    MSK_linkfunctoenvstream(env, MSK_STREAM_LOG, NULL, printstr);

  if ( r==MSK_RES_OK )
    r = MSK_initenv(env);

  if ( r==MSK_RES_OK )
    r = MSK_makeemptytask(env, &task);

  if ( r==MSK_RES_OK )
    MSK_linkfunctotaskstream(task, MSK_STREAM_LOG, NULL, printstr);

  basis = (MSKidxt *) calloc(numcon, sizeof(MSKidxt));
  if ( basis == NULL && numvar )
    r = MSK_RES_ERR_SPACE;

  /* Put the A matrix and factor it. Call this function only once
     for a given task. */
  if (r == MSK_RES_OK)
    r = put_a(task, aval, asub, ptrb, ptre, numvar, basis);

  /* Now solve for the first right-hand side. */
  b[0]    = 1;
  b[1]    = -2;
  bsub[0] = 0;
  bsub[1] = 1;
  nz      = 2;

  if (r == MSK_RES_OK)
    r = MSK_solvewithbasis(task, 0, &nz, bsub, b);

  if (r == MSK_RES_OK)
  {
    printf("\nSolution to Bx = b:\n\n");

    /* Print the solution and show the correspondence to the original
       variables in the problem. */
    for (i = 0; i < nz; ++i)
    {
      if (basis[bsub[i]] < numcon)
        printf("This should never happen\n");
      else
        printf("x%d = %e\n", basis[bsub[i]] - numcon, b[bsub[i]]);
    }
  }

  /* Solve for the second right-hand side. */
  b[0]    = 7;
  bsub[0] = 0;
  nz      = 1;

  if (r == MSK_RES_OK)
    r = MSK_solvewithbasis(task, 0, &nz, bsub, b);

  if (r == MSK_RES_OK)
  {
    printf("\nSolution to Bx = b:\n\n");

    /* Print the solution and show the correspondence to the original
       variables in the problem. */
    for (i = 0; i < nz; ++i)
    {
      if (basis[bsub[i]] < numcon)
        printf("This should never happen\n");
      else
        printf("x%d = %e\n", basis[bsub[i]] - numcon, b[bsub[i]]);
    }
  }

  free(basis);

  return r;
}

The most important step in the above example is the definition of the basic solution using the MSK_putsolutioni function, where we define the status key for each variable. The actual values of the variables are not important and can be selected arbitrarily, so we set them to zero. All variables corresponding to columns in the linear system we want to solve are declared basic (MSK_SK_BAS), while the slack variables for the constraints, which are all non-basic, are fixed at their bound (MSK_SK_FIX).
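
In isolation, that step looks as follows; this is a condensed excerpt of the put_a function above, with error handling stripped.

/* All structural variables are declared basic and all constraints fixed;
   the numerical values passed to MSK_putsolutioni are irrelevant here
   and therefore set to zero. */
for (i = 0; i < numvar; ++i)
  MSK_putsolutioni(task, MSK_ACC_VAR, i, MSK_SOL_BAS,
                   MSK_SK_BAS, 0.0, 0.0, 0.0, 0.0);

for (i = 0; i < numvar; ++i)
  MSK_putsolutioni(task, MSK_ACC_CON, i, MSK_SOL_BAS,
                   MSK_SK_FIX, 0.0, 0.0, 0.0, 0.0);

/* With the status keys in place, MSK_initbasissolve factorizes the basis. */
MSK_initbasissolve(task, basis);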

The program produces the output:

Solution to Bx = b:

x1 = 1.000000e+00
x0 = 3.000000e+00

Solution to Bx = b:

x1 = 7.000000e+00
x0 = 7.000000e+00

and we can verify that \begin{math}(x_{0},x_{1})=(3,1)\end{math} and \begin{math}(x_{0},x_{1})=(7,7)\end{math} are indeed solutions to (6.7.7) for the two right-hand sides.
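
For the first right-hand side, for instance,

\begin{displaymath}\nonumber{}\left[\begin{array}{cc}\nonumber{}0 & 1\\\nonumber{}-1 & 1\end{array}\right]\left[\begin{array}{c}\nonumber{}3\\\nonumber{}1\end{array}\right]=\left[\begin{array}{c}\nonumber{}1\\\nonumber{}-2\end{array}\right],\end{displaymath}

and for the second,

\begin{displaymath}\nonumber{}\left[\begin{array}{cc}\nonumber{}0 & 1\\\nonumber{}-1 & 1\end{array}\right]\left[\begin{array}{c}\nonumber{}7\\\nonumber{}7\end{array}\right]=\left[\begin{array}{c}\nonumber{}7\\\nonumber{}0\end{array}\right].\end{displaymath}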

6.8. The progress call-back

Some of the API function calls, most notably MSK_optimize, may take a long time to complete. Therefore, during the optimization a call-back function is called frequently. From the call-back function it is possible to obtain information about the progress of the optimization, e.g. the current number of iterations and the current objective value, and to terminate the optimizer by returning a non-zero value.
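
As a minimal sketch, a call-back that terminates the interior-point optimizer after a user-chosen number of iterations could look as follows. The call-back signature and the information items are the ones used in the full example in the next subsection; the iteration limit maxiter is a hypothetical user handle introduced only for illustration.

#include <stdio.h>

#include "mosek.h"

/* Minimal progress call-back: report interior-point iterations and
   terminate the optimizer once a user-chosen iteration limit is reached.
   The limit, maxiter, is passed through the user handle. */
static int MSKAPI mincallback(MSKtask_t        task,
                              MSKuserhandle_t  handle,
                              MSKcallbackcodee caller)
{
  int *maxiter = (int *) handle;
  int iter     = 0;

  if ( caller == MSK_CALLBACK_INTPNT )
  {
    MSK_getintinf(task, MSK_IINF_INTPNT_ITER, &iter);
    printf("Interior-point iteration %d\n", iter);

    if ( iter >= maxiter[0] )
      return ( 1 );  /* A non-zero return value terminates the optimizer. */
  }

  return ( 0 );      /* Zero means: continue the optimization. */
} /* mincallback */

/* The call-back is attached to a task with e.g.
     int maxiter = 50;
     MSK_putcallbackfunc(task, mincallback, (void *) &maxiter);
*/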

6.8.1. Source code example

The following source code example demonstrates how the progress call-back function can be used.

/* Copyright: Copyright (c) 1998-2011 MOSEK ApS, Denmark. All rights reserved.

   File:    callback.c

   Purpose: To demonstrate how to use the progress callback.

            Compile and link the file with MOSEK, then it is used as follows:

              callback psim   25fv47.mps
              callback dsim   25fv47.mps
              callback intpnt 25fv47.mps

            The first argument tells which optimizer to use, i.e.
            psim is primal simplex, dsim is dual simplex and
            intpnt is interior-point.
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "mosek.h"

/* Note: This function is declared using MSKAPI,
         so the correct calling convention is employed. */
static int MSKAPI usercallback(MSKtask_t        task,
                               MSKuserhandle_t  handle,
                               MSKcallbackcodee caller)
{
  int    iter;
  double pobj, dobj, opttime = 0.0, stime = 0.0,
         *maxtime = (double *) handle;

  switch ( caller )
  {
    case MSK_CALLBACK_BEGIN_INTPNT:
      printf("Starting interior-point optimizer\n");
      break;
    case MSK_CALLBACK_INTPNT:
      MSK_getintinf(task, MSK_IINF_INTPNT_ITER, &iter);
      MSK_getdouinf(task, MSK_DINF_INTPNT_PRIMAL_OBJ, &pobj);
      MSK_getdouinf(task, MSK_DINF_INTPNT_DUAL_OBJ, &dobj);
      MSK_getdouinf(task, MSK_DINF_INTPNT_TIME, &stime);
      MSK_getdouinf(task, MSK_DINF_OPTIMIZER_TIME, &opttime);

      printf("Iterations: %-3d  Time: %6.2f(%.2f) ",
             iter, opttime, stime);
      printf("Primal obj.: %-18.6e  Dual obj.: %-18.6e\n",
             pobj, dobj);
      break;
    case MSK_CALLBACK_END_INTPNT:
      printf("Interior-point optimizer finished.\n");
      break;
    case MSK_CALLBACK_BEGIN_PRIMAL_SIMPLEX:
      printf("Primal simplex optimizer started.\n");
      break;
    case MSK_CALLBACK_UPDATE_PRIMAL_SIMPLEX:
      MSK_getintinf(task, MSK_IINF_SIM_PRIMAL_ITER, &iter);
      MSK_getdouinf(task, MSK_DINF_SIM_OBJ, &pobj);
      MSK_getdouinf(task, MSK_DINF_SIM_TIME, &stime);
      MSK_getdouinf(task, MSK_DINF_OPTIMIZER_TIME, &opttime);

      printf("Iterations: %-3d  ", iter);
      printf("Elapsed time: %6.2f(%.2f)\n", opttime, stime);
      printf("Obj.: %-18.6e\n", pobj);
      break;
    case MSK_CALLBACK_END_PRIMAL_SIMPLEX:
      printf("Primal simplex optimizer finished.\n");
      break;
    case MSK_CALLBACK_BEGIN_DUAL_SIMPLEX:
      printf("Dual simplex optimizer started.\n");
      break;
    case MSK_CALLBACK_UPDATE_DUAL_SIMPLEX:
      MSK_getintinf(task, MSK_IINF_SIM_DUAL_ITER, &iter);
      MSK_getdouinf(task, MSK_DINF_SIM_OBJ, &pobj);
      MSK_getdouinf(task, MSK_DINF_SIM_TIME, &stime);
      MSK_getdouinf(task, MSK_DINF_OPTIMIZER_TIME, &opttime);

      printf("Iterations: %-3d  ", iter);
      printf("Elapsed time: %6.2f(%.2f)\n", opttime, stime);
      printf("Obj.: %-18.6e\n", pobj);
      break;
    case MSK_CALLBACK_END_DUAL_SIMPLEX:
      printf("Dual simplex optimizer finished.\n");
      break;
    case MSK_CALLBACK_BEGIN_BI:
      printf("Basis identification started.\n");
      break;
    case MSK_CALLBACK_END_BI:
      printf("Basis identification finished.\n");
      break;
  }

  if ( opttime >= maxtime[0] )
  {
    /* MOSEK is spending too much time. Terminate it. */
    return ( 1 );
  }

  return ( 0 );
} /* usercallback */

static void MSKAPI printtxt(void *info,
                            char *buffer)
{
  printf("%s", buffer);
} /* printtxt */

int main(int argc, char *argv[])
{
  double    maxtime;
  int       r;
  MSKenv_t  env;
  MSKtask_t task;

  if ( argc < 3 )
  {
    printf("Too few input arguments. Syntax: callback intpnt myfile.mps\n");
    exit(0);
  }

  /* Create the mosek environment. */
  r = MSK_makeenv(&env, NULL, NULL, NULL, NULL);

  /* Check the return code. */
  if ( r==MSK_RES_OK )
    r = MSK_initenv(env);

  /* Check the return code. */
  if ( r==MSK_RES_OK )
  {
    /* Create an (empty) optimization task. */
    r = MSK_makeemptytask(env, &task);

    if ( r==MSK_RES_OK )
    {
      MSK_linkfunctotaskstream(task, MSK_STREAM_MSG, NULL, printtxt);
      MSK_linkfunctotaskstream(task, MSK_STREAM_ERR, NULL, printtxt);
    }

    /* Specifies that data should be read from the file argv[2]. */
    if ( r==MSK_RES_OK )
      r = MSK_readdata(task, argv[2]);

    if ( r==MSK_RES_OK )
    {
      if ( 0==strcmp(argv[1],"psim") )
        MSK_putintparam(task, MSK_IPAR_OPTIMIZER, MSK_OPTIMIZER_PRIMAL_SIMPLEX);
      else if ( 0==strcmp(argv[1],"dsim") )
        MSK_putintparam(task, MSK_IPAR_OPTIMIZER, MSK_OPTIMIZER_DUAL_SIMPLEX);
      else if ( 0==strcmp(argv[1],"intpnt") )
        MSK_putintparam(task, MSK_IPAR_OPTIMIZER, MSK_OPTIMIZER_INTPNT);

      /* Tell MOSEK about the call-back function. */
      maxtime = 3600;
      MSK_putcallbackfunc(task, usercallback, (void *) &maxtime);

      /* Turn all MOSEK logging off. */
      MSK_putintparam(task, MSK_IPAR_LOG, 0);

      r = MSK_optimize(task);

      MSK_solutionsummary(task, MSK_STREAM_MSG);
    }

    MSK_deletetask(&task);
  }
  MSK_deleteenv(&env);

  printf("Return code - %d\n", r);

  return ( r );
} /* main */

6.9. Customizing the warning and error reporting

You can customize the warning and error reporting in the C API. The MSK_putresponsefunc function can be used to register a user-defined function to be called every time a warning or an error is encountered by MOSEK. This user-defined function will then handle the error/warning as desired.

The following code shows how to define and register an error handling function:

/* Copyright: Copyright (c) 1998-2011 MOSEK ApS, Denmark. All rights reserved.

   File:    errorreporting.c

   Purpose: To demonstrate how the error reporting can be customized.
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "mosek.h"

/* A custom response handler. */
MSKrescodee MSKAPI handleresponse(MSKuserhandle_t handle,
                                  MSKrescodee     r,
                                  MSKCONST char   msg[])
{
  if ( r==MSK_RES_OK )
  {
    /* Do nothing. */
  }
  else if ( r<MSK_FIRST_ERR_CODE )
  {
    printf("MOSEK reports warning number %d: %s\n", r, msg);
  }
  else
  {
    printf("MOSEK reports error number %d: %s\n", r, msg);
  }

  return ( MSK_RES_OK );
} /* handleresponse */

int main(int argc, char *argv[])
{
  MSKenv_t    env;
  MSKrescodee r;
  MSKtask_t   task;

  r = MSK_makeenv(&env, NULL, NULL, NULL, NULL);

  if ( r==MSK_RES_OK )
    r = MSK_initenv(env);

  if ( r==MSK_RES_OK )
  {
    r = MSK_makeemptytask(env, &task);

    if ( r==MSK_RES_OK )
    {
      /* Input a custom warning and error handler function. */
      MSK_putresponsefunc(task, handleresponse, NULL);

      /* User-defined code goes here. */

      /* This will provoke an error. */
      if (r == MSK_RES_OK)
        r = MSK_putaij(task, 10, 10, 1.0);
    }
    MSK_deletetask(&task);
  }
  MSK_deleteenv(&env);

  printf("Return code - %d\n", r);

  if (r == MSK_RES_ERR_INDEX_IS_TOO_LARGE)
    return ( MSK_RES_OK );
  else
    return ( -1 );
} /* main */

The output from the code above is:

  MOSEK reports error number 1204: The index value 10 occurring in argument 'i' is too large.
Return code - 1204

6.10. Unicode strings

All strings, i.e. char * arguments, in the C API are assumed to be UTF8 encoded.

For more information about UTF8 encoded strings, please see http://en.wikipedia.org/wiki/UTF-8.

It is possible to convert a wchar_t string to a UTF8 string using the function MSK_wchartoutf8. The inverse function MSK_utf8towchar converts a UTF8 string to a wchar_t string.
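
A small sketch of the reverse conversion is shown below. It assumes that MSK_utf8towchar takes its arguments in the same order as MSK_wchartoutf8 used in the next subsection, i.e. the output buffer capacity, two length outputs, the output buffer and the input string; please consult the reference chapter for the exact signature and length conventions.

#include "mosek.h"

/* Sketch: convert a UTF8 encoded string back to a wchar_t string.
   Assumption: MSK_utf8towchar takes (output capacity, length outputs,
   output buffer, input string) in the same order as MSK_wchartoutf8. */
static MSKrescodee utf8_to_wide(char *utf8, wchar_t *wide, size_t widelen)
{
  size_t len, conv;

  return MSK_utf8towchar(widelen, &len, &conv, wide, utf8);
}

/* Typical use:
     wchar_t wide[512];
     r = utf8_to_wide("myfile.mps", wide, sizeof(wide)/sizeof(wide[0]));
*/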

6.10.1. A source code example

The example below demonstrates how to convert a wchar_t string to a UTF8 string.

/* Copyright: Copyright (c) 1998-2011 MOSEK ApS, Denmark. All rights reserved.

   File:    unicode.c

   Purpose: To demonstrate how to use unicode strings.
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "mosek.h"

int main(int argc, char *argv[])
{
  char        output[512];
  wchar_t     *input = L"myfile.mps";
  MSKenv_t    env;
  MSKrescodee r;
  MSKtask_t   task;
  size_t      len, conv;

  r = MSK_makeenv(&env, NULL, NULL, NULL, NULL);

  if ( r==MSK_RES_OK )
    r = MSK_initenv(env);

  if ( r==MSK_RES_OK )
  {
    r = MSK_makeemptytask(env, &task);

    if ( r==MSK_RES_OK )
    {
      /* The wchar_t string "input" specifying a file name is converted
         to a UTF8 string that can be passed to MOSEK. */
      r = MSK_wchartoutf8(sizeof(output), &len, &conv, output, input);

      if ( r==MSK_RES_OK )
      {
        /* output is now a UTF8 encoded string. */
        r = MSK_readdata(task, output);
      }

      if ( r==MSK_RES_OK )
      {
        r = MSK_optimize(task);

        MSK_solutionsummary(task, MSK_STREAM_MSG);
      }
    }
    MSK_deletetask(&task);
  }
  MSK_deleteenv(&env);

  printf("Return code - %d\n", r);

  return ( r );
} /* main */

6.10.2. Limitations

Please note that the MPS and LP formats are ASCII based formats, whereas the OPF, MBT, and XML formats are UTF8 based. This implies that problems which contain non-ASCII variable or constraint names cannot be written correctly to an MPS or LP formatted file.
