XPRESS

# Introduction

The GAMS/XPRESS solver is based on the XPRESS Optimization Subroutine Library, and runs only in conjunction with the GAMS modeling system. GAMS/XPRESS (also simply referred to as XPRESS) is a versatile, high-performance optimization system. The system integrates:

• a powerful simplex-based LP solver.
• a MIP module with cut generation for integer programming problems.
• a barrier module implementing a state-of-the-art interior point algorithm for very large LP problems.
• a sequential linear programming (SLP) solver for (mixed-integer) nonlinear programs of type NLP, CNS, and MINLP.

The GAMS/XPRESS solver is installed automatically with your GAMS system. Without a license, it runs in student or demonstration mode (i.e. it solves small models only). If your GAMS license includes XPRESS, the license imposes no size or algorithm restriction, and no separate licensing procedure is required. Solving continuous or mixed-integer nonlinear models with XPRESS SLP requires a GAMS license that includes XPRESS. To use XPRESS Knitro on (mixed-integer) nonlinear models, the GAMS license must include GAMS/Knitro in addition to XPRESS.

# Usage

To explicitly request that a model be solved with XPRESS, insert the statement

option LP = xpress;  { or MIP, RMIP, NLP, CNS, DNLP, RMINLP, MINLP, QCP, MIQCP, or RMIQCP }


somewhere before the solve statement. If XPRESS has been selected as the default solver (e.g. during GAMS installation) for the model type in question, the above statement is not necessary.
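As a minimal sketch, a small LP solved with XPRESS might look as follows (the model, variable, and equation names are illustrative):

variables x, y, z;
equations obj, cap;
obj.. z =e= x + 2*y;
cap.. x + y =l= 10;
x.up = 6;  y.up = 6;
model m / all /;
option lp = xpress;
solve m using lp maximizing z;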

The standard GAMS options (e.g. iterlim, optcr) can be used to control XPRESS. For more details, see section Controlling a Solver via GAMS Options. Note, however, that apart from reslim these options apply only to linear and quadratic solves, not to nonlinear solves. Termination conditions for XPRESS SLP can be set via the SLP Termination Options.
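For example, assuming a model named m is solved later in the file, the generic limits can be set in a single option statement before the solve (the values are illustrative):

option iterlim = 200000, reslim = 600, optcr = 0.01;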

In addition, XPRESS-specific options can be specified by using a solver option file. While the content of an option file is solver-specific, the details of how to create an option file and instruct the solver to use it are not. This topic is covered in section The Solver Options File.

An example of a valid XPRESS option file is:

* sample XPRESS options file
algorithm simplex
presolve   0
IterLim    50000


For most models this is all you need. In some cases, XPRESS-specific options can yield further performance improvements or serve other purposes.

## Linear and Quadratic Programming

The options advBasis, algorithm, basisOut, mpsOutputFile, reform, reRun, and reslim control the behavior of the GAMS/XPRESS link. The options crash, extraPresolve, lpIterlimit, presolve, scaling, threads, and trace set XPRESS library control variables, and can be used to fine-tune XPRESS. See section General LP / MIP / QP Options for more details of XPRESS general options.

### LP

See section LP Options for more details of XPRESS library control variables which can be used to fine-tune the XPRESS LP solver.

### MIP

In some cases, the branch-and-bound MIP algorithm will stop with a proven optimal solution or when unboundedness or (integer) infeasibility is detected. In most cases, however, the global search is stopped through one of the generic GAMS options:

1. iterlim (on the cumulative pivot count) or reslim (in seconds of CPU time),
2. optca & optcr (stopping criteria based on gap between best integer solution found and best possible) or
3. nodlim (on the total number of nodes allowed in the B&B tree).
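These stopping criteria can also be set per model via the corresponding model attributes before the solve (the model name m and the values are illustrative):

m.nodlim = 50000;
m.optca  = 0;
m.optcr  = 0.02;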

It is also possible to set the maxNode and maxMipSol options to stop the global search: see section MIP Options for XPRESS control variables for MIP. The options loadMipSol, mipCleanup, mipTrace, mipTraceNode, and mipTraceTime control the behavior of the GAMS/XPRESS link on MIP models. The other options in section MIP Options set XPRESS library control variables, and can be used to fine-tune the XPRESS MIP solver.
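As a sketch, an option file mixing link options and XPRESS library controls for a MIP solve might look like this (the values are illustrative, not recommendations):

* sample XPRESS options file for a MIP solve
loadMipSol   1
maxNode      100000
cutStrategy  2
mipLog       3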

### MIP Solution Pool

Typically, XPRESS finds a number of integer feasible points during its global search, but only the final solution is available. The MIP solution pool makes it possible to store multiple integer feasible points (aka solutions) for later processing. The MIP solution pool operates in one of two modes: by default (solnpoolPop = 1) the global search is not altered, while with solnpoolPop = 2 a selected set (potentially all) of the integer feasible solutions is enumerated.

The MIP enumeration proceeds until all MIP solutions are enumerated or cut off, or until a user-defined limit is reached. Whenever a new solution is generated by the enumerator, it is presented to the solution pool manager. If there is room in the pool, the new solution is added. If the pool is full, a cull round is performed to select a number of solutions to be thrown out - these solutions can be those stored in the pool and/or the new solution. Solutions can be selected for culling based on their MIP objective value and/or the overall diversity of the solutions in the pool. If neither is chosen, a default choice is made to throw out one solution based on objective values. Whenever a solution is thrown out based on its MIP objective, the enumeration space is pruned based on the cutoff defined by this objective value.

By default, the capacity of the pool is set very large, as is the number of cull rounds to perform, so selecting only solnpoolPop = 2 will result in full enumeration. However, many different strategies can be executed by setting the solution pool options. For example, to choose the $$N$$-best solutions, simply set the solution pool capacity to $$N$$. When the pool is full, new solutions will force a cull round, and the default is to reject one solution based on its objective and update the cutoff accordingly. To generate all solutions with an objective as good as $$X$$, leave the pool capacity set at a high level but set the cutoff to $$X$$ using the mipabscutoff option. To return the first $$N$$ solutions found, set the solution pool capacity to $$N$$ and solnpoolCullRounds = 0: as soon as the pool is full, the enumeration will stop on the cull round limit.
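The N-best strategy described above can be written as an option file (here N = 5; the GDX file name is illustrative):

* store only the 5 best solutions
solnpool          solnpool.gdx
solnpoolPop       2
solnpoolCapacity  5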

A number of other strategies for controlling the solution pool behavior are possible by combining different options. Several working examples are provided in the GAMS Test Library in models xpress03.gms, xpress04.gms, and xpress05.gms.

See section MIP Solution Pool Options for XPRESS control variables for MIP Solution Pool.

### Newton-Barrier

The barrier method is invoked by default for quadratic problems, and can be selected for linear models by using one of the options

algorithm       barrier
defaultalg      4


The barrier method is likely to use more memory than the simplex method. No warm start is done, so if an advanced basis exists, you may not wish to use the barrier solver.

See section Newton-barrier Options for XPRESS control variables for the Newton-Barrier method.

## Nonlinear Programming

XPRESS can solve nonlinear programs of type NLP, CNS and MINLP (and its relaxed version) using the sequential linear programming solver XPRESS SLP or the interior-point / sequential quadratic programming solver XPRESS Knitro. Convexity is not required, but for non-convex programs XPRESS will in general find local optimal solutions only. The XPRESS multistart can be used to increase the likelihood of finding a good solution by starting from many different initial points.

XPRESS SLP solves nonlinear programs by successive linearization of the nonlinearities. These linearizations, which can be controlled by the options in NLP Augmentation and Linearization Options, are solved by the LP or QCP solver. Therefore, XPRESS user options for LP or QCP are also relevant when solving nonlinear programs. Note that the NLP presolve is independent of the LP presolve that is executed in each XPRESS SLP iteration.

### Termination

In most cases it is sufficient to control the termination of XPRESS SLP via slpIterLimit, reslim, slpValidationTargetK, and slpValidationTargetR, as the latter two automatically control the other XPRESS SLP convergence measures by default. More experienced users may want to modify the other convergence measures, which fall into two groups:

• Strict convergence: describes the numerical behavior of convergence in the formal, mathematical sense. User options: slpCTol, slpATolA, slpATolR.
• Extended convergence: measures the quality of the linearization, including the effect of changes to the nonlinear terms that contribute to a variable in the linearization. User options: slpMTolA, slpMTolR, slpITolA, slpITolR, slpSTolA, slpSTolR.

When each variable has converged in one of the above cases, XPRESS SLP terminates based on the following stopping criteria:

• Baseline static objective convergence measure that compares changes in the objective over a given number of iterations relative to the average objective value. User options: slpVCount, slpVLimit, slpVTolA, slpVTolR.
• Static objective convergence measure that is applied when there are no unconverged variables in active constraints. User options: slpOCount, slpOTolA, slpOTolR.
• Static objective convergence measure that is applied when a practical solution (all variables have converged and there are no active step bounds) has been found. User options: slpXCount, slpXLimit, slpXTolA, slpXTolR.
• Extended convergence continuation that is applied when a practical solution has been found. It checks if it is worth continuing. User options: slpWCount, slpWTolA, slpWTolR.

The user options slpConvergeATol, ..., slpConvergeXTol enable or disable the individual convergence measures and stopping conditions.
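A minimal option file following this advice might look as follows (the tolerance and limit values are illustrative):

* control SLP termination via the validation targets
slpIterLimit          500
slpValidationTargetK  1e-6
slpValidationTargetR  1e-6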

### Output

The output or logging can be controlled by the NLP Log Options. The default XPRESS SLP iteration output shows:

• It: Iteration number.
• LP: The LP status of the linearization (O: optimal; I: infeasible; U: unbounded; X: interrupted)
• NetObj: The net objective of the SLP iteration.
• ValObj: The original objective function value.
• ErrorSum: Sum of the error delta variables. A measure of infeasibility.
• ErrorCost: The value of the weighted error delta variables in the objective. A measure of the effort needed to push the model towards feasibility.
• Validate: Relative feasibility measure (calculated only if convergence is likely)
• KKT: Relative optimality measure (calculated only if convergence is likely)
• Unconv: The number of SLP variables that are not converged.
• Ext: The number of SLP variables that are converged, but only by extended criteria.
• Action: Special actions (0: failed line search; B: enforcing step bounds; E: some infeasible rows were enforced; G: global variables were fixed; P: solution needed polishing, postsolve instability; P!: solution polishing failed; R: penalty error vectors removed; V: feasibility validation induces further iterations; K: optimality validation induces further iterations)
• T: Time.

### XPRESS Knitro

Nonlinear programs can also be solved by XPRESS Knitro using the option slpSolver. In this case, the nonlinear program is passed to Knitro after the NLP presolve. Setting slpSolver to auto lets XPRESS choose XPRESS SLP or XPRESS Knitro automatically based on the problem instance. XPRESS Knitro options can be specified in a Knitro solver option file, which is selected via slpKnitroOptFile. Note that XPRESS Knitro does not support all Knitro options; specified but unsupported options trigger a warning and are then ignored.
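For example, a one-line option file lets XPRESS pick the nonlinear solver per instance:

* choose between XPRESS SLP and XPRESS Knitro automatically
slpSolver auto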

For more information about this nonlinear programming solver, see the GAMS/Knitro documentation.

# Options

The tables that follow contain the XPRESS options. They are organized by function (e.g. LP or MIP) and also by type: some options control the behavior of the GAMS/XPRESS link and will be new even to experienced XPRESS users, while other options exist merely to set control variables in the XPRESS library and may be familiar to XPRESS users.

## General LP / MIP / QP Options

Option Description Default
advBasis use advanced basis provided by GAMS auto
algorithm choose between simplex and barrier algorithm
This option is used to select the barrier method to solve LPs. By default the barrier method will do a crossover to find a basic solution.
barrier: Use the barrier algorithm
simplex: Use the simplex algorithm
simplex
basisOut directs optimizer to output an MPS basis file
In general this option is not used in a GAMS environment, as GAMS maintains basis information for you automatically.
none
crash control for basis crashing procedure
A crash procedure is used to quickly find a good basis. This option is only relevant when no advanced basis is available.
2
deterministic control for deterministic behavior of concurrent solves 1
extraPresolve initial number of extra elements to allow for in the presolve
The space required to store extra presolve elements is allocated dynamically, so it is not necessary to set this control. In some cases, the presolve may terminate early if this is not increased.
0
lpIterLimit set the iteration limit for simplex solves
For MIP models, this is a per-node iteration limit for the B&B tree. Overrides the iterlim option.
mpsNameLength maximum length of MPS names in characters
Maximum length of MPS names in characters. Internally it is rounded up to the smallest multiple of 8. MPS names are right padded with blanks. Maximum value is 64.
0
mpsOutputFile Name of MPS output file
If specified XPRESS-MP will generate an MPS file corresponding to the GAMS model: the argument is the file name to be used. You can prefix the file name with an absolute or relative path.
none
presolve sets presolve strategy
0: presolve not applied
1: presolve applied
2: presolve applied, but redundant bounds are not removed
3: presolve applied, and redundant bounds always removed
1
reform substitute out objective var and equ when possible 1
reRun rerun with primal simplex when not optimal/feasible
Applies only in cases where presolve is turned on and the model is diagnosed as infeasible or unbounded. If rerun is nonzero, we rerun the model using primal simplex with presolve turned off in hopes of getting better diagnostic information. If rerun is zero, no good diagnostic information exists, so we return no solution, only an indication of unboundedness/infeasibility.
0
reslim overrides GAMS reslim option
Sets the resource limit. When the solver has used more than this amount of CPU time (in seconds) the system will stop the search and report the best solution found so far.
scaling bitmap control for internal scaling algorithm
Bitmap to determine how internal scaling is done. If set to 0, no scaling takes place. The default of 163 (= 1 + 2 + 32 + 128) implies row and column scaling by the maximum element method, together with objective scaling for the simplex method.
bit 0 = 1: Row scaling
bit 1 = 2: Column scaling
bit 2 = 4: Row scaling again
bit 3 = 8: Maximin
bit 4 = 16: Curtis-Reid
bit 5 = 32: Off implies scale by geometric mean, on implies scale by maximum element. Not applicable for maximin and Curtis-Reid scaling.
bit 6 = 64: Treat big-M rows as normal rows
bit 7 = 128: Scale objective function for the simplex method
bit 8 = 256: Exclude the quadratic part of constraint when calculating scaling factors
bit 9 = 512: Scale before presolve
bit 10 = 1024: Do not scale rows up
bit 11 = 2048: Do not scale columns down
bit 12 = 4096: Do not apply automatic global objective scaling
bit 13 = 8192: RHS scaling
bit 14 = 16384: Disable aggressive quadratic scaling
bit 15 = 32768: Enable explicit linear slack scaling
163
threads global default thread count
Controls the number of threads to use. Positive values will be compared to the number of available cores detected and reduced if greater than this amount. Non-positive values are interpreted as the number of cores to leave free so setting threads to 0 uses all available cores while setting threads to -1 leaves one core free for other tasks.
Range: {-∞, ..., ∞}
1
trace turns on output of infeasibility diagnosis during presolve
Control of the infeasibility diagnosis during presolve - if nonzero, infeasibility will be explained.
0
writePrtSol directs optimizer to output a "printsol" file

## LP Options

Option Description Default
bigM infeasibility penalty used in the "big M" method auto
bigMMethod controls use of "big M" method - 0=no, 1=yes
The alternative to using the big M method is to use a phase I / phase II simplex.
auto
concurrentThreads control for concurrent LP algorithm
If positive, determines the number of threads used to run the concurrent LP code. If -1, the threads control will determine the number of threads used for the LP solves. This control only affects the LP solves if the deterministic control is set to 0.
Range: {-1, ..., ∞}
-1
defaultAlg sets the default LP algorithm
1: automatic
2: dual simplex
3: primal simplex
4: Newton barrier
1
dualThreads number of threads for parallel dual simplex algorithm
If positive, determines the number of threads used to run the parallel dual simplex code. If -1, the threads control will be used.
Range: {-1, ..., ∞}
-1
etaTol zero tolerance on eta elements
During each iteration, the basis inverse is premultiplied by an elementary matrix, which is the identity except for one column, the eta vector. Elements of eta vectors whose absolute value is smaller than etatol are taken to be zero in this step.
1e-13
feasTol zero tolerance for RHS and bound values
This is the zero tolerance on right hand side values, bounds and range values. If one of these is less than or equal to feastol in absolute value, it is treated as zero.
1e-06
invertFreq frequency of basis re-inversion
The frequency with which the basis will be inverted. A value of -1 implies automatic.
Range: {-1, ..., ∞}
auto
invertMin minimum number of iterations between basis re-inversion 3
lpLog print control for LP log
Specifies the frequency at which the simplex iteration log is printed.
n < 0: detailed output every -n iterations
n = 0: log displayed at the end of the solution process
n > 0: summary output every n iterations
100
lpThreads control for concurrent LP algorithm: alias for concurrentThreads
Range: {-1, ..., ∞}
-1
matrixTol zero tolerance on matrix elements
If the value of a matrix element is less than or equal to matrixtol in absolute value, it is treated as zero.
1e-9
optimalityTol zero tolerance on reduced costs
On each iteration, the simplex method searches for a variable to enter the basis which has a negative reduced cost. The candidates are only those variables which have reduced costs less than the negative value of optimalitytol.
1e-6
penalty minimum absolute penalty variable coefficient used in the "big M" method auto
pivotTol zero tolerance on pivot elements in simplex method
On each iteration, the simplex method seeks a nonzero matrix element to pivot on. Any element with absolute value less than pivottol is treated as zero for this purpose.
1e-9
pricingAlg determines the pricing method to use
At each iteration, the pricing method selects which variable enters the basis. In general DEVEX pricing requires more time on each iteration but may reduce the total number of iterations, whereas partial pricing saves time on each iteration but may result in more iterations.
-1: partial pricing
0: automatic
1: DEVEX pricing
2: Steepest edge
3: Steepest edge with unit initial weights
0
relPivotTol minimum size of pivot element relative to largest element in column
At each iteration a pivot element is chosen within a given column of the matrix. The relative pivot tolerance, relpivottol, is the size of the element chosen relative to the largest possible pivot element in the same column.
1e-6

## MIP Options

Option Description Default
backTrack determines selection of next node in case of a full backtrack
1: Unused
2: Select the node with the best estimated solution
3: Select the node with the best bound on the solution
4: Select the deepest node in the search tree (aka DFS)
5: Select the highest node in the search tree (aka BFS)
6: Select the earliest node created
7: Select the latest node created
8: Select a node randomly
9: Select the node whose LP relaxation contains the fewest number of infeasible global entities
10: Combination of 2 and 9
11: Combination of 2 and 4
12: Combination of 3 and 4
3
breadthFirst determines number of nodes to include in a breadth-first search
Used only if nodeselection = 4.
Range: {1, ..., ∞}
11
coverCuts number of rounds of lifted cover inequalities at the top node
A lifted cover inequality is an additional constraint that can be particularly effective at reducing the size of the feasible region without removing potential integral solutions. The process of generating these can be carried out a number of times, further reducing the feasible region, albeit incurring a time penalty. There is usually a good payoff from generating these at the top node, since these inequalities then apply to every subsequent node in the tree search.
auto
cutDepth maximum depth in search tree at which cuts will be generated
Generating cuts can take a lot of time, and is often less important at deeper levels of the tree since tighter bounds on the variables have already reduced the feasible region. A value of 0 signifies that no cuts will be generated.
Range: {-1, ..., ∞}
auto
cutFreq frequency at which cuts are generated in the tree search
If the depth of the node modulo cutfreq is zero, then cuts will be generated.
Range: {-1, ..., ∞}
auto
cutStrategy specifies the cut strategy
An aggressive cut strategy, generating a greater number of cuts, will result in fewer nodes to be explored, but with an associated time cost in generating the cuts. The fewer cuts generated, the less time taken, but the greater subsequent number of nodes to be explored.
-1: automatic
0: no cuts
1: conservative cut strategy
2: moderate cut strategy
3: aggressive cut strategy
-1
gomCuts number of rounds of Gomory cuts at the top node
Gomory cuts can always be generated if the current node does not yield an integral solution. However, they are usually not as effective as lifted cover inequalities in reducing the size of the feasible region.
auto
heurThreads number of threads for running parallel root node heuristics
If positive, determines the number of root threads dedicated to running parallel heuristics. If 0, heuristics are run sequentially with the root LP solver and cutting. If -1, the threads control will be used as the default.
Range: {-1, ..., ∞}
0
loadMipSol loads a MIP solution (the initial point)
If true, the initial point provided by GAMS will be passed to the optimizer to be treated as an integer feasible point. The optimizer uses the values for the discrete variables only: the level values for the continuous variables are ignored and are calculated by fixing the integer variables and reoptimizing. In some cases, loading an initial MIP solution can improve performance. In addition, there will always be a feasible solution to return.
0
maxMipSol maximum number of integer solutions in MIP tree search
This specifies a limit on the number of integer solutions to be found (the total number, not necessarily the number of distinct solutions). 0 means no limit.
0
maxNode maximum number of nodes to explore in MIP tree search
If the GAMS nodlim model suffix is set, that setting takes precedence.
maxint
mipAbsCutoff nodes with objective worse than this value are ignored
If the user knows that they are interested only in values of the objective function which are better than some value, this can be assigned to mipabscutoff. This allows the Optimizer to ignore solving any nodes which may yield worse objective values, saving solution time.
Range: [-∞, ∞]
auto
mipAbsStop stopping tolerance for gap: if met XPRESS returns proven optimal
The global search is stopped if the gap is reduced to this value. This check is implemented in the Optimizer library, and if the search is stopped on this check the Optimizer returns a status of proven optimal. For this reason you should use the GAMS <modelname>.optca parameter instead of this option.
0.0
mipAddCutoff amount to add to MIP incumbent to get the new cutoff
Once an integer solution has been found whose objective function is equal to or better than mipabscutoff, improvements on this value may not be interesting unless they are better by at least a certain amount. If mipaddcutoff is nonzero, it will be added to mipabscutoff each time an integer solution is found which is better than this new value. This cuts off sections of the tree whose solutions would not represent substantial improvements in the objective function, saving processor time. Note that this should usually be set to a negative number for minimization problems, and positive for maximization problems. Notice further that the maximum of the absolute and relative cut is actually used.
Range: [-∞, ∞]
-1e-5
mipCleanup clean up the MIP solution (round-fix-solve) to get duals
If nonzero, clean up the integer solution obtained, i.e. round and fix the discrete variables and re-solve as an LP to get some marginal values for the discrete vars.
1
mipLog print control for MIP log
0: no printout in global
1: only print out summary statement at the end
2: print out detailed log at all solutions found
3: print out detailed log at each node
n < 0: Print out summary log at each nth node, or when a new solution is found
-100
mipPresolve bitmap controlling the MIP presolve
If set to 0, no presolve will be performed.
bit 0 = 1: reduced cost fixing will be performed at each node
bit 1 = 2: primal reductions will be performed at each node
bit 2 = 4: unused
bit 3 = 8: node preprocessing is allowed to change bounds on continuous columns
bit 4 = 16: dual reductions will be performed at each node
bit 5 = 32: allow global (non-bound) tightening of the problem during the tree search
bit 6 = 64: the objective function will be used to find reductions at each node
bit 7 = 128: Allow the branch-and-bound tree search to be restarted if it appears to be advantageous
bit 8 = 256: Allow that symmetry is used to presolve the node problem
-257
mipRelCutoff relative difference between the MIP incumbent and the new cutoff
Percentage of the LP solution value to be added to the value of the objective function when an integer solution is found, to give the new value of mipabscutoff. The effect is to cut off the search in parts of the tree whose best possible objective function would not be substantially better than the current solution.
1e-4
mipRelStop stopping tolerance for relative gap: if met XPRESS returns proven optimal
The global search is stopped if the relative gap is reduced to this value. This check is implemented in the Optimizer library, and if the search is stopped on this check the Optimizer returns a status of proven optimal. For this reason you should use the GAMS <modelname>.optcr parameter instead of this option.
1e-4
mipstopexpr stopping expression for branch and bound
If the provided logical expression is true, the branch-and-bound is aborted. Supported values are: resusd, nodusd, objest, objval. Supported operators are: +, -, *, /, ^, %, !=, ==, <, <=, >, >=, !, &&, ||, (, ), abs, ceil, exp, floor, log, log10, pow, sqrt. Example: nodusd >= 1000 && abs(objest - objval) / abs(objval) < 0.1
mipThreads number of threads for parallel mip algorithm
If positive, determines the number of threads used to run the parallel MIP code. If -1, the threads control will be used.
Range: {-1, ..., ∞}
-1
mipTol integrality tolerance for discrete vars
This is the tolerance within which a decision variable's value is considered to be integral.
5e-6
mipTrace name of MIP trace file
A miptrace file with the specified name will be created. This file records the best integer and best bound values every miptracenode nodes and at miptracetime-second intervals.
none
mipTraceNode node interval between MIP trace file entries 100
mipTraceTime time interval, in seconds, between MIP trace file entries 5
nodeSelection sets node selection strategy
This determines which nodes will be considered for solution once the current node has been solved.
1: local first: choose between descendant and sibling nodes if available, o/w from all outstanding nodes
2: best first: choose from all outstanding nodes
3: local depth first: choose between descendant and sibling nodes if available, o/w from the deepest nodes
4: best first, then local first: best first for the first BREADTHFIRST nodes, then local first is used
5: pure depth first: choose from the deepest outstanding nodes
auto
objGoodEnough stop once an objective this good is found none
preProbing control probing done on binary variables during presolve
This is done by fixing a binary to each of its values in turn and analyzing the implications.
-1: automatic
0: disabled
1: light probing - only few implications will be examined
2: full probing - all implications for all binaries will be examined
3: full probing and repeat as long as the problem is significantly reduced
-1
pseudoCost default pseudo-cost
The default pseudo cost used in estimation of the degradation associated with an unexplored node in the tree search. A pseudo cost is associated with each integer decision variable and is an estimate of the amount by which the objective function will be worse if that variable is forced to an integral value.
1e-2
sleepOnThreadWait control behavior of waiting threads in a MIP solve auto
symmetry adjust overall amount of effort in symmetry detection
0: no symmetry detection
1: conservative effort
2: intensive symmetry search
1
symSelect adjust what is searched in symmetry detection
-1: automatic
0: search the whole matrix (otherwise the 0, 1, and -1 coefs only)
1: search all entities (otherwise binaries only)
-1
treeCoverCuts number of rounds of lifted cover inequalities at tree nodes
The number of rounds of lifted cover inequalities generated at nodes other than the top node in the tree. Compare with the description for covercuts. A value of -1 indicates the number of rounds is determined automatically.
auto
treeGomCuts number of rounds of Gomory cuts at tree nodes
The number of rounds of Gomory cuts generated at nodes other than the top node in the tree. Compare with the description for gomcuts. A value of -1 indicates the number of rounds is determined automatically.
auto
treePresolve amount of full presolving to apply at tree nodes
-1: automatic
0: disabled
1: cautious strategy - only when significant reductions possible
2: medium strategy
3: aggressive strategy - most frequently
-1
treePresolveKeepBasis control use of existing basis when presolving at tree nodes
0: drop basis and resolve node from scratch
1: presolve/preserve the basis and warm-start
2: ignore the basis during presolve and attempt warm-start
auto
varSelection determines how to use pseudo-costs
This determines how to combine the pseudo costs associated with the integer variables to obtain an overall estimated degradation in the objective function that may be expected by branching on a given integer variable. The variable selected to be branched on is the one with the maximum estimate.
-1: automatic
0: unused
1: the minimum of the up and down pseudo costs
2: the up pseudo cost plus the down pseudo cost
3: the max of the up and down pseudo costs, plus twice the min of the up and down pseudo costs
4: the maximum of the up and down pseudo costs
5: the down pseudo cost
6: the up pseudo cost
7: a weighted combination of the up and down pseudo costs, where the weights depend on how fractional the variable is
8: the product of the up and down pseudo costs
-1

## MIP Solution Pool Options

Option Description Default
solnpool solution pool file name
If set, the integer feasible solutions generated during the global search will be saved to a solution pool. A GDX file whose name is given by this option will be created and will contain an index to separate GDX files containing the individual solutions in the solution pool.
none
solnpoolCapacity limit on number of solutions to store
Range: {1, ..., ∞}
999999999
solnpoolCullDiversity cull N solutions based on solution diversity
When performing a round of culls due to a full solution pool, this control sets the maximum number to cull based on the diversity of the solutions in the pool.
Range: {-1, ..., ∞}
-1
solnpoolCullObj cull N solutions based on objective values
When performing a round of culls due to a full solution pool, this control sets the maximum number to cull based on the MIP objective function.
Range: {-1, ..., ∞}
-1
solnpoolCullRounds terminate solution generation after N culling rounds
Limits the rounds of culls performed due to a full solution pool.
999999999
solnpoolDupPolicy sets policy for detecting/storing duplicate solutions
Determines whether to check for duplicate solutions when adding to the MIP solution pool, and what method is used to check for duplicates.
0: keep all
1: compare all vars, exact matches discarded
2: compare rounded discrete, exact continuous
3: compare rounded discrete only
3
solnpoolmerge solution pool file name for merged solutions none
solnpoolnumsym maximum number of variable symbols when writing merged solutions
Range: {1, ..., ∞}
10
solnpoolPop controls method used to populate the solution pool
By default the MIP solution pool merely stores the incumbent solutions that are found during the global search, without changing the behavior of the search itself. In contrast, the MIP solution enumerator makes it possible to enumerate all or many of the feasible solutions of the MIP, instead of searching only for the best solution.
1: generate solutions using the normal search algorithm
2: invoke the solution enumerator to generate solutions
1
solnpoolPrefix file name prefix for GDX solution files soln
solnpoolVerbosity controls verbosity of solution pool routines
-1: no output
0: output only messages coming from the XPRESS libraries
1: add some messages logging the effect of solution pool options
2: debugging mode
0
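
The solution pool options above combine naturally. As a sketch (file names and values are hypothetical), an `xpress.opt` that enumerates feasible solutions into a capped pool could read:

```
* enumerate feasible solutions into a pool of at most 100
* (allsol.gdx is a hypothetical file name)
solnpool          allsol.gdx
solnpoolPop       2
solnpoolCapacity  100
solnpoolDupPolicy 2
solnpoolVerbosity 1
```

With `solnpoolDupPolicy 2`, solutions whose rounded discrete parts and exact continuous parts match an existing pool entry are discarded as duplicates.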

## QP Options

Option Description Default
eigenvalueTol zero tolerance for negative eigenvalues of quadratic matrices
A quadratic matrix is considered not to be positive semi-definite if its smallest eigenvalue is smaller than the negative of this value.
1e-6
ifCheckConvexity controls convexity check for QP models - 0=no, 1=yes
Applies to quadratic, mixed integer quadratic and quadratically constrained problems. Checking convexity takes some time, thus for problems that are known to be convex it might be reasonable to switch the checking off.
1
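
For a model that is known to be convex, the convexity check can be skipped to save time; a sketch (only safe if the model really is convex):

```
* skip the convexity check - assumes the QP is known to be convex
ifCheckConvexity 0
```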

## Newton-barrier Options

Option Description Default
barAlg determines which barrier algorithm to use
-1: automatic
0: unused
1: infeasible-start barrier alg
2: homogeneous self-dual barrier alg
3: start with 2 optionally switch to 1
-1
barCrash determines the type of crash used for the crossover from barrier
0: Turn off all crash procedures
1-6: from 1 (most conservative) to 6 (most aggressive)
4
barDualStop stopping tolerance for dual infeasibilities in barrier: 0=auto
The dual constraint residuals must be smaller than this value for the current point to be considered dual feasible.
auto
barGapStop stopping tolerance for relative duality gap in barrier: 0=auto
The gap between the primal and dual solutions must be smaller than this value for the current point to be considered optimal.
auto
barIndefLimit limit consecutive indefinite barrier iterations that will be performed
For QP models, once this limit is hit, the problem will be reported to be indefinite.
Range: {1, ..., ∞}
15
barIterLimit maximum number of barrier iterations 500
barOrder controls the Cholesky factorization in barrier
0: automatic
1: Minimum degree method. This selects diagonal elements with the smallest number of nonzeros in their rows or columns.
2: Minimum local fill method. This considers the adjacency graph of nonzeros in the matrix and seeks to eliminate nodes that minimize the creation of new edges.
3: Nested dissection method. This considers the adjacency graph and recursively seeks to separate it into non-adjacent pieces.
auto
barOutput controls the level of solution output from barrier
0: No output
1: At each iteration
1
barPrimalStop stopping tolerance for primal infeasibilities in barrier: 0=auto
The primal constraint residuals must be smaller than this value for the current point to be considered primal feasible.
auto
barStart controls the computation of the barrier starting point
0: automatic
1: uses simple heuristics to compute the starting point based on the magnitudes of the matrix entries
2: uses the pseudoinverse of the constraint matrix to determine primal and dual initial solutions
0
barStepStop stopping tolerance on the step size of the barrier search direction
If the step size is smaller, the current solution will be returned.
1e-16
barThreads number of threads for parallel barrier algorithm
Range: {-1, ..., ∞}
-1
cpuPlatform selects vectorized instruction set to use for barrier method
Generic code and SSE2 or AVX optimized code will result in a deterministic or reproducible solution path. AVX2 code may result in a nondeterministic solution path.
-2: Highest supported: generic, SSE2, AVX or AVX2
-1: Highest supported deterministic: generic, SSE2 or AVX
0: generic code compatible with all CPUs
1: SSE2 optimized code
2: AVX optimized code
3: AVX2 optimized code
-1
crossover crossover control for barrier method
Determines whether and how the barrier method will cross over to the simplex method when an optimal solution has been found, in order to provide an end basis.
-1: automatic
0: no crossover
1: primal crossover first
2: dual crossover first
auto
crossoverThreads number of threads for parallel barrier algorithm
If positive, determines the number of threads used to run the crossover code. If -1, the threads control will determine the number of threads used for the crossover.
Range: {-1, ..., ∞}
-1
denseColLimit controls trigger point for special treatment of dense columns in Cholesky factorization 0
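
Several of the barrier controls are typically set together. A hypothetical `xpress.opt` selecting the homogeneous self-dual algorithm with dual-first crossover on four threads (illustrative values only):

```
* homogeneous self-dual barrier, dual crossover first, 4 threads
barAlg      2
crossover   2
barThreads  4
barOutput   1
```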

## General NLP / MINLP Options

Option Description Default
slpAlgorithmCascadeBounds Step bounds are updated to accommodate cascaded values (otherwise cascaded values are pushed to respect step bounds)
Normally, cascading will respect the step bounds of the SLP variable being cascaded. However, allowing the cascaded value to fall outside the step bounds (i.e. expanding the step bounds) can lead to better linearizations, as cascading will set better values for the SLP variables regarding their determining rows; note that this latter strategy may interfere with convergence of the cascaded variables.
0: Disable
1: Enable
0
slpAlgorithmClampExtendedActiveSB Apply clamping when converged on extended criteria only with some variables having active step bounds
When clamping is applied, then in any iteration when the solution would normally be deemed converged on extended criteria only, an extra step bound shrinking step is applied to help imposing strict convergence. In this variant, clamping is only applied on variables that have converged on extended criteria only and have active step bounds.
0: Disable
1: Enable
0
slpAlgorithmClampExtendedAll Apply clamping when converged on extended criteria only
When clamping is applied, then in any iteration when the solution would normally be deemed converged on extended criteria only, an extra step bound shrinking step is applied to help imposing strict convergence. In this variant, clamping is applied on all variables that have converged on extended criteria only.
0: Disable
1: Enable
0
slpAlgorithmDynamicDamping Use dynamic damping
Dynamic damping is sometimes an alternative to step bounding as a means of encouraging convergence, but it does not have the same power to force convergence as do step bounds.
0: Disable
1: Enable
0
slpAlgorithmEscalatePenalties Escalate penalties
Constraint penalties are increased after each SLP iteration where penalty vectors are present in the solution. Escalation applies an additional scaling factor to the penalty costs for active errors. This helps to prevent successive solutions becoming "stuck" because of a particular constraint, because its cost will be raised so that other constraints may become more attractive to violate instead and thus open up a new region to explore.
0: Disable
1: Enable
0
slpAlgorithmEstimateStepBounds Estimate step bounds from early SLP iterations
If initial step bounds are not being explicitly provided, this gives a good method of calculating reasonable values. Values will tend to be larger rather than smaller, to reduce the risk of infeasibility caused by excessive tightness of the step bounds.
0: Disable
1: Enable
1
slpAlgorithmHoldValues Do not update values which are converged within strict tolerance
Models which are numerically unstable may benefit from this setting, which does not update values which have effectively hardly changed. If a variable subsequently does move outside its strict convergence tolerance, it will be updated as usual.
0: Disable
1: Enable
0
slpAlgorithmMaxCostOption Continue optimizing after penalty cost reaches maximum
Normally if the penalty cost reaches its maximum (by default the value of Xpress infinity), the optimization will terminate with an unconverged solution. If the maximum value is set to a smaller value, then it may make sense to continue, using other means to determine when to stop.
0: Disable
1: Enable
0
slpAlgorithmNoLPPolishing Skip the solution polishing step if the LP postsolve returns a slightly infeasible, but claimed optimal solution
Due to the nature of the SLP linearizations, and in particular because of the large differences in the objective function (model objective against penalty costs), some dual reductions in the linear presolver might introduce numerically unstable reductions that cause slight infeasibilities to appear in postsolve. It is typically more efficient to remove these infeasibilities with an extra call to the linear optimizer than to switch these reductions off, which usually has a significant cost in performance. This bit is provided for numerically very hard problems, when the polishing step proves to be too expensive (Xpress-SLP will report any such cases in the final log summary).
0: Disable
1: Enable
0
slpAlgorithmNoStepBounds Do not apply step bounds
The default algorithm uses step bounds to force convergence. Step bounds may not be appropriate if dynamic damping is used.
0: Disable
1: Enable
0
slpAlgorithmQuickConvergenceCheck Quick convergence check
Normally, each variable is checked against all convergence criteria until either a criterion is found which it passes, or it is declared "not converged". Later (extended convergence) criteria are more expensive to test and, once an unconverged variable has been found, the overall convergence status of the solution has been established. The quick convergence check carries out checks on the strict criteria, but omits checks on the extended criteria when an unconverged variable has been found.
0: Disable
1: Enable
1
slpAlgorithmResetDeltaZ Reset slpDeltaZ to zero when converged and continue SLP
One of the mechanisms to avoid local optima is to retain small non-zero coefficients between delta vectors and constraints, even when the coefficient should strictly be zero. If this option is set, then a converged solution will be continued with zero coefficients as appropriate.
0: Disable
1: Enable
0
slpAlgorithmResidualErrors Accept a solution which has converged even if there are still significant active penalty error vectors
Normally, the optimization will continue if there are active penalty vectors in the solution. However, it may be that there is no feasible solution (and so active penalties will always be present). Setting this bit means that, if other convergence criteria are met, then the solution will be accepted as converged and the optimization will stop.
0: Disable
1: Enable
0
slpAlgorithmRetainPreviousValue Retain previous value when cascading if determining row is zero
If the determining row is zero (that is, all the coefficients interacting with it are either zero or in columns with a zero activity), then it is impossible to calculate a new value for the vector being cascaded. The choice is to use the solution value as it is, or to revert to the assumed value.
0: Disable
1: Enable
1
slpAlgorithmStepBoundsAsRequired Apply step bounds to SLP delta vectors only when required
Step bounds can be applied to all vectors simultaneously, or applied only when oscillation of the delta vector (change in sign between successive SLP iterations) is detected.
0: Disable
1: Enable
1
slpAlgorithmSwitchToPrimal Use the primal simplex algorithm when all error vectors become inactive
The primal simplex algorithm often performs better than dual during the final stages of SLP optimization when there are relatively few basis changes between successive solutions. As it is impossible to establish in advance when the final stages are being reached, the disappearance of error vectors from the solution is used as a proxy.
0: Disable
1: Enable
0
slpCalcThreads Number of threads used for formula and derivatives evaluations
When beneficial, SLP can calculate formula values and partial derivative information in parallel. When set to -1 (auto), the value of threads is used.
Range: {-1, ..., ∞}
auto
slpFilterKeepBest retain the best solution according to the merit function
0: Disable
1: Enable
1
slpFilterZeroLineSearch force minimum step sizes in line search
0: Disable
1: Enable
0
slpFilterZeroLineSearchTR accept the trust region step if the line search returns a zero step size
0: Disable
1: Enable
0
slpFindIV Option for running a heuristic to find a feasible initial point
The procedure uses bound reduction (and, up to an extent, probing) to obtain a point in the initial bounding box that is feasible for the bound reduction techniques. If an initial point is already specified and is found not to violate bound reduction, then the heuristic is not run and the given point is used as the initial solution.
-1: Automatic (default)
0: Disable the heuristic
1: Enable the heuristic
auto
slpInfinity Value returned by a divide-by-zero in a formula
Range: [0.0, ∞]
1.0e+10
slpKnitroOptFile Option file for NLP solver KNITRO
slpPrimalIntegralRef Reference solution value to take into account when calculating the primal integral
When a global optimum is known, this can be used to calculate a globally valid primal integral. It can also be used to indicate the target objective value still to be taken into account in the integral.
Range: [-∞, ∞]
1.0e+20
slpScale When to re-scale the SLP problem
During the SLP optimization, matrix entries can change considerably in magnitude, even when the formulae in the coefficients are not very nonlinear. Re-scaling of the matrix can reduce numerical errors, but may increase the time taken to achieve convergence.
0: No re-scaling
1: Re-scale every SLP iteration up to slpScaleCount iterations after the end of barrier optimization
2: Re-scale every SLP iteration up to slpScaleCount iterations in total
3: Re-scale every SLP iteration until primal simplex is automatically invoked
4: Re-scale every SLP iteration
5: Re-scale every slpScaleCount SLP iterations
6: Re-scale every slpScaleCount SLP iterations after the end of barrier optimization
1
slpScaleCount Iteration limit used in determining when to re-scale the SLP matrix
If slpScale is set to 1 or 2, then slpScaleCount determines the number of iterations (after the end of barrier optimization or in total) in which the matrix is automatically re-scaled.
Range: {0, ..., ∞}
0
slpSolver Selects the solver used for the nonlinear problem
-1: automatic selection, based on model characteristics and solver availability
0: use Xpress-SLP (always available)
1: use Knitro if available
auto
slpThreads Default number of threads to be used
Overall thread control value, used to determine the number of threads used where parallel calculations are possible. When set to -1 (auto), the value of threads is used.
Range: {-1, ..., ∞}
auto
slpZero Absolute tolerance
If a value is below slpZero in magnitude, then it will be regarded as zero in certain formula calculations: an attempt to divide by such a value will give a "divide by zero" error; an exponent of a negative number will produce a "negative number, fractional exponent" error if the exponent differs from an integer by more than slpZero.
Range: [0.0, ∞]
1.0e-15
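
A hypothetical `xpress.opt` combining some of the general controls above, e.g. for a difficult model where a feasible but not strictly error-free solution is acceptable (values are illustrative):

```
* parallel formula/derivative evaluation on 4 threads;
* accept a converged solution even with residual penalty errors
slpThreads                   4
slpCalcThreads               4
slpAlgorithmResidualErrors   1
```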

## NLP Presolve Options

Option Description Default
slpLinQuadBR Use linear and quadratic constraints and objective function to further reduce bounds on all variables
While bound reduction is effective when performed on nonlinear, nonquadratic constraints and the objective function, it can also be useful to obtain tightened bounds from linear and quadratic constraints, as the corresponding variables may appear in other nonlinear constraints. This option allows for a slightly more expensive bound reduction procedure, with the benefit of further reducing the problem's bounds.
-1: automatic selection
0: disable
1: enable
auto
slpPostsolve This control determines whether postsolving should be performed automatically
0: Do not automatically postsolve
1: Postsolve automatically
0
slpPresolve This control determines whether presolving should be performed prior to starting the main algorithm
The Xpress NonLinear nonlinear presolve (which is carried out once, before augmentation) is independent of the Optimizer presolve (which is carried out during each SLP iteration).
0: Disable SLP presolve
1: Activate SLP presolve
2: Low memory presolve. Original problem is not restored by postsolve and dual solution may not be completely postsolved
1
slpPresolveLevel This control determines the level of changes presolve may carry out on the problem
slpPresolveOpsDomain, ..., slpPresolveOpsSetBounds control the operations carried out in presolve. slpPresolveLevel controls how those operations may change the problem.
1: Individual rows only presolve, no nonlinear transformations.
2: Individual rows and bounds only presolve, no nonlinear transformations.
3: Presolve allowing changing problem dimension, no nonlinear transformations.
4: Full presolve.
4
slpPresolveOpsDomain Bound tightening based on function domains
0: Disable
1: Enable
1
slpPresolveOpsEliminations Allow eliminations on determined variables
0: Disable
1: Enable
1
slpPresolveOpsFixAll Explicitly fix all columns identified as fixed
0: Disable
1: Enable
0
slpPresolveOpsFixZero Explicitly fix columns identified as fixed to zero
0: Disable
1: Enable
0
slpPresolveOpsGeneral Generic SLP presolve
0: Disable
1: Enable
0
slpPresolveOpsIntBounds MISLP bound tightening
0: Disable
1: Enable
1
slpPresolveOpsNoCoefficients Do not presolve coefficients
0: Disable
1: Enable
0
slpPresolveOpsNoDeltas Do not remove delta variables
0: Disable
1: Enable
0
slpPresolveOpsNoDualSide Avoid reductions that can not be dual postsolved
0: Disable
1: Enable
0
slpPresolveOpsSetBounds SLP bound tightening
0: Disable
1: Enable
1
slpPresolvePassLimit Maximum number of passes through the problem to improve SLP bounds
The Xpress NonLinear nonlinear presolve (which is carried out once, before augmentation) is independent of the Optimizer presolve (which is carried out during each SLP iteration). The procedure carries out a number of passes through the SLP problem, seeking to tighten implied bounds or to identify fixed values. slpPresolvePassLimit can be used to change the maximum number of passes carried out.
Range: {0, ..., ∞}
20
slpPresolveZero Minimum absolute value for a variable which is identified as nonzero during SLP presolve
During the SLP (nonlinear) presolve, a variable may be identified as being nonzero (for example, because it is used as a divisor). A bound of plus or minus slpPresolveZero will be applied to the variable if it is identified as non-negative or non-positive.
Range: [0.0, ∞]
1.0e-09
slpProbing This control determines whether probing on a subset of variables should be performed prior to starting the main algorithm. Probing runs bound reduction multiple times in order to further tighten the bounding box
The Xpress NonLinear nonlinear probing, which is carried out once, is independent of the Optimizer presolve (which is carried out during each SLP iteration). The probing level allows for probing on an expanding set of variables, allowing for probing on all variables (level 5) or only those for which probing is more likely to be useful (binary variables).
-1: Automatic
0: Disable SLP probing
1: Activate SLP probing only on binary variables
2: Activate SLP probing only on binary or unbounded integer variables
3: Activate SLP probing only on binary or integer variables
4: Activate SLP probing only on binary, integer variables, and unbounded continuous variables
5: Activate SLP probing on any variable
auto
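
As a sketch, an `xpress.opt` enabling full SLP presolve together with probing on binary variables might look like this (values are illustrative, not recommendations):

```
* full SLP presolve with probing restricted to binary variables
slpPresolve          1
slpPresolveLevel     4
slpProbing           1
slpPresolvePassLimit 30
```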

## NLP Augmentation and Linearization Options

Option Description Default
slpAugmentAllErrorVectors Penalty error vectors on all non-linear inequality constraints
The linearization of a nonlinear constraint is inevitably an approximation and so may not be feasible except at the point of linearization. Adding penalty error vectors allows the linear approximation to be violated at a cost and so ensures that the linearized constraint is feasible.
0: Disable
1: Enable
1
slpAugmentAllRowErrorVectors Penalty error vectors on all constraints
If the linear portion of the underlying model may actually be infeasible, then applying penalty vectors to all rows may allow identification of the infeasibility and may also allow a useful solution to be found.
0: Disable
1: Enable
0
slpAugmentAMeanWeight Use arithmetic means to estimate penalty weights
Penalty weights are estimated from the magnitude of the elements in the constraint or interacting rows. Geometric means are normally used, so that a few excessively large or small values do not distort the weights significantly. Arithmetic means will value the coefficients more equally.
0: Disable
1: Enable
0
slpAugmentEqualityErrorVectors Penalty error vectors on all non-linear equality constraints
The linearization of a nonlinear equality constraint is inevitably an approximation and so will not generally be feasible except at the point of linearization. Adding penalty error vectors allows the linear approximation to be violated at a cost and so ensures that the linearized constraint is feasible.
0: Disable
1: Enable
1
slpAugmentEvenHanded Even handed augmentation
Standard augmentation treats variables which appear in non-constant coefficients in a different way from those which contain non-constant coefficients. Even-handed augmentation treats them all in the same way by replacing each non-constant coefficient C in a vector V by a new coefficient C*V in the "equals" column (which has a fixed activity of 1) and creating delta vectors for all types of variable in the same way.
0: Disable
1: Enable
0
slpAugmentMinimum Minimum augmentation
Standard augmentation includes delta vectors for all variables involved in nonlinear terms (in non-constant coefficients or as vectors containing non-constant coefficients). Minimum augmentation includes delta vectors only for variables in non-constant coefficients. This produces a smaller linearization, but there is less control on convergence, because convergence control (for example, step bounding) cannot be applied to variables without deltas.
0: Disable
1: Enable
0
slpAugmentNoUpdateIfOnlyIV Initial values do not imply an SLP variable
Having an initial value will not by itself cause the augmentation to include the corresponding delta variable, i.e. will not make the variable an SLP variable. Useful for providing initial values needed in the first linearization in case of a minimal augmentation, or as a convenience option when it is easiest to set an initial value for all variables.
0: Disable
1: Enable
0
slpAugmentPenaltyDeltaVectors Penalty vectors to exceed step bounds
Although it has rarely been found necessary or desirable in practice, Xpress-SLP allows step bounds to be violated at a cost. This may help with feasibility but it generally slows down or prevents convergence, so it should be used only if found absolutely necessary.
0: Disable
1: Enable
0
slpAugmentSBFromAbsValues Estimate step bounds from absolute values of row coefficients
If step bounds are to be imposed from the start, the best approach is to provide explicit values for the bounds. Alternatively, Xpress-SLP can estimate the values from the largest estimated magnitude of the coefficients in the relevant rows.
0: Disable
1: Enable
0
slpAugmentSBFromValues Estimate step bounds from values of row coefficients
If step bounds are to be imposed from the start, the best approach is to provide explicit values for the bounds. Alternatively, Xpress-SLP can estimate the values from the range of estimated coefficient sizes in the relevant rows.
0: Disable
1: Enable
0
slpAugmentStepBoundRows Row-based step bounds
Step bounds are normally applied as bounds on the delta variables. Some applications may find that using explicit rows to bound the delta vectors gives better results.
0: Disable
1: Enable
0
slpDeltaX Minimum absolute value of delta coefficients to be retained
If the value of a coefficient in a delta column is less than this value, it will be reset to zero. Larger values of slpDeltaX will result in matrices with fewer elements, which may be easier to solve. However, there will be increased likelihood of local optima as some of the small relationships between variables and constraints are deleted. There may also be increased difficulties with singular bases resulting from deletion of pivot elements from the matrix.
Range: [0.0, ∞]
1.0e-06
slpFeasTolTarget When set, this defines a target feasibility tolerance to which the linearizations are solved
This is a soft version of feasTol, and will dynamically revert back to feasTol if the desired accuracy could not be achieved.
Range: [0.0, ∞]
0.0
slpMatrixTol Provides an override value for matrixTol, which controls the smallest magnitude of matrix coefficents
Any value smaller than slpMatrixTol in magnitude will not be loaded into the linearization. This only applies to the matrix coefficients; bounds, right hand sides and objectives are not affected.
Range: [0.0, ∞]
1.0e-30
slpOptimalityTolTarget When set, this defines a target optimality tolerance to which the linearizations are solved
This is a soft version of optimalityTol, and will dynamically revert back to optimalityTol if the desired accuracy could not be achieved.
Range: [0.0, ∞]
0.0
slpUnfinishedLimit Number of times within one SLP iteration that an unfinished LP optimization will be continued
If the optimization of the current linear approximation terminates with an "unfinished" status (for example, because it has reached maximum iterations), Xpress-SLP will attempt to continue using the primal simplex algorithm. This process will be repeated for up to slpUnfinishedLimit successive LP optimizations within any one SLP iteration. If the limit is reached, Xpress-SLP will terminate.
Range: {0, ..., ∞}
3
slpZeroCriterionCount Number of consecutive times a placeholder entry is zero before being considered for deletion
Range: {0, ..., ∞}
0
slpZeroCriterionDeltaNBDRRow Remove placeholders in a basic delta variable if the determining row for the corresponding SLP variable is nonbasic
0: Disable
1: Enable
0
slpZeroCriterionDeltaNBUpdateRow Remove placeholders in a basic delta variable if its update row is nonbasic and the corresponding SLP variable is nonbasic
0: Disable
1: Enable
0
slpZeroCriterionNBDelta Remove placeholders in nonbasic delta variables
0: Disable
1: Enable
0
slpZeroCriterionNBSLPVar Remove placeholders in nonbasic SLP variables
0: Disable
1: Enable
0
slpZeroCriterionPrint Print information about zero placeholders
0: Disable
1: Enable
0
slpZeroCriterionSLPVarNBUpdateRow Remove placeholders in a basic SLP variable if its update row is nonbasic
0: Disable
1: Enable
0
slpZeroCriterionStart SLP iteration at which criteria for deletion of placeholder entries are first activated
Range: {0, ..., ∞}
0

## NLP Barrier Options

Option Description Default
slpBarCrossOverStart Default crossover activation behaviour for barrier start
When slpBarLimit is set, slpBarCrossOverStart offers an override controlling when crossover is applied. A positive value indicates that crossover should be disabled in iterations smaller than slpBarCrossOverStart and enabled afterwards, or when stalling is detected as described in slpBarStartAllowInteriorSol, ..., slpBarStartStallingObjective. A value of 0 indicates to respect the value of crossover and only override it when stalling is detected. A value of -1 indicates to always rely on the value of crossover.
Range: {0, ..., ∞}
0
slpBarLimit Number of initial SLP iterations using the barrier method
Particularly for larger models, using the Newton barrier method is faster in the earlier SLP iterations. Later on, when the basis information becomes more useful, a simplex method generally performs better. slpBarLimit sets the number of SLP iterations which will be performed using the Newton barrier method.
Range: {0, ..., ∞}
0
slpBarStallingLimit Number of iterations to allow numerical failures in barrier before switching to dual
On large problems, it may be beneficial to warm start progress by running a number of iterations with the barrier solver as specified by slpBarLimit. On some numerically difficult problems, the barrier may stop prematurely due to numerical issues. Such solves can sometimes be finished if crossover is applied. After slpBarStallingLimit such attempts, SLP will automatically switch to use the dual simplex.
Range: {0, ..., ∞}
3
slpBarStallingObjLimit Number of iterations over which to measure the objective change for barrier iterations with no crossover
On large problems, it may be beneficial to warm start progress by running a number of iterations with the barrier solver without crossover by setting slpBarLimit to a positive value and setting crossover to 0. A potential drawback is slower convergence due to the interior point provided by the barrier solve keeping a higher number of variables active. This may lead to stalling in progress, negating the benefit of using the barrier. When in the last slpBarStallingObjLimit iterations no significant progress has been made, crossover is automatically enabled.
Range: {0, ..., ∞}
3
slpBarStallingTol Required change in the objective when progress is measured in barrier iterations without crossover
Minimum objective change required, in relation to control slpBarStallingObjLimit, for the iterations to be regarded as making progress. The net objective, error cost and error sum are taken into account.
Range: [0.0, ∞]
0.05
slpBarStartAllowInteriorSol If a non-vertex converged solution found by barrier without crossover can be returned as a final solution
0: Disable
1: Enable
1
slpBarStartStallingNumerical Fall back to dual simplex if too many numerical problems are reported by the barrier
0: Disable
1: Enable
1
slpBarStartStallingObjective Check objective progress when no crossover is applied
0: Disable
1: Enable
1
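
For a large model, a barrier warm-start phase can be sketched in an `xpress.opt` like this (iteration counts are illustrative):

```
* barrier for the first 10 SLP iterations,
* enabling crossover from iteration 5 onwards
slpBarLimit          10
slpBarCrossOverStart 5
```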

## NLP Penalty Options

Option Description Default
slpDeltaCost Initial penalty cost multiplier for penalty delta vectors
If penalty delta vectors are used, this parameter sets the initial cost factor. If there are active penalty delta vectors, then the penalty cost may be increased.
Range: [0.0, ∞]
200.0
slpDeltaCostFactor Factor for increasing cost multiplier on total penalty delta vectors
If there are active penalty delta vectors, then the penalty cost multiplier will be increased by a factor of slpDeltaCostFactor up to a maximum of slpDeltaMaxCost.
Range: [1.0, ∞]
1.3
slpDeltaMaxCost Maximum penalty cost multiplier for penalty delta vectors
If there are active penalty delta vectors, then the penalty cost multiplier will be increased by a factor of slpDeltaCostFactor up to a maximum of slpDeltaMaxCost.
Range: [0.0, ∞]
1.0e+20
slpEnforceCostShrink Factor by which to decrease the current penalty multiplier when enforcing rows
When feasibility of a row cannot be achieved by increasing the penalty cost on its error variable, removing the variable (fixing it to zero) can force the row to be satisfied; the cost threshold at which this happens is set by slpEnforceMaxCost. After the error variables have been removed (which is equivalent to setting the row to be enforced), the penalties on the remaining error variables are rebalanced to allow for a reduction in the size of the penalties in the objective in order to achieve better numerical behaviour.
Range: [0.0, 1.0]
1.0e-05
slpEnforceMaxCost Maximum penalty cost in the objective before enforcing most violating rows
When feasibility of a row cannot be achieved by increasing the penalty cost on its error variable, removing the variable (fixing it to zero) can force the row to be satisfied. After the error variables have been removed (which is equivalent to setting the row to be enforced), the penalties on the remaining error variables are rebalanced to allow for a reduction in the size of the penalties in the objective in order to achieve better numerical behaviour, controlled by slpEnforceCostShrink.
Range: [0.0, ∞]
1.0e+11
slpErrorCost Initial penalty cost multiplier for penalty error vectors
If penalty error vectors are used, this parameter sets the initial cost factor. If there are active penalty error vectors, then the penalty cost may be increased.
Range: [0.0, ∞]
200.0
slpErrorCostFactor Factor for increasing cost multiplier on total penalty error vectors
If there are active penalty error vectors, then the penalty cost multiplier will be increased by a factor of slpErrorCostFactor up to a maximum of slpErrorMaxCost.
Range: [1.0, ∞]
1.3
slpErrorMaxCost Maximum penalty cost multiplier for penalty error vectors
If there are active penalty error vectors, then the penalty cost multiplier will be increased by a factor of slpErrorCostFactor up to a maximum of slpErrorMaxCost.
Range: [0.0, ∞]
1.0e+20
slpErrorTolA Absolute tolerance for error vectors
The solution will be regarded as having no active error vectors if one of the following applies: every penalty error vector and penalty delta vector has an activity less than slpErrorTolA; the sum of the cost contributions from all the penalty error and penalty delta vectors is less than slpEVTolA; the sum of the cost contributions from all the penalty error and penalty delta vectors is less than slpEVTolR * Obj where Obj is the current objective function value.
Range: [0.0, ∞]
1.0e-05
slpErrorTolP Absolute tolerance for printing error vectors
The solution log includes a print of penalty delta and penalty error vectors with an activity greater than slpErrorTolP.
Range: [0.0, ∞]
1.0e-04
slpEscalation Factor for increasing cost multiplier on individual penalty error vectors
If penalty cost escalation is activated in slpAlgorithmCascadeBounds, ..., slpAlgorithmSwitchToPrimal then the penalty cost multiplier will be increased by a factor of slpEscalation for any active error vector up to a maximum of slpMaxWeight.
Range: [1.0, ∞]
1.2
slpETolA Absolute tolerance on penalty vectors
For each penalty error vector, the contribution to its constraint is calculated, together with the total positive and negative contributions to the constraint from other vectors. If its contribution is less than slpETolA or less than Positive*slpETolR or less than abs(Negative)*slpETolR then it will be regarded as insignificant and will not have its penalty increased. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpETolR Relative tolerance on penalty vectors
See slpETolA. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpEVTolA Absolute tolerance on total penalty costs
The solution will be regarded as having no active error vectors if one of the following applies: every penalty error vector and penalty delta vector has an activity less than slpErrorTolA; the sum of the cost contributions from all the penalty error and penalty delta vectors is less than slpEVTolA; the sum of the cost contributions from all the penalty error and penalty delta vectors is less than slpEVTolR * Obj where Obj is the current objective function value. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-2 and 1e-6, but normally a magnitude larger than slpETolA.
Range: [-∞, ∞]
auto
slpEVTolR Relative tolerance on total penalty costs
See slpEVTolA. Good values for this control usually fall between 1e-2 and 1e-6, but normally a magnitude larger than slpETolR.
Range: [-∞, ∞]
auto
slpGranularity Base for calculating penalty costs
If slpGranularity > 1, then initial penalty costs will be powers of slpGranularity.
Range: [1.0, ∞]
4.0
slpMaxWeight Maximum penalty weight for delta or error vectors
When penalty vectors are created, or when their weight is increased by escalation, the maximum weight that will be used is given by slpMaxWeight.
Range: [0.0, ∞]
100.0
slpMinWeight Minimum penalty weight for delta or error vectors
When penalty vectors are created, the minimum weight that will be used is given by slpMinWeight.
Range: [0.0, ∞]
0.01
slpObjToPenaltyCost Factor to estimate initial penalty costs from objective function
The setting of initial penalty error costs can affect the path of the optimization and, indeed, whether a solution is achieved at all. If the penalty costs are too low, then unbounded solutions may result although Xpress-SLP will increase the costs in an attempt to recover. If the penalty costs are too high, then the requirement to achieve feasibility of the linearized constraints may be too strong to allow the system to explore the nonlinear feasible region. Low penalty costs can result in many SLP iterations, as feasibility of the nonlinear constraints is not achieved until the penalty costs become high enough; high penalty costs force feasibility of the linearizations, and so tend to find local optima close to an initial feasible point. Xpress-SLP can analyze the problem to estimate the size of penalty costs required to avoid an initial unbounded solution. slpObjToPenaltyCost can be used in conjunction with this procedure to scale the costs and give an appropriate initial value for balancing the requirements of feasibility and optimality. Not all models are amenable to the Xpress-SLP analysis. As the analysis is initially concerned with establishing a cost level to avoid unboundedness, a model which is sufficiently constrained will never show unboundedness regardless of the cost. Also, as the analysis is done at the start of the optimization to establish a penalty cost, significant changes in the coefficients, or a high degree of nonlinearity, may invalidate the initial analysis. A setting for slpObjToPenaltyCost of zero disables the analysis. A setting of 3 or 4 has proved successful for many models. If slpObjToPenaltyCost cannot be used because of the problem structure, its effect can still be emulated by some initial experiments to establish the cost required to avoid unboundedness, and then manually applying a suitable factor. 
If the problem is initially unbounded, then the penalty cost will be increased until either it reaches its maximum or the problem becomes bounded.
Range: [0.0, ∞]
0.0
slpPenaltyInfoStart Iteration from which to record row penalty information
Information about the size (current and total) of active penalties of each row and the number of times a penalty vector has been active is recorded starting at the SLP iteration number given by slpPenaltyInfoStart.
Range: {0, ..., ∞}
3
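To tie the penalty controls above together, here is a hedged option-file sketch. The values are illustrative only; the setting of 3 for slpObjToPenaltyCost is one of the values the text above reports as successful for many models:

```
* xpress.opt -- illustrative penalty-cost settings
slpErrorCost        500.0
slpErrorCostFactor  1.5
slpErrorMaxCost     1.0e+15
slpObjToPenaltyCost 3
```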

## NLP Step Bounds Options

Option Description Default
slpClampShrink Shrink ratio used to impose strict convergence on variables converged in extended criteria only
If the solution has converged but there are variables converged on extended criteria only, slpClampShrink acts as a shrinking ratio on the step bounds and the problem is re-optimized (if necessary multiple times), with the purpose of expediting strict convergence on all variables. slpAlgorithmCascadeBounds, ..., slpAlgorithmSwitchToPrimal controls whether this shrinking is applied at all, and whether it is applied only to the variables converged on extended criteria with active step bounds, or to all variables.
Range: [0.0, 1.0]
0.3
slpClampValidationTolA Absolute validation tolerance for applying slpClampShrink
If set and the absolute validation value is larger than this value, then control slpClampShrink is checked once the solution has converged, but there are variables converged on extended criteria only.
Range: [0.0, ∞]
1.0e-06
slpClampValidationTolR Relative validation tolerance for applying slpClampShrink
If set and the relative validation value is larger than this value, then control slpClampShrink is checked once the solution has converged, but there are variables converged on extended criteria only.
Range: [0.0, ∞]
1.0e-06
slpDefaultStepBound Minimum initial value for the step bound of an SLP variable if none is explicitly given
If no initial step bound value is given for an SLP variable, this will be used as a minimum value. If the algorithm is estimating step bounds, then the step bound actually used for a variable may be larger than the default. A default initial step bound is ignored when testing for the closure tolerance slpCTol: if there is no specific value, then the test will not be applied.
Range: [0.0, ∞]
16.0
slpDJTol Tolerance on DJ value for determining if a variable is at its step bound
If a variable is at its step bound and within the absolute delta tolerance slpATolA or closure tolerance slpCTol then the step bounds will not be further reduced. If the DJ is greater in magnitude than slpDJTol then the step bound may be relaxed if it meets the necessary criteria.
Range: [0.0, ∞]
1.0e-06
slpExpand Multiplier to increase a step bound
If step bounding is enabled, the step bound for a variable will be increased if successive changes are in the same direction. More precisely, if there are slpSameCount successive changes reaching the step bound and in the same direction for a variable, then the step bound (B) for the variable will be reset to B*slpExpand.
Range: [1.0, ∞]
2.0
slpMinSBFactor Factor by which step bounds can be decreased beneath slpATolA
Normally, step bounds are not decreased beneath slpATolA, as such variables are treated as converged. However, it may be beneficial to decrease step bounds further, as individual variable value changes might affect the convergence of other variables in the model, even if the variable itself is deemed converged.
Range: [0.0, ∞]
1.0
slpSameCount Number of steps reaching the step bound in the same direction before step bounds are increased
If step bounding is enabled, the step bound for a variable will be increased if successive changes are in the same direction. More precisely, if there are slpSameCount successive changes reaching the step bound and in the same direction for a variable, then the step bound (B) for the variable will be reset to B*slpExpand.
Range: {0, ..., ∞}
3
slpSBStart SLP iteration after which step bounds are first applied
If step bounds are used, they can be applied for the whole of the SLP optimization process, or started after a number of SLP iterations. In general, it is better not to apply step bounds from the start unless one of the following applies: (1) the initial estimates are known to be good, and explicit values can be provided for initial step bounds on all variables; or (2) the problem is unbounded unless all variables are step-bounded.
Range: {0, ..., ∞}
8
slpShrink Multiplier to reduce a step bound
If step bounding is enabled, the step bound for a variable will be decreased if successive changes are in opposite directions. The step bound (B) for the variable will be reset to B*slpShrink. If the step bound is already below the strict (delta or closure) tolerances, it will not be reduced further.
Range: [0.0, 1.0]
0.5
slpShrinkBias Defines an overwrite / adjustment of step bounds for improving iterations
Positive values overwrite slpShrink only if the objective is improving. A negative value is used to scale all step bounds in improving iterations.
Range: [-∞, ∞]
0.0
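The expand/shrink mechanics of slpExpand, slpShrink and slpSameCount can be summarized in a small Python sketch. This is a simplified illustration of the rules described above, not XPRESS's actual implementation; the defaults mirror the documented option defaults:

```python
def update_step_bound(bound, same_direction_hits, direction_flipped,
                      expand=2.0, shrink=0.5, same_count=3, atol=1e-5):
    """Simplified sketch of the trust-region step-bound update described by
    slpExpand, slpShrink and slpSameCount (not the actual XPRESS code)."""
    if direction_flipped:
        # Successive changes in opposite directions: shrink the bound, but
        # not below the strict tolerance, where the variable is treated as
        # converged and the bound is not reduced further.
        return max(bound * shrink, atol)
    if same_direction_hits >= same_count:
        # slpSameCount successive steps reaching the bound in the same
        # direction: relax the bound by the slpExpand multiplier.
        return bound * expand
    return bound
```

For example, starting from the default initial bound of 16.0, three same-direction steps at the bound double it to 32.0, while a direction flip halves it to 8.0.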

## NLP Variable Update Options

Option Description Default
slpDamp Damping factor for updating values of variables
The damping factor sets the next assumed value for a variable based on the previous assumed value (X0) and the actual value (X1). The new assumed value is given by X1*slpDamp + X0*(1-slpDamp).
Range: [0.0, 1.0]
1.0
slpDampExpand Multiplier to increase damping factor during dynamic damping
If dynamic damping is enabled, the damping factor for a variable will be increased if successive changes are in the same direction. More precisely, if there are slpSameDamp successive changes in the same direction for a variable, then the damping factor (D) for the variable will be reset to D*slpDampExpand + slpDampMax*(1-slpDampExpand).
Range: [0.0, 1.0]
1.0
slpDampMax Maximum value for the damping factor of a variable during dynamic damping
If dynamic damping is enabled, the damping factor for a variable will be increased if successive changes are in the same direction. More precisely, if there are slpSameDamp successive changes in the same direction for a variable, then the damping factor (D) for the variable will be reset to D*slpDampExpand + slpDampMax*(1-slpDampExpand).
Range: [0.0, 1.0]
1.0
slpDampMin Minimum value for the damping factor of a variable during dynamic damping
If dynamic damping is enabled, the damping factor for a variable will be decreased if successive changes are in the opposite direction. More precisely, the damping factor (D) for the variable will be reset to D*slpDampShrink + slpDampMin*(1-slpDampShrink).
Range: [0.0, 1.0]
1.0
slpDampShrink Multiplier to decrease damping factor during dynamic damping
If dynamic damping is enabled, the damping factor for a variable will be decreased if successive changes are in the opposite direction. More precisely, the damping factor (D) for the variable will be reset to D*slpDampShrink + slpDampMin*(1-slpDampShrink).
Range: [0.0, 1.0]
1.0
slpDampStart SLP iteration at which damping is activated
If damping is used as part of the SLP algorithm, it can be delayed until a specified SLP iteration. This may be appropriate when damping is used to encourage convergence after an un-damped algorithm has failed to converge.
Range: {0, ..., ∞}
0
slpLSIterLimit Number of iterations in the line search
The line search attempts to refine the step size suggested by the trust region step bounds. The line search is a local method; the control sets a maximum on the number of model evaluations during the line search.
Range: {0, ..., ∞}
0
slpLSPatternLimit Number of iterations in the pattern search preceding the line search
When positive, defines the number of samples taken along the step size suggested by the trust region step bounds before initiating the line search. Useful for highly non-convex problems.
Range: {0, ..., ∞}
0
slpLSStart Iteration in which to activate the line search
Range: {0, ..., ∞}
8
slpLSZeroLimit Maximum number of zero length line search steps before line search is deactivated
When the line search repeatedly returns a zero step size (unless counteracted by bits set on slpFilterKeepBest, ..., slpFilterZeroLineSearchTR), the effort spent in the line search is redundant, and the line search will be deactivated after slpLSZeroLimit consecutive such iterations.
Range: {0, ..., ∞}
5
slpMeritLambda Factor by which the net objective is taken into account in the merit function
The merit function is evaluated in the original, non-augmented / linearized space of the problem. A solution is deemed improved if feasibility has improved, or if feasibility has not deteriorated and the net objective has improved, or if the combination of the two has improved, where the value of the slpMeritLambda control is used to combine the two measures. A nonpositive value indicates that the combined effect should not be checked.
Range: [0.0, ∞]
0.0
slpSameDamp Number of steps in same direction before damping factor is increased
If dynamic damping is enabled, the damping factor for a variable will be increased if successive changes are in the same direction. More precisely, if there are slpSameDamp successive changes in the same direction for a variable, then the damping factor (D) for the variable will be reset to D*slpDampExpand + slpDampMax*(1-slpDampExpand).
Range: {0, ..., ∞}
3
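The damping formulas above can be illustrated with a short Python sketch. This reproduces only the documented formulas, not XPRESS code:

```python
def damped_value(x0, x1, damp):
    """Next assumed value per the slpDamp formula: X1*damp + X0*(1-damp)."""
    return x1 * damp + x0 * (1 - damp)

def expanded_damping(d, damp_expand, damp_max):
    """Dynamic damping increase applied after slpSameDamp same-direction
    changes: D*slpDampExpand + slpDampMax*(1-slpDampExpand)."""
    return d * damp_expand + damp_max * (1 - damp_expand)
```

With the default slpDamp of 1.0, damped_value simply returns the actual value X1; smaller factors pull the next assumed value back toward the previous one.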

## NLP Termination Options

Option Description Default
slpATolA Absolute delta convergence tolerance
The absolute delta convergence criterion assesses the change in value of a variable (δX) against the absolute delta convergence tolerance. If δX < slpATolA then the variable has converged on the absolute delta convergence criterion. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpATolR Relative delta convergence tolerance
The relative delta convergence criterion assesses the change in value of a variable (δX) relative to the value of the variable (X), against the relative delta convergence tolerance. If δX < X * slpATolR then the variable has converged on the relative delta convergence criterion. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpConvergeATol Execute the delta tolerance checks
0: Disable
1: Enable
1
slpConvergeCTol Execute the closure tolerance checks
0: Disable
1: Enable
1
slpConvergeExtendedScaling Take scaling of individual variables / rows into account
0: Disable
1: Enable
0
slpConvergeITol Execute the impact tolerance checks
0: Disable
1: Enable
1
slpConvergeMTol Execute the matrix tolerance checks
0: Disable
1: Enable
1
slpConvergeOTol Execute the objective range + active step bound check
0: Disable
1: Enable
1
slpConvergeSTol Execute the slack impact tolerance checks
0: Disable
1: Enable
1
slpConvergeValidation Execute the validation target convergence checks
0: Disable
1: Enable
1
slpConvergeValidationK Execute the first order optimality target convergence checks
0: Disable
1: Enable
1
slpConvergeVTol Execute the objective range checks
0: Disable
1: Enable
1
slpConvergeWTol Execute the convergence continuation check
0: Disable
1: Enable
1
slpConvergeXTol Execute the objective range + constraint activity check
0: Disable
1: Enable
1
slpCTol Closure convergence tolerance
The closure convergence criterion measures the change in value of a variable (δX) relative to the value of its initial step bound (B), against the closure convergence tolerance. If δX < B * slpCTol then the variable has converged on the closure convergence criterion. If no explicit initial step bound is provided, then the test will not be applied and the variable can never converge on the closure criterion. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpECFCheck Check feasibility at the point of linearization for extended convergence criteria
The extended convergence criteria measure the accuracy of the solution of the linear approximation compared to the solution of the original nonlinear problem. For this to work, the linear approximation needs to be reasonably good at the point of linearization. In particular, it needs to be reasonably close to feasibility. slpECFCheck is used to determine what checking of feasibility is carried out at the point of linearization. If the point of linearization at the start of an SLP iteration is deemed to be infeasible, then the extended convergence criteria are not used to decide convergence at the end of that SLP iteration. If all that is required is to decide that the point of linearization is not feasible, then the search can stop after the first infeasible constraint is found (parameter is set to 1). If the actual number of infeasible constraints is required, then slpECFCheck should be set to 2, and all constraints will be checked.
0: no check (extended criteria are always used);
1: check until one infeasible constraint is found;
2: check all constraints
1
slpECFTolA Absolute tolerance on testing feasibility at the point of linearization
The extended convergence criteria test how well the linearization approximates the true problem. They depend on the point of linearization being a reasonable approximation; in particular, it should be reasonably close to feasibility. Each constraint is tested at the point of linearization, and the total positive and negative contributions to the constraint from the columns in the problem are calculated. A feasibility tolerance is calculated as the largest of slpECFTolA and max(abs(Positive), abs(Negative)) * slpECFTolR. If the calculated infeasibility is greater than the tolerance, the point of linearization is regarded as infeasible and the extended convergence criteria will not be applied. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-1 and 1e-6.
Range: [-∞, ∞]
auto
slpECFTolR Relative tolerance on testing feasibility at the point of linearization
See slpECFTolA. Good values for this control usually fall between 1e-1 and 1e-6.
Range: [-∞, ∞]
auto
slpInfeasLimit The maximum number of consecutive infeasible SLP iterations which can occur before Xpress-SLP terminates
An infeasible solution to an SLP iteration means that it is likely that Xpress-SLP will create a poor linear approximation for the next SLP iteration. Sometimes, small infeasibilities arise because of numerical difficulties and do not seriously affect the solution process. However, if successive solutions remain infeasible, it is unlikely that Xpress-SLP will be able to find a feasible converged solution. slpInfeasLimit sets the number of successive infeasible SLP iterations which must take place before Xpress-SLP terminates with a status of "infeasible solution".
Range: {0, ..., ∞}
3
slpIterLimit The maximum number of SLP iterations
If Xpress-SLP reaches slpIterLimit without finding a converged solution, it will stop. For MISLP, the limit is on the number of SLP iterations at each node.
Range: {0, ..., ∞}
1000
slpITolA Absolute impact convergence tolerance
The absolute impact convergence criterion assesses the change in the effect of a coefficient in a constraint. The effect of a coefficient is its value multiplied by the activity of the column in which it appears: E = X * C.
Range: [-∞, ∞]
auto
slpITolR Relative impact convergence tolerance
The relative impact convergence criterion assesses the change in the effect of a coefficient in a constraint in relation to the magnitude of the constituents of the constraint. The effect of a coefficient is its value multiplied by the activity of the column in which it appears: E = X * C.
Range: [-∞, ∞]
auto
slpMTolA Absolute effective matrix element convergence tolerance
The absolute effective matrix element convergence criterion assesses the change in the effect of a coefficient in a constraint. The effect of a coefficient is its value multiplied by the activity of the column in which it appears: E = X * C.
Range: [-∞, ∞]
auto
slpMTolR Relative effective matrix element convergence tolerance
The relative effective matrix element convergence criterion assesses the change in the effect of a coefficient in a constraint relative to the magnitude of the coefficient. The effect of a coefficient is its value multiplied by the activity of the column in which it appears: E = X * C.
Range: [-∞, ∞]
auto
slpMVTol Marginal value tolerance for determining if a constraint is slack
If the absolute value of the marginal value of a constraint is less than slpMVTol, then (1) the constraint is regarded as not constraining for the purposes of the slack tolerance convergence criteria; (2) the constraint is not regarded as an active constraint when identifying unconverged variables in active constraints. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpOCount Number of SLP iterations over which to measure objective function variation for static objective (2) convergence criterion
The static objective convergence criterion does not measure convergence of individual variables. Instead, it measures the significance of the changes in the objective function over recent SLP iterations. It is applied when all the variables interacting with active constraints (those that have a marginal value of at least slpMVTol) have converged. The rationale is that if the remaining unconverged variables are not involved in active constraints and if the objective function is not changing significantly between iterations, then the solution is more-or-less practical. The variation in the objective function is defined as δObj = MAXIter(Obj) - MINIter(Obj).
Range: {0, ..., ∞}
5
slpOTolA Absolute static objective (2) convergence tolerance
The static objective convergence criterion does not measure convergence of individual variables. Instead, it measures the significance of the changes in the objective function over recent SLP iterations. It is applied when all the variables interacting with active constraints (those that have a marginal value of at least slpMVTol) have converged. The rationale is that if the remaining unconverged variables are not involved in active constraints and if the objective function is not changing significantly between iterations, then the solution is more-or-less practical. The variation in the objective function is defined as δObj = MAXIter(Obj) - MINIter(Obj).
Range: [-∞, ∞]
auto
slpOTolR Relative static objective (2) convergence tolerance
See slpOTolA.
Range: [-∞, ∞]
auto
slpSTolA Absolute slack convergence tolerance
The slack convergence criterion is identical to the impact convergence criterion, except that the tolerances used are slpSTolA (instead of slpITolA) and slpSTolR (instead of slpITolR). See slpITolA for a description of the test. When the value is set to be negative, the value is adjusted automatically by SLP, based on the feasibility target slpValidationTargetR. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpSTolR Relative slack convergence tolerance
See slpSTolA. Good values for this control usually fall between 1e-3 and 1e-6.
Range: [-∞, ∞]
auto
slpStopOutOfRange Stop optimization and return error code if internal function argument is out of range
If slpStopOutOfRange is set to 1, then if an internal function receives an argument which is out of its allowable range (for example, LOG of a negative number), an error code is set and the optimization is terminated.
Range: {0, ..., 1}
0
slpValidationTargetK Optimality target tolerance
Primary optimality control for SLP. When the relevant optimality-based convergence controls are left at their default values, SLP will adjust their values to match the target. The control defines a target value that may not necessarily be attainable for problems without strong constraint qualifications.
Range: [0.0, ∞]
1.0e-06
slpValidationTargetR Feasibility target tolerance
Primary feasibility control for SLP. When the relevant feasibility-based convergence controls are left at their default values, SLP will adjust their values to match the target. The control defines a target value that may not necessarily be attainable.
Range: [0.0, ∞]
1.0e-06
slpVCount Number of SLP iterations over which to measure static objective (3) convergence
The static objective convergence criterion does not measure convergence of individual variables, and in fact does not in any way imply that the solution has converged. However, it is sometimes useful to be able to terminate an optimization once the objective function appears to have stabilized. One example is where a set of possible schedules are being evaluated and initially only a good estimate of the likely objective function value is required, to eliminate the worst candidates. The variation in the objective function is defined as δObj = MAXIter(Obj) - MINIter(Obj).
Range: {0, ..., ∞}
0
slpVLimit Number of SLP iterations after which static objective (3) convergence testing starts
Range: {0, ..., ∞}
0
slpVTolA Absolute static objective (3) convergence tolerance
Range: [-∞, ∞]
auto
slpVTolR Relative static objective (3) convergence tolerance
Range: [-∞, ∞]
auto
slpWCount Number of SLP iterations over which to measure the objective for the extended convergence continuation criterion
It may happen that all the variables have converged, but some have converged on extended criteria and at least one of these variables is at its step bound. This means that, at least in the linearization, if the variable were to be allowed to move further the objective function would improve. This does not necessarily imply that the same is true of the original problem, but it is still possible that an improved result could be obtained by taking another SLP iteration. The extended convergence continuation criterion is applied after a converged solution has been found where at least one variable has converged on extended criteria and is at its step bound limit. The extended convergence continuation test measures whether any improvement is being achieved when additional SLP iterations are carried out. If not, then the last converged solution will be restored and the optimization will stop. For a maximization problem, the improvement in the objective function at the current iteration compared to the objective function at the last converged solution is given by: δObj = Obj - LastConvergedObj. For a minimization problem, the sign is reversed. If δObj > slpWTolA and δObj > ABS(ConvergedObj) * slpWTolR then the solution is deemed to have a significantly better objective function value than the converged solution.
When a solution is found which converges on extended criteria and with active step bounds, the solution is saved and SLP optimization continues until one of the following: (1) a new solution is found which converges on some other criterion, in which case the SLP optimization stops with this new solution; (2) a new solution is found which converges on extended criteria and with active step bounds, and which has a significantly better objective function, in which case this is taken as the new saved solution; (3) none of the slpWCount most recent SLP iterations has a significantly better objective function than the saved solution, in which case the saved solution is restored and the SLP optimization stops. If slpWCount is zero, then the extended convergence continuation criterion is disabled.
Range: {0, ..., ∞}
0
slpWTolA Absolute extended convergence continuation tolerance
See slpWCount. When the value is set to be negative, the value is adjusted automatically by SLP, based on the optimality target slpValidationTargetK. Good values for this control usually fall between 1e-4 and 1e-6.
Range: [-∞, ∞]
auto
slpWTolR Relative extended convergence continuation tolerance
See slpWCount. When the value is set to be negative, the value is adjusted automatically by SLP, based on the optimality target slpValidationTargetK. Good values for this control usually fall between 1e-4 and 1e-6.
Range: [-∞, ∞]
auto
slpXCount Number of SLP iterations over which to measure static objective (1) convergence
It may happen that all the variables have converged, but some have converged on extended criteria and at least one of these variables is at its step bound. This means that, at least in the linearization, the objective function would improve if the variable were allowed to move further. This does not necessarily imply that the same is true of the original problem, but it is still possible that an improved result could be obtained by taking another SLP iteration. However, if the objective function has already been stable for several SLP iterations, then an improved result is less likely, and the converged solution can be accepted.

The static objective function (1) test measures the significance of the changes in the objective function over recent SLP iterations. It is applied when all the variables have converged, but some have converged on extended criteria and at least one of these variables is at its step bound. Because all the variables have converged, the solution is already converged, but the fact that some variables are at their step bound limit suggests that the objective function could be improved by going further. The variation in the objective function is defined as

δObj = MAXIter(Obj) - MINIter(Obj)

where Iter ranges over the slpXCount most recent SLP iterations and Obj is the corresponding objective function value. If ABS(δObj) ≤ slpXTolA, then the objective function is deemed static according to the absolute static objective function (1) criterion. If ABS(δObj) ≤ AVGIter(Obj) * slpXTolR, then the objective function is deemed static according to the relative static objective function (1) criterion. If the objective function passes either test, the solution is deemed to have converged.

The static objective function (1) test is applied only until slpXLimit SLP iterations have taken place. After that, if all the variables have converged on strict or extended criteria, the solution is deemed to have converged.
Range: {0, ..., ∞}
5
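The static objective function (1) test can be sketched as follows; names and default tolerances are illustrative only:

```python
def objective_is_static(recent_objs, xtol_a=1e-5, xtol_r=1e-5):
    """Sketch of the static objective function (1) test over the
    slpXCount most recent SLP iterations (illustrative defaults)."""
    d_obj = max(recent_objs) - min(recent_objs)   # MAX(Obj) - MIN(Obj)
    avg = sum(recent_objs) / len(recent_objs)     # AVG(Obj)
    # Static if either the absolute or the relative criterion holds.
    return abs(d_obj) <= xtol_a or abs(d_obj) <= avg * xtol_r
```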
slpXLimit Number of SLP iterations up to which static objective (1) convergence testing is applied
See slpXCount.
Range: {0, ..., ∞}
100
slpXTolA Absolute static objective function (1) tolerance
See slpXCount. When the value is set to a negative number, it is adjusted automatically by SLP, based on the optimality target slpValidationTargetK. Good values for the control usually fall between 1e-4 and 1e-6.
Range: [-∞, ∞]
auto
slpXTolR Relative static objective function (1) tolerance
See slpXCount. When the value is set to a negative number, it is adjusted automatically by SLP, based on the optimality target slpValidationTargetK. Good values for the control usually fall between 1e-4 and 1e-6.
Range: [-∞, ∞]
auto

## NLP Multistart Options

Option Description Default
slpMSMaxBoundRange Defines the maximum range inside which initial points are generated by multistart presets
This is the maximum range in which initial points are generated; the actual range is expected to be smaller, as variable bounds and domains are also considered.
Range: [0.0, ∞]
1000.0
slpMultistartMaxSolves The maximum number of jobs to create during the multistart search
Range: {-1, ..., ∞}
auto
slpMultistartMaxTime The maximum total time to be spent in the multistart search
Xpress-SLP_MAXTIME applies on a per-job basis. Some time will be spent even after slpMultistartMaxTime has elapsed, while the running jobs are terminated and their results collected.
Range: {0, ..., ∞}
0
slpMultistartPoolsize The maximum number of problem objects allowed to pool up before synchronization in the deterministic multistart
Deterministic multistart is ensured by guaranteeing that the multistart solve results are evaluated in the same order every time. Solves that finish too soon can be pooled until all earlier started solves finish, allowing the system to start solving other multistart instances in the meantime on idle threads. Larger pool sizes will provide better speedups, but will require larger amounts of memory. Positive values are interpreted as a multiplier on the maximum number of active threads used, while negative values are interpreted as an absolute limit (and the absolute value is used). A value of zero will mean no result pooling.
Range: {0, ..., ∞}
2
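The pool-size interpretation described above can be sketched as follows (a hypothetical helper, not part of the Xpress API):

```python
def effective_pool_size(poolsize, active_threads):
    """Sketch of how slpMultistartPoolsize is interpreted, per the
    description above: positive values multiply the number of active
    threads, negative values are an absolute limit, zero disables
    result pooling."""
    if poolsize > 0:
        return poolsize * active_threads
    return abs(poolsize)
```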
slpMultistartPreset Enable multistart
0: Disable multistart preset.
1: Generate slpMultistartMaxSolves number of random base points.
2: Generate slpMultistartMaxSolves number of random base points, filtered by a merit function centred on initial feasibility.
3: Load the most typical SLP tuning settings. A maximum of slpMultistartMaxSolves jobs are loaded.
4: Load a comprehensive set of SLP tuning settings. A maximum of slpMultistartMaxSolves jobs are loaded.
0
slpMultistartSeed Random seed used for the automatic generation of initial point when loading multistart presets
Range: {-∞, ..., ∞}
0
slpMultistartThreads The maximum number of threads to be used in multistart
The current hard upper limit on the number of threads used in multistart is 64. When set to -1 (auto), the value of the threads option is used.
Range: {-1, ..., ∞}
auto

## NLP Derivative Options

Option Description Default
slpCDTolA Absolute tolerance for deducing constant derivatives
The absolute tolerance test for constant derivatives is used as follows: If the value of the user function at point X0 is Y0 and the values at (X0-δX) and (X0+δX) are Yd and Yu respectively, then the numerical derivatives at X0 are: "down" derivative Dd = (Y0 - Yd) / δX "up" derivative Du = (Yu - Y0) / δX If abs(Dd-Du) ≤ slpCDTolA then the derivative is regarded as constant.
Range: [0.0, ∞]
1.0e-08
slpCDTolR Relative tolerance for deducing constant derivatives
See slpCDTolA. If abs(Dd-Du) ≤ slpCDTolR * abs(Yd+Yu)/2 then the derivative is regarded as constant.
Range: [0.0, ∞]
1.0e-08
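The constant-derivative tests for slpCDTolA and slpCDTolR can be sketched together; the function name and defaults are illustrative only:

```python
def derivative_is_constant(f, x0, dx, cdtol_a=1e-8, cdtol_r=1e-8):
    """Sketch of the constant-derivative test described above, using
    the "down" and "up" finite differences (illustrative defaults)."""
    y0, yd, yu = f(x0), f(x0 - dx), f(x0 + dx)
    d_down = (y0 - yd) / dx   # "down" derivative Dd
    d_up = (yu - y0) / dx     # "up" derivative Du
    # Constant if either the absolute or the relative criterion holds.
    absolute = abs(d_down - d_up) <= cdtol_a
    relative = abs(d_down - d_up) <= cdtol_r * abs(yd + yu) / 2
    return absolute or relative
```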
slpDeltaA Absolute perturbation of values for calculating numerical derivatives
First-order derivatives are calculated by perturbing the value of each variable in turn by a small amount. The amount is determined by the absolute and relative delta factors as follows: slpDeltaA + abs(X)*slpDeltaR, where X is the current value of the variable. If the perturbation takes the variable outside a bound, then the perturbation is normally made only in the opposite direction.
Range: [0.0, ∞]
0.001
slpDeltaR Relative perturbation of values for calculating numerical derivatives
See slpDeltaA.
Range: [0.0, ∞]
0.001
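The perturbation rule above can be sketched as follows; the bound handling shown is an assumption based on the description, not the exact Xpress-SLP behavior:

```python
def perturbed_points(x, lb, ub, delta_a=0.001, delta_r=0.001):
    """Sketch of the finite-difference perturbation described above:
    delta = slpDeltaA + abs(X) * slpDeltaR, perturbing only away from a
    bound when the step would cross it (assumed handling)."""
    delta = delta_a + abs(x) * delta_r
    # If a step would violate a bound, perturb in that direction by 0,
    # i.e. use the opposite direction only.
    down = x - delta if x - delta >= lb else x
    up = x + delta if x + delta <= ub else x
    return down, up
```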
slpDeltaZ Tolerance used when calculating derivatives
If the absolute value of a variable is less than this value, then a value of slpDeltaZ will be used instead for calculating derivatives. If a nonzero derivative is calculated for a formula which always results in a matrix coefficient less than slpDeltaZ, then a larger value will be substituted so that at least one of the coefficients is slpDeltaZ in magnitude. If slpDeltaZLimit is set to a positive number, then when that number of iterations have passed, values smaller than slpDeltaZ will be set to zero.
Range: [0.0, ∞]
1.0e-05
slpDeltaZero Absolute zero acceptance tolerance used when calculating derivatives
Provides an override value for the slpDeltaZ behavior. Derivatives smaller than slpDeltaZero will not be substituted by slpDeltaZ, defining a range in which derivatives are deemed nonzero and are affected by slpDeltaZ. A negative value means that this tolerance will not be applied.
Range: [-∞, ∞]
-1.0
slpDeltaZLimit Number of SLP iterations during which to apply slpDeltaZ
slpDeltaZ is used to retain small derivatives which would otherwise be regarded as zero. This is helpful in avoiding local optima, but may make the linearized problem more difficult to solve because of the number of small nonzero elements in the resulting matrix. slpDeltaZLimit can be set to a nonzero value, which is then the number of iterations for which slpDeltaZ will be used. After that, small derivatives will be set to zero. A negative value indicates no automatic perturbations to the derivatives in any situation.
Range: {0, ..., ∞}
0
slpDerivatives Bitmap describing the method of calculating derivatives
If no bits are set then numerical derivatives are calculated using finite differences. Analytic derivatives cannot be used for formulae involving discontinuous functions. They may not work well with functions which are not smooth (such as MAX), or where the derivative changes very quickly with the value of the variable (such as LOG of small values). Both first and second order analytic derivatives can either be calculated as symbolic formulas, or by the means of auto-differentiation, with the exception that the second order symbolic derivatives require that the first order derivatives are also calculated using the symbolic method.
0: analytic derivatives where possible
1: avoid embedding numerical derivatives of instantiated functions into analytic derivatives
1
slpHessian Second order differentiation mode when using analytical derivatives
-1: automatic
0: automatic
1: numerical derivatives (finite difference)
2: symbolic differentiation
3: automatic differentiation
auto
slpJacobian First order differentiation mode when using analytical derivatives
-1: automatic
0: automatic
1: numerical derivatives (finite difference)
2: symbolic differentiation
3: automatic differentiation
auto

## NLP Log Options

Option Description Default
slpAnalyzeAutosavePool Save the solutions collected in the pool to disk
0: Disable
1: Enable
0
slpAnalyzeExtendedFinalSummary Include an extended iteration summary
0: Disable
1: Enable
0
slpAnalyzeInfeasibleIteration Run infeasibility analysis on infeasible iterations
0: Disable
1: Enable
0
slpAnalyzeRecordLinearization Add solutions of the linearizations to the solution pool
0: Disable
1: Enable
0
slpAnalyzeRecordLinesearch Add line search solutions to the solution pool
0: Disable
1: Enable
0
slpAnalyzeSaveFile Create an Xpress-SLP save file at every slpAutosave iterations
0: Disable
1: Enable
0
slpAnalyzeSaveIterBasis Write the initial basis of the linearizations to disk at every slpAutosave iterations
0: Disable
1: Enable
0
slpAnalyzeSaveLinearizations Write the linearizations to disk at every slpAutosave iterations
0: Disable
1: Enable
0
slpAutosave Frequency with which to save the model
A value of zero means that the model will not automatically be saved. A positive value of n will save model information at every nth SLP iteration as requested by slpAnalyzeAutosavePool, ..., slpAnalyzeSaveLinearizations.
Range: {0, ..., ∞}
0
slpLog Level of printing during SLP iterations
-1: none
0: minimal
1: normal: iteration, penalty vectors
2: omit from convergence log any variables which have converged
3: omit from convergence log any variables which have already converged (except variables on step bounds)
4: include all variables in convergence log
5: include user function call communications in the log
0
slpLogFreq Frequency with which SLP status is printed
If slpLog is set to zero (minimal logging) then a nonzero value for slpLogFreq defines the frequency (in SLP iterations) when summary information is printed out.
Range: {0, ..., ∞}
1
slpTimePrint Print additional timings during SLP optimization
Date and time printing can be useful for identifying slow procedures during the SLP optimization. Setting slpTimePrint to 1 prints times at additional points during the optimization.
Range: {0, ..., 1}
0

## MINLP Options

Option Description Default
mislpAlgorithmFinalFixSLP Fix step bounds according to mislpFixStepBoundsCoef, ..., mislpFixStepBoundsStructNotCoef after MIP solution is found
0: Disable
1: Enable
0
mislpAlgorithmFinalRelaxSLP Relax step bounds according to mislpRelaxStepBoundsCoef, ..., mislpRelaxStepBoundsStructNotCoef after MIP solution is found
0: Disable
1: Enable
0
mislpAlgorithmInitialFixSLP Fix step bounds according to mislpFixStepBoundsCoef, ..., mislpFixStepBoundsStructNotCoef after initial node
0: Disable
1: Enable
0
mislpAlgorithmInitialRelaxSLP Relax step bounds according to mislpRelaxStepBoundsCoef, ..., mislpRelaxStepBoundsStructNotCoef after initial node
0: Disable
1: Enable
0
mislpAlgorithmInitialSLP Solve initial SLP to convergence
0: Disable
1: Enable
1
mislpAlgorithmNodeFixSLP Fix step bounds according to mislpFixStepBoundsCoef, ..., mislpFixStepBoundsStructNotCoef at each node
0: Disable
1: Enable
0
mislpAlgorithmNodeLimitSLP Limit iterations at each node to mislpIterLimit
0: Disable
1: Enable
0
mislpAlgorithmNodeRelaxSLP Relax step bounds according to mislpRelaxStepBoundsCoef, ..., mislpRelaxStepBoundsStructNotCoef at each node
0: Disable
1: Enable
1
mislpAlgorithmSLPThenMIP Use MIP on converged SLP solution and then SLP on the resulting MIP solution
0: Disable
1: Enable
0
mislpAlgorithmWithinSLP Use MIP at each SLP iteration instead of SLP at each node
0: Disable
1: Enable
0
mislpCutOffA Absolute objective function cutoff for MIP termination
If the objective function is worse by a defined amount than the best integer solution obtained so far, then the SLP will be terminated (and the node will be cut off). The node will be cut off at the current SLP iteration if the objective function values for the last mislpCutOffCount SLP iterations are all worse than the best integer solution obtained so far, and the difference is greater than both mislpCutOffA and OBJ * mislpCutOffR, where OBJ is the best integer solution obtained so far. The MIP cutoff tests are only applied after mislpCutOffLimit SLP iterations at the current node.
Range: [0.0, ∞]
1.0e-05
mislpCutOffCount Number of SLP iterations to check when considering a node for cutting off
See mislpCutOffA.
Range: {0, ..., ∞}
5
mislpCutOffLimit Number of SLP iterations at a node before the cutoff tests are applied
See mislpCutOffA.
Range: {0, ..., ∞}
10
mislpCutOffR Relative objective function cutoff for MIP termination
See mislpCutOffA.
Range: [0.0, ∞]
1.0e-05
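The node cutoff test described under mislpCutOffA can be sketched as follows; the function name, defaults, and exact semantics are illustrative only:

```python
def node_cut_off(recent_objs, best_int_obj, cutoff_a=1e-5, cutoff_r=1e-5,
                 maximize=True):
    """Sketch of the MIP node cutoff test: cut off the node when each
    of the last mislpCutOffCount objective values is worse than the
    best integer solution by more than both tolerances."""
    # The difference must exceed both mislpCutOffA and OBJ * mislpCutOffR.
    threshold = max(cutoff_a, abs(best_int_obj) * cutoff_r)

    def worse_by(obj):
        return best_int_obj - obj if maximize else obj - best_int_obj

    return all(worse_by(o) > threshold for o in recent_objs)
```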
mislpCutStrategy Determines which cuts to apply in the MISLP search when the default SLP-in-MIP strategy is used
Cuts are derived from the linearizations and are local cuts in that they are valid in the linearization and not necessarily valid for the full problem. The values mirror those of cutStrategy.
Range: {-1, ..., 3}
0
mislpDefaultAlgorithm Default algorithm to be used during the global search in MISLP
The default algorithm used within SLP during the MISLP optimization can be set using mislpDefaultAlgorithm. It will not necessarily be the same as the one best suited to the initial SLP optimization.
Range: {1, ..., 5}
3
mislpErrorTolA Absolute penalty error cost tolerance for MIP cut-off
The penalty error cost test is applied at each node where there are active penalties in the solution. If mislpErrorTolA is nonzero and the absolute value of the penalty costs is greater than mislpErrorTolA, the node will be declared infeasible. If mislpErrorTolA is zero then no test is made and the node will not be declared infeasible on this criterion.
Range: [0.0, ∞]
0.0
mislpErrorTolR Relative penalty error cost tolerance for MIP cut-off
The penalty error cost test is applied at each node where there are active penalties in the solution. If mislpErrorTolR is nonzero and the absolute value of the penalty costs is greater than mislpErrorTolR * abs(Obj) where Obj is the value of the objective function, then the node will be declared infeasible. If mislpErrorTolR is zero then no test is made and the node will not be declared infeasible on this criterion.
Range: [0.0, ∞]
0.0
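The penalty error cost tests for mislpErrorTolA and mislpErrorTolR can be sketched together (illustrative names and semantics):

```python
def node_infeasible_by_penalty(penalty_cost, obj, err_tol_a=0.0,
                               err_tol_r=0.0):
    """Sketch of the penalty error cost tests described above; a zero
    tolerance disables the corresponding test."""
    # Absolute test (mislpErrorTolA).
    if err_tol_a > 0.0 and abs(penalty_cost) > err_tol_a:
        return True
    # Relative test (mislpErrorTolR), scaled by the objective value.
    if err_tol_r > 0.0 and abs(penalty_cost) > err_tol_r * abs(obj):
        return True
    return False
```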
mislpFixStepBoundsCoef Fix step bounds on SLP variables appearing in coefficients
0: Disable
1: Enable
0
mislpFixStepBoundsCoefOnly Fix step bounds on SLP variables appearing only in coefficients
0: Disable
1: Enable
0
mislpFixStepBoundsStructAll Fix step bounds on all structural SLP variables
0: Disable
1: Enable
0
mislpFixStepBoundsStructNotCoef Fix step bounds on structural SLP variables which are not in coefficients
0: Disable
1: Enable
0
mislpHeurStrategy MINLP heuristic strategy for the branch-and-bound search
This specifies the MINLP heuristic strategy. On some problems it is worth trying more comprehensive heuristic strategies by setting mislpHeurStrategy to 2 or 3.
-1: Automatic selection of heuristic strategy
0: No heuristics
1: Basic heuristic strategy
2: Enhanced heuristic strategy
3: Extensive heuristic strategy
4: Run all heuristics without effort limits
-1
mislpIterLimit Maximum number of SLP iterations at each node
If mislpAlgorithmNodeLimitSLP is enabled, then the number of iterations at each node will be limited to mislpIterLimit.
Range: {0, ..., ∞}
0
mislpLog Frequency with which MIP status is printed
By default (zero or negative value) the MIP status is printed after synchronization points. If mislpLog is set to a positive integer, then the current MIP status (node number, best value, best bound) is printed every mislpLog nodes.
Range: {0, ..., ∞}
0
mislpOCount Number of SLP iterations at each node over which to measure objective function variation
The objective function test for MIP termination is applied only when step bounding has been applied (or slpSBStart SLP iterations have taken place if step bounding is not being used). The node will be terminated at the current SLP iteration if the range of the objective function values over the last mislpOCount SLP iterations is within mislpOTolA or within OBJ * mislpOTolR where OBJ is the average value of the objective function over those iterations.
Range: {0, ..., ∞}
5
mislpOTolA Absolute objective function tolerance for MIP termination
See mislpOCount.
Range: [0.0, ∞]
1.0e-05
mislpOTolR Relative objective function tolerance for MIP termination
See mislpOCount.
Range: [0.0, ∞]
1.0e-05
mislpRelaxStepBoundsCoef Relax step bounds on SLP variables appearing in coefficients
0: Disable
1: Enable
1
mislpRelaxStepBoundsCoefOnly Relax step bounds on SLP variables appearing only in coefficients
0: Disable
1: Enable
1
mislpRelaxStepBoundsStructAll Relax step bounds on all structural SLP variables
0: Disable
1: Enable
1
mislpRelaxStepBoundsStructNotCoef Relax step bounds on structural SLP variables which are not in coefficients
0: Disable
1: Enable
1

# Helpful Hints

The comments below should help both novice and experienced GAMS users to better understand and make use of GAMS/XPRESS.

• Infeasible and unbounded models The fact that a model is infeasible/unbounded can be detected at two stages: during the presolve and during the simplex or barrier algorithm. In the first case we cannot recover a solution, nor is any information regarding the infeasible/unbounded constraint or variable provided (at least in a way that can be returned to GAMS). In such a situation, the GAMS link will automatically rerun the model using primal simplex with presolve turned off (this can be avoided by setting the rerun option to 0). It is possible (but very unlikely) that the simplex method will solve a model to optimality while the presolve claims the model is infeasible/unbounded (due to feasibility tolerances in the simplex and barrier algorithms).
• The barrier method does not make use of iterlim. Use bariterlim in an options file instead. The number of barrier iterations is echoed to the log and listing file. If the barrier iteration limit is reached during the barrier algorithm, XPRESS continues with a simplex algorithm, which will obey the iterlim setting.
• Semi-integer variables are not implemented in the link, nor are they supported by XPRESS; if present, they trigger an error message.
• SOS1 and SOS2 variables are required by XPRESS to have lower bounds of 0 and nonnegative upper bounds.

# Setting up a GAMS/XPRESS-Link license

To use the GAMS/XPRESS solver with a GAMS/XPRESS-Link license, you must set up the XPRESS portion of the licensing. To do so, copy your XPRESS license xpauth.xpr to the GAMS system directory. As of version 24.2, GAMS already comes with a file xpauth.xpr. You might consider copying this file to xpauth.xpr.bak or similar before overwriting it with your own XPRESS license file.