Unconstrained Constraints

A key aspect of Amazolve solving, and of QUBO models in general, is unconstrained constraints (i.e., the "U" in QUBO). In linear programming (LP) formulations, constraints are generally hard. Take, for example, a constraint such as "5 employees with skill set A must be on staff at all times". An LP formulation would impose a "hard" requirement that sums the number of employees with skill set A and rejects any candidate solution that falls short. A hard, all-or-nothing sort of constraint.

An unconstrained model would instead "penalize" the solution's score if the constraint is not met, but allow the solution to continue being investigated by the solver. So, given the same example, an unconstrained solver would attach a penalty factor, say 1000, to each employee under the required amount. If a potential solution had only 3 employees on a given day with the required skill set (where 5 are required), then 2000 (2 missing employees × 1000) would be added to the score. Assuming we are minimizing the solution score, a solution with 4 employees would score better than one with 3 (1000 vs. 2000) and therefore "push" the solver in the direction of a better solution.
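The penalty arithmetic above can be sketched in a few lines of Python. This is purely illustrative: the function name and constants are hypothetical, not part of the Amazolve API.

```python
# Hypothetical sketch of unconstrained penalty scoring.
# PENALTY and REQUIRED mirror the staffing example in the text;
# none of these names come from Amazolve itself.
PENALTY = 1000   # penalty factor per missing employee
REQUIRED = 5     # employees with skill set A required on staff

def staffing_penalty(emp_count: int) -> int:
    """Return the penalty contribution for a candidate solution."""
    shortfall = max(0, REQUIRED - emp_count)
    return PENALTY * shortfall

# A solution with 4 employees scores better (lower) than one with 3:
assert staffing_penalty(3) == 2000
assert staffing_penalty(4) == 1000
assert staffing_penalty(5) == 0
```

Because every shortfall only raises the score rather than rejecting the solution outright, the solver can move smoothly from a 3-employee candidate toward a 5-employee one.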

This is how all constraints, and for that matter all problems, are defined in Amazolve: potential solutions are scored using penalty factors and objective functions, and the resulting scores are either minimized or maximized.

Unconstraints In Amazolve

There are two key aspects to understand in the Amazolve problem definition (PD) that define the scoring and penalty aspect of unconstrained optimization:

  • Expression functions EQ() and IQ(), or "equality" and "inequality" respectively. As their names imply, these functions define equality and inequality relationships among problem variables. But they do more than that - they also return how unequal the variables are. So, for the aforementioned employee case, a constraint expression with EQ(emp_count, 5) would return 2 if there were only 3 employees on staff.

    Similarly, if the constraint description stated that between 3 and 5 employees must be on staff with a given skill set, the constraint expression may look like IQ(3,emp_count,5), corresponding to the inequality: \( 3 \le emp\_count \le 5\). If there were only 1 employee on staff, this would return 2, and if there were 7, it would also return 2.

    The IQ and EQ expression functions also serve to mark the equality and inequality relationships used to formulate linear programming solutions. So while PD expressions also support the = and != operators, for example, these do not behave the same way as the equality and inequality expression functions.

  • penaltyVar: Within the PD, any constraint can be assigned a penaltyVar, which refers to a constant defined within the PD's constants section. This constant is multiplied by the constraint's sum result. For the IQ and EQ examples mentioned above, it would be multiplied by the returned value of 2, for example.
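The behavior described for EQ() and IQ() can be modeled in Python as follows. This is a hypothetical sketch of their return-value semantics based on the descriptions above, not Amazolve's actual implementation.

```python
# Hypothetical models of Amazolve's EQ() and IQ() expression functions.
# They return the *magnitude* of the violation, 0 when satisfied.
def EQ(a, b):
    """Return how unequal the two values are (0 when equal)."""
    return abs(a - b)

def IQ(lo, x, hi):
    """Return the distance by which x falls outside [lo, hi] (0 inside)."""
    if x < lo:
        return lo - x
    if x > hi:
        return x - hi
    return 0

assert EQ(3, 5) == 2       # only 3 employees where 5 are required
assert IQ(3, 1, 5) == 2    # 1 employee: 2 below the lower bound of 3
assert IQ(3, 7, 5) == 2    # 7 employees: 2 above the upper bound of 5
assert IQ(3, 4, 5) == 0    # within bounds: no penalty
```

Multiplying such a return value by a penaltyVar constant yields the score contribution of the constraint.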

Next is a complete example of a constraint with an inequality and a hard penalty. This is a complex constraint, so not every detail needs to be understood at this point; just note a few key points:

  • The penaltyVar for the constraint is hardPenalty which is defined in the constants block below.
  • The exprMain (main expression) of the second sum applies the inequality function. Note that the left side of the inequality is the ~ symbol, which in the Amazolve PD refers to infinity. This constraint comes from the nurse scheduling sample and implements the requirement that a given nurse can only be assigned a maximum number of shifts of a given type, described in an external array max_shifts. The IQ in this case declares that the total number of shifts must be less than or equal to the given max. If max_shifts is exceeded, IQ returns the amount over, which is then multiplied by the hardPenalty value of 1000.
{
  "constraint": {
    "CID": "C3",
    "comment": "Constraint #3 - Max Shifts",
    "penaltyVar": "hardPenalty",
    "sumIter": "iterDim",
    "iterDim": "R",
    "iterVars": [ "r" ],
    "sums": [
      {
        "sumIter": "iterDim",
        "iterDim": "SZ",
        "exprMain": "IQ(~, sum_shifts, max_shifts(r, s))",
        "iterVars": [ "s" ],
        "sums": [
          {
            "sumIter": "iterDim",
            "iterDim": "T",
            "exprMain": "A(r, t) = s",
            "iterVars": [ "t" ],
            "resultVar": "sum_shifts"
          }
        ]
      }
    ]
  }
},
{
  "constants": [
      {
        "name": "hardPenalty",
        "def": 1000
      }
    ]
}
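The nested sums in constraint C3 can be traced with a small Python sketch. All names here (A, max_shifts, score_c3, and so on) are hypothetical stand-ins mirroring the PD structure above; this is not the Amazolve runtime.

```python
# Hypothetical walk-through of constraint C3's nested sums.
# A is the assignment matrix: A[r][t] gives the shift type assigned to
# nurse r in period t. max_shifts[r][s] caps shifts of type s for nurse r.
HARD_PENALTY = 1000  # mirrors the hardPenalty constant in the PD

def iq_upper(x, hi):
    """IQ(~, x, hi): only the upper bound applies, since ~ is infinity."""
    return max(0, x - hi)

def score_c3(A, max_shifts, shift_types):
    total = 0
    for r, row in enumerate(A):            # outer sum over nurses (R)
        for s in shift_types:              # middle sum over shift types (SZ)
            # inner sum over periods (T): A(r, t) = s counts matches
            sum_shifts = sum(1 for t in row if t == s)
            total += iq_upper(sum_shifts, max_shifts[r][s])
    return HARD_PENALTY * total

# One nurse works shift type 0 three times, but at most 2 are allowed,
# so the overage of 1 is multiplied by the hard penalty:
A = [[0, 0, 0, 1]]
max_shifts = [{0: 2, 1: 2}]
assert score_c3(A, max_shifts, [0, 1]) == 1000
```

The innermost sum stores its count in sum_shifts (the PD's resultVar), which the enclosing sum's exprMain then feeds to IQ before the penalty multiplication.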