
Fix GDPopt LOA maximization objective sense #3938

Open
bernalde wants to merge 2 commits into Pyomo:main from SECQUOIA:fix/issue-3937-loa-maximize

Conversation

@bernalde
Contributor

Summary

  • Preserve the original objective sense when GDPopt LOA creates the augmented OA master objective.
  • Add regression coverage for a tiny GDP where minimize(-x) and maximize(x) should select the same disjunct.
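The invariant the regression checks can be sketched in plain Python (an illustrative stand-in, not the Pyomo test itself; the candidate names and values are made up): over any finite set of disjunct candidates, minimizing the negated objective must select the same candidate as maximizing the original.

```python
# Illustrative sketch of the regression's invariant, not the Pyomo test:
# candidate names and objective values here are hypothetical.
candidates = {"disjunct_a": 9.0, "disjunct_b": 4.0}

# minimize(-x) and maximize(x) must agree on the selected disjunct
pick_min_neg = min(candidates, key=lambda d: -candidates[d])
pick_max = max(candidates, key=lambda d: candidates[d])

assert pick_min_neg == pick_max == "disjunct_a"
```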

Tests run

  • python -m pytest -q pyomo/contrib/gdpopt/tests/test_gdpopt.py::TestGDPoptUnit::test_loa_augmented_penalty_objective_preserves_objective_sense pyomo/contrib/gdpopt/tests/test_gdpopt.py::TestGDPoptUnit::test_LOA_maximize_matches_minimize_negated_objective - 2 passed in 0.86s
  • python -m pytest -q pyomo/contrib/gdpopt/tests/test_gdpopt.py - 62 passed, 19 skipped, 3 deselected in 34.42s
  • python -m black --check --diff pyomo/contrib/gdpopt/loa.py pyomo/contrib/gdpopt/tests/test_gdpopt.py - 2 files would be left unchanged

Notes

  • The full repository pytest suite was not run locally because the targeted GDPopt module covers the changed behavior and the full suite is substantially broader than this focused change.

Closes #3937

@bernalde
Contributor Author

Verified the fix against the GDPLib methanol instance that motivated #3937.

Setup:

  • Pyomo branch: fix/issue-3937-loa-maximize
  • GDPLib source: SECQUOIA/gdplib main, imported with PYTHONPATH=/tmp/gdplib-methanol-3937
  • GDPopt call: algorithm="LOA", mip_solver="gams", nlp_solver="gams"

Results:

  • minimize(-profit): optimal, profit 1793.4292381783353, 15 iterations
  • maximize(profit): optimal, profit 1793.4292381783353, 15 iterations

Both formulations selected the same active disjuncts:

cheap_reactor
expensive_feed_disjunct
single_stage_recycle_compressor_disjunct
two_stage_feed_compressor_disjunct

This directly addresses the original failure mode: the maximization form no longer terminates at the negative-profit local solution and now matches the minimized negative-profit formulation for this benchmark.

Contributor Author

@bernalde bernalde left a comment


I would approve this review, but GitHub does not allow the authenticated author account to approve its own pull request.

No blocking issues found.

Reviewed the LOA objective construction, the objective-sense-aware bound update path, the new focused unit/regression tests, and the repository test conventions. The change is narrowly scoped and preserves the existing penalty expression behavior while ensuring the OA master objective has the same sense as the original objective.

Tests run:

  • python -m pytest -q pyomo/contrib/gdpopt/tests/test_gdpopt.py::TestGDPoptUnit::test_loa_augmented_penalty_objective_preserves_objective_sense pyomo/contrib/gdpopt/tests/test_gdpopt.py::TestGDPoptUnit::test_LOA_maximize_matches_minimize_negated_objective - 2 passed
  • python -m pytest -q pyomo/contrib/gdpopt/tests/test_gdpopt.py - 62 passed, 19 skipped, 3 deselected
  • python -m pytest -q pyomo/contrib/gdpopt/tests - 76 passed, 35 skipped, 5 deselected
  • python -m black --check --diff pyomo/contrib/gdpopt/loa.py pyomo/contrib/gdpopt/tests/test_gdpopt.py - passed, files unchanged
  • GDPLib methanol reproducer with algorithm="LOA", mip_solver="gams", nlp_solver="gams" - both minimize(-profit) and maximize(profit) returned optimal profit 1793.4292381783353 in 15 iterations with the same active disjuncts

Merge-ready from this review.

Contributor

@emma58 emma58 left a comment


This is kind of an embarrassing bug--thanks for catching it! A couple comments below, though neither is major.

    discrete_problem_util_block.oa_obj = Objective(
        expr=0, sense=discrete_objective.sense
    )

Contributor


No need to set an expr here since it's a placeholder.

Contributor Author


Addressed in 3262583. The OA placeholder is now expressionless, and its sense is assigned explicitly after construction so the later augmented objective keeps the original sense.
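The pattern described here can be illustrated with a minimal stand-in (the class below is hypothetical, not Pyomo's Objective component): construct the placeholder without an expression, then copy the original objective's sense onto it before the augmented penalty expression is attached.

```python
# Hypothetical stand-in for the described pattern; not Pyomo's Objective class.
MINIMIZE, MAXIMIZE = 1, -1  # mirrors Pyomo's sense constants

class StubObjective:
    def __init__(self, sense=MINIMIZE):
        self.expr = None   # expressionless placeholder
        self.sense = sense

original = StubObjective(sense=MAXIMIZE)   # e.g. maximize(profit)

oa_obj = StubObjective()                   # placeholder, no expr yet
oa_obj.sense = original.sense              # preserve the original sense
# ... later, the augmented penalty expression is assigned to oa_obj.expr

assert oa_obj.sense == MAXIMIZE
```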

Comment on lines +333 to +334
solver = SolverFactory('gdpopt.loa')
original_obj = solver._setup_augmented_penalty_objective(m.GDPopt_utils)
Contributor


I'm not a big fan of using private methods in tests since it makes for a lot of editing if/when they change, but there may not be a great way around this one.

Contributor Author


Not changed. I kept this focused unit test because it directly isolates the objective-sense setup, while the adjacent public LOA solve regression still covers the behavior through solve(). I also reran the public path manually with Gurobi after the commit.

@bernalde
Contributor Author

Addressed the current PR review comments.

Commits pushed:

  • 326258353c04f81ad49d31add5b954069fb13901 Remove dummy LOA objective expression

Main changes made:

  • Replaced the dummy expr=0 OA objective placeholder with an expressionless Objective().
  • Set the placeholder objective sense explicitly after construction so the later augmented objective still preserves the original objective sense.

Tests run:

  • conda run -n pyomo-local-test python -m pytest -q pyomo/contrib/gdpopt/tests/test_gdpopt.py::TestGDPoptUnit::test_loa_augmented_penalty_objective_preserves_objective_sense pyomo/contrib/gdpopt/tests/test_gdpopt.py::TestGDPoptUnit::test_LOA_maximize_matches_minimize_negated_objective -> 1 passed, 1 skipped. The solve regression is skipped by its test guard because the configured test MIP solver is unavailable in this environment.
  • conda run -n pyomo-local-test python -m pytest -q pyomo/contrib/gdpopt/tests/test_gdpopt.py -> 7 passed, 74 skipped, 3 deselected.
  • conda run -n pyomo-local-test python -m black --check --diff pyomo/contrib/gdpopt/loa.py pyomo/contrib/gdpopt/tests/test_gdpopt.py -> 2 files would be left unchanged.
  • Manual public LOA regression with mip_solver="gurobi", nlp_solver="gurobi" for both minimize(-x) and maximize(x) -> both optimal with x = 9.

Comments intentionally not addressed:

  • The private-method unit test was left in place. It is a focused unit regression for the objective-sense construction, and the adjacent public LOA solve regression still covers the behavior through solve(). The public path was also checked manually with Gurobi because the configured test MIP solver is unavailable in this env.

Remaining risks or follow-up items:

  • No known remaining PR-review items. Local test coverage has many expected skips in this environment due to missing optional solvers.



Development

Successfully merging this pull request may close these issues.

GDPopt LOA appears to mishandle maximization objective sense
