
Add diffengine as DCP canonicalization backend with parameter support #185

Draft
Transurgeon wants to merge 46 commits into master from feat/diffengine-dcp-backend

Conversation

@Transurgeon
Member

Introduces the diffengine (sparsediffpy) as an alternative canonicalization backend for the DCP→Cone path, selected via canon_backend='DIFFENGINE'. Unlike the standard tensor pipeline, this backend evaluates the C expression DAG directly at x=0 to extract A, b, q, d, P matrices. Parameters are supported via update_params + re-evaluation on subsequent solves, enabling efficient DPP re-solving.
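The extract-by-evaluation idea can be sketched in plain NumPy for an affine map (the function and names below are illustrative, not the backend's API; the real backend uses sparse forward evaluation of the C expression DAG rather than dense probing):

```python
import numpy as np

def extract_affine(f, n):
    """Recover (A, b) from a black-box affine map f(x) = A @ x + b.

    Evaluating at x = 0 yields b; evaluating at each unit vector e_i
    and subtracting b yields column i of A.
    """
    b = f(np.zeros(n))
    A = np.column_stack([f(np.eye(n)[:, i]) - b for i in range(n)])
    return A, b

# Toy affine map with known coefficients.
A_true = np.array([[1.0, 2.0], [3.0, 4.0]])
b_true = np.array([5.0, 6.0])
A, b = extract_affine(lambda x: A_true @ x + b_true, 2)
```

The quadratic part P is extracted analogously from second-order information; this sketch covers only the affine pieces.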

Description

Please include a short summary of the change.
Issue link (if applicable):

Type of change

  • New feature (backwards compatible)
  • New feature (breaking API changes)
  • Bug fix
  • Other (Documentation, CI, ...)

Contribution checklist

  • Add our license to new files.
  • Check that your code adheres to our coding style.
  • Write unittests.
  • Run the unittests and check that they’re passing.
  • Run the benchmarks to make sure your change doesn’t introduce a regression.

Transurgeon and others added 12 commits March 27, 2026 14:07
Enable cp.Parameter objects to be treated as updatable nodes in the C
expression DAG instead of being baked in as constants. On re-solve with
new parameter values, the cached DAG and sparsity structures are reused
via problem._nlp_cache, analogous to DPP for conic programs.

Requires SparseDiffPy PR #10 for make_parameter, problem_register_params,
and problem_update_params bindings. Tests skip gracefully until installed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove diag_mat, upper_tri, vstack, and kron converters and their tests.

Simplify solve_nlp parameter caching to mirror the best_of pattern:
always re-run the full reduction chain, cache only the solver_cache
(Oracles) on problem._nlp_cache between solve() calls.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace build_theta(params, inverse_data) with simple concatenation.
Add build_param_id_map() helper. Replace inverse_data parameter with
param_id_map dict throughout converters, C_problem, and Oracles.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
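The "simple concatenation" replacing build_theta can be sketched as follows (the function name and dict shapes are assumptions for illustration; a later commit notes the real converters flatten in Fortran order, which this sketch follows):

```python
import numpy as np

def build_theta(param_values, param_id_map):
    """Concatenate flattened parameter values into one theta vector.

    param_values: dict mapping parameter id -> ndarray value.
    param_id_map: dict mapping parameter id -> column offset in theta.
    Iterating ids in offset order places each flattened value at its slot.
    """
    pieces = [np.asarray(param_values[pid]).flatten(order="F")
              for pid in sorted(param_id_map, key=param_id_map.get)]
    return np.concatenate(pieces)

theta = build_theta(
    {7: np.array([[1.0, 3.0], [2.0, 4.0]]), 9: np.array([5.0])},
    {7: 0, 9: 4},
)
```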
Will revisit testing after SparseDiffPy parameter bindings are available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace scattered var_dict/n_vars/param_id_map threading with a single
ConvertContext class built from InverseData. All atom converters now
have uniform (expr, children, ctx) signature.

Remove _is_parametric, _PARAMETRIC_CONVERTERS, build_param_id_map,
build_variable_dict, convert_expressions. C_problem creates its own
InverseData internally.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move matmul/multiply parameter handling to ConvertContext methods.
All other converters keep their original (expr, children) signature
untouched. convert_expr dispatches matmul/multiply specially via ctx.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Keep all atom converters with their original (expr, children) signatures.
Only matmul and multiply are handled via ConvertContext methods.
ATOM_CONVERTERS dict entries preserved exactly as original.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move update_params call into ipopt solve_via_data (where oracles are
reused). Remove _get_solver_cache helper and _nlp_cache dict. Store
solver_cache in the existing problem._solver_cache['NLP'] — zero
changes to problem.py.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move theta construction into Oracles.update_params(problem). Remove
build_theta from converters. Add update_params call to all four NLP
solvers (IPOPT, Knitro, Uno, COPT) on oracle reuse path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add test_nlp_parameters.py with 3 tests (scalar, vector, matrix)
- Initialize parameter values after registration in C_problem
- Add oracles.update_params() call to all NLP solvers on cache reuse
- Use problem._solver_cache['NLP'] for oracle caching
- Restore original docstrings in c_problem.py

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The parameter-bindings branch of SparseDiffPy always requires
param_or_none as the first arg to matmul functions. Pass it
unconditionally (None for non-parametric).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Transurgeon Transurgeon force-pushed the feat/diffengine-dcp-backend branch from ab79296 to 000315b on March 28, 2026 04:01
@Transurgeon Transurgeon marked this pull request as draft March 28, 2026 04:01
@github-actions

github-actions Bot commented Mar 28, 2026

Benchmarks that have stayed the same:

   before           after         ratio
 [bb47d394]       [3bce24db]
      362±0ms          373±0ms     1.03  slow_pruning_1668_benchmark.SlowPruningBenchmark.time_compile_problem
      827±0ms          850±0ms     1.03  gini_portfolio.Cajas.time_compile_problem
      513±0ms          526±0ms     1.03  semidefinite_programming.SemidefiniteProgramming.time_compile_problem
      1.58±0s          1.62±0s     1.02  finance.FactorCovarianceModel.time_compile_problem
      296±0ms          302±0ms     1.02  gini_portfolio.Murray.time_compile_problem
      17.6±0s          17.9±0s     1.02  finance.CVaRBenchmark.time_compile_problem
      2.00±0s          2.02±0s     1.01  quantum_hilbert_matrix.QuantumHilbertMatrix.time_compile_problem
      3.02±0s          3.03±0s     1.01  simple_QP_benchmarks.UnconstrainedQP.time_compile_problem
      29.8±0s          29.8±0s     1.00  sdp_segfault_1132_benchmark.SDPSegfault1132Benchmark.time_compile_problem
      1.10±0s          1.09±0s     1.00  simple_LP_benchmarks.SimpleScalarParametrizedLPBenchmark.time_compile_problem
      14.0±0s          14.0±0s     1.00  simple_LP_benchmarks.SimpleLPBenchmark.time_compile_problem
      189±0ms          189±0ms     1.00  high_dim_convex_plasticity.ConvexPlasticity.time_compile_problem
      5.60±0s          5.58±0s     1.00  huber_regression.HuberRegression.time_compile_problem
      5.19±0s          5.17±0s     1.00  optimal_advertising.OptimalAdvertising.time_compile_problem
      1.34±0s          1.33±0s     1.00  matrix_stuffing.ParamConeMatrixStuffing.time_compile_problem
      6.64±0s          6.61±0s     1.00  svm_l1_regularization.SVMWithL1Regularization.time_compile_problem
      274±0ms          273±0ms     1.00  matrix_stuffing.ParamSmallMatrixStuffing.time_compile_problem
      1.12±0s          1.12±0s     1.00  simple_QP_benchmarks.LeastSquares.time_compile_problem
      778±0ms          773±0ms     0.99  matrix_stuffing.ConeMatrixStuffingBench.time_compile_problem
      1.79±0s          1.77±0s     0.99  tv_inpainting.TvInpainting.time_compile_problem
     38.8±0ms         38.5±0ms     0.99  matrix_stuffing.SmallMatrixStuffing.time_compile_problem
      277±0ms          274±0ms     0.99  simple_QP_benchmarks.SimpleQPBenchmark.time_compile_problem
     14.7±0ms         14.5±0ms     0.99  simple_QP_benchmarks.ParametrizedQPBenchmark.time_compile_problem
     15.0±0ms         14.6±0ms     0.97  simple_LP_benchmarks.SimpleFullyParametrizedLPBenchmark.time_compile_problem
      403±0ms          392±0ms     0.97  gini_portfolio.Yitzhaki.time_compile_problem

Introduces the diffengine (sparsediffpy) as an alternative canonicalization
backend for the DCP→Cone path, selected via canon_backend='DIFFENGINE'.
Unlike the standard tensor pipeline, this backend evaluates the C expression
DAG directly at x=0 to extract A, b, q, d, P matrices. Parameters are
supported via update_params + re-evaluation on subsequent solves, enabling
efficient DPP re-solving.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Transurgeon Transurgeon force-pushed the feat/diffengine-dcp-backend branch from 000315b to 22851ec on March 29, 2026 04:32
Transurgeon and others added 13 commits March 29, 2026 14:53
Keep master's convert_expr(expr, var_dict, n_vars) pattern, adding
param_dict as an optional arg. Matmul/multiply stay as standalone
functions rather than class methods.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- helpers.py: shared utilities, matmul helpers, var/param dict builders
- registry.py: atom converter functions and ATOM_CONVERTERS dict
- converters.py: convert_expr entry point and param-aware matmul/multiply

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Syncs the diffengine DCP backend with the parameter-support branch's
refactored converter architecture (ConvertContext -> flat functions,
converters.py split into helpers.py + registry.py + converters.py).
Updates diffengine_cone_program.py to use build_var_dict/build_param_dict.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Transurgeon and others added 10 commits April 7, 2026 20:19
- Move DIFFENGINE_CANON_BACKEND to settings.py with other backend constants
- Extract build_capsule() in converters.py to deduplicate capsule-building
  logic between C_problem (NLP path) and DiffengineConeProgram (DCP path)
- Use inverse_data.param_id_map instead of manually rebuilding param_id_to_col
- Fix flatten order='C' -> 'F' in apply_parameters for consistency
- Replace complex _symmetrize_hessian caching with simple P + P.T - diag()

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
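The simplified Hessian symmetrization mentioned above amounts to mirroring a lower-triangular matrix while counting the diagonal only once; a minimal dense NumPy sketch (the backend operates on sparse data, but the algebra is the same):

```python
import numpy as np

def symmetrize(P_lower):
    """Expand a lower-triangular Hessian into its full symmetric form.

    P + P.T double-counts the diagonal, so subtract it back once.
    """
    return P_lower + P_lower.T - np.diag(np.diag(P_lower))

P_lower = np.array([[2.0, 0.0], [1.0, 3.0]])
P_full = symmetrize(P_lower)
```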
…import

- Remove unnecessary NonPos import and branch from lower_and_order_constraints
  (added by a previous session, not in master, deprecated constraint type)
- Rename DIFFENGINE_CANON_BACKEND -> DIFFENGINE_BACKEND
- Replace lazy _get_diffengine() with direct import in diffengine_cone_program

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace separate variables, var_id_to_col, param_id_to_col, and n_vars
constructor params with a single inverse_data object. Parameter metadata
(param_id_to_col, param_id_to_size) and variable mappings now read
directly from inverse_data instead of being rebuilt.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Move build_capsule and extract_lower/upper_bounds imports to top of file
- Remove de = _diffengine aliases, use _diffengine directly
- Remove double np.asarray(np.array(...)) wrapping
- Remove unused self._n_vars, read from inverse_data.x_length
- Replace unused param_dict with _

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove self._x0 caching, create zeros inline each call
- Move Hessian computation into quad_obj return branch
- Remove intermediate P = None / if P is not None pattern

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Keep only externally-accessed properties (variables, var_id_to_col,
id_to_var, param_id_to_col). Inline constr_size and total_param_size
checks directly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rename _A, _b, _q, _d, _inverse_data to A, b, q, d, inverse_data
for consistency with x and P which were already public.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Base automatically changed from feat/dnlp-parameter-support to master April 11, 2026 19:41
@Transurgeon Transurgeon marked this pull request as ready for review April 11, 2026 20:11
Transurgeon and others added 7 commits April 11, 2026 17:38
# Conflicts:
#	cvxpy/reductions/dcp2cone/cone_matrix_stuffing.py
#	cvxpy/reductions/solvers/nlp_solvers/diff_engine/__init__.py
#	cvxpy/reductions/solvers/nlp_solvers/diff_engine/c_problem.py
#	cvxpy/reductions/solvers/nlp_solvers/diff_engine/converters.py
#	cvxpy/reductions/solvers/nlp_solvers/diff_engine/helpers.py
#	cvxpy/reductions/solvers/nlp_solvers/diff_engine/registry.py
#	cvxpy/reductions/solvers/nlp_solvers/nlp_solver.py
#	cvxpy/tests/nlp_tests/test_nlp_parameters.py
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Transurgeon and others added 3 commits April 22, 2026 11:25
#190)

* fix diff engine converter bugs: 1D matmul dims, reshape, transpose, quad form

- helpers.py: fix 1D dimension normalization in dense matmul helpers.
  Right-matmul treated 1D A as (1,n) instead of (n,1), causing segfaults.
- converters.py: reshape 1D child to column vector in left-matmul for
  dot products; add constant-atom fallback for unfolded expressions;
  move QuadForm/SymbolicQuadForm dispatch out of ATOM_CONVERTERS (needs
  n_vars); tolerate 1D dimension mismatches via reshape instead of error.
- registry.py: support C-order reshape via transpose decomposition;
  fix 1D transpose to be a no-op; improve scalar quad form P handling;
  raise NotImplementedError for vector SymbolicQuadForm (TODO: native
  block quadform in SparseDiffPy).
- perspective_canon.py: handle DiffengineConeProgram which stores
  q, d, A, b as separate arrays (vs ParamConeProg sparse tensors).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* adds converters changes for right matmul support

* fixes crashes for left matmul

* add changes to registry

* run linter

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
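The C-order-reshape-via-transpose-decomposition fix in registry.py rests on a standard identity: reshaping a vector in C order to (m, n) equals reshaping it in Fortran order to (n, m) and transposing. A quick NumPy check of the identity (how registry.py composes the engine's native nodes is not shown here):

```python
import numpy as np

# C-order reshape fills rows first; F-order fills columns first.
# Reshaping to the transposed shape in F order and then transposing
# reproduces the C-order result, which lets a backend that only
# supports Fortran-order reshape express a C-order reshape.
x = np.arange(6.0)
c_reshape = x.reshape(2, 3, order="C")
via_transpose = x.reshape(3, 2, order="F").T
```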
# Conflicts:
#	cvxpy/reductions/solvers/nlp_solvers/diff_engine/converters.py
#	cvxpy/reductions/solvers/nlp_solvers/diff_engine/registry.py
The [tool.uv.sources] block pinned sparsediffpy to ../SparseDiffPy for
local dev, but it also applied in CI where that path doesn't exist,
causing `uv pip install -e .` to fail with "Distribution not found".
The required bindings have shipped to PyPI under the existing
>= 0.2.2, < 0.3.0 range, so the override is no longer needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
