Throwing LinearOperatorException when storage types cannot be promoted #382

Merged
tmigot merged 8 commits into JuliaSmoothOptimizers:main from arnavk23:main
Mar 11, 2026
Conversation

@arnavk23
Contributor

@arnavk23 arnavk23 commented Oct 18, 2025

I fixed the "storage types cannot be promoted to a concrete type" error by making the code robust when promote_type(storage_type(op1), storage_type(op2)) yields a non-concrete type. The fix falls back to a concrete operand storage type if promotion yields an abstract type, or uses Vector{T} as a last resort.
This allows GPU-backed operators and mixed-storage scenarios to work without hard failures.

Connected to #329
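The fallback described above could be sketched roughly as follows. This is illustrative, not the actual patch: the helper name `_select_storage_type` appears later in the review, but this standalone signature (taking the two storage types and the element type directly) is an assumption.

```julia
# Sketch of the fallback logic described above (illustrative, not the merged
# code). Given the storage types of two operators and element type T, pick a
# concrete storage type for internal temporaries.
function _select_storage_type(S1::Type, S2::Type, T::Type)
  S = promote_type(S1, S2)
  isconcretetype(S) && return S      # promotion gave a usable concrete type
  isconcretetype(S1) && return S1    # fall back to a concrete operand type
  isconcretetype(S2) && return S2
  return Vector{T}                   # last resort (note: forces CPU storage)
end
```

The `Vector{T}` last resort is what later motivated removing this helper: for GPU-backed operators it silently falls back to CPU storage.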

@tmigot
Member

tmigot commented Oct 19, 2025

What does the example in the issue do with this PR?

@arnavk23
Contributor Author

arnavk23 commented Oct 19, 2025

@tmigot

julia> using CUDA, NLPModels, NLPModelsTest, LinearOperators

julia> V = CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}
CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}

julia> nlp = BROWNDEN(V)
BrowndenNLPModel - Model with 4 variables and 0 constraints

julia> nvar = nlp.meta.nvar
4

julia> xc = V(undef, nvar)
4-element CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}:
0.0
0.0
0.0
0.0

julia> Hs = V(undef, nvar)
4-element CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}:
0.0
0.0
0.0
0.0

julia> H = hess_op!(nlp, xc, Hs)
Linear operator
nrow: 4
ncol: 4
eltype: Float64
symmetric: true
hermitian: true

julia> cg_op_diag = V(undef, nvar)
4-element CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}:
0.0
0.0
0.0
0.0

julia> cg_op = opDiagonal(cg_op_diag)
Linear operator
nrow: 4
ncol: 4
eltype: Float64
symmetric: true
hermitian: false

julia> ZHZ = cg_op' * H * cg_op
Linear operator
nrow: 4
ncol: 4
eltype: Float64
symmetric: false
hermitian: false

julia> size(ZHZ)
(4, 4)

julia> typeof(ZHZ)
LinearOperator{Float64, Int64, var"#..."#..., var"#..."#..., var"#..."#..., CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}

julia> v = V(undef, nvar); v .= 1.0;

julia> result = ZHZ * v
4-element CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}:
0.0
0.0
0.0
0.0

…at.jl consistently (tests still pass functionally; only the known two tiny allocation assertions remain).
@codecov

codecov Bot commented Oct 24, 2025

Codecov Report

❌ Patch coverage is 61.53846% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 93.77%. Comparing base (32dbc5e) to head (3dfa5ff).
⚠️ Report is 48 commits behind head on main.

Files with missing lines Patch % Lines
src/abstract.jl 44.44% 5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #382      +/-   ##
==========================================
- Coverage   95.00%   93.77%   -1.23%     
==========================================
  Files          17       20       +3     
  Lines        1100     1156      +56     
==========================================
+ Hits         1045     1084      +39     
- Misses         55       72      +17     

☔ View full report in Codecov by Sentry.
Copilot AI review requested due to automatic review settings February 20, 2026 23:21
Contributor

Copilot AI left a comment


Pull request overview

Fixes failures in operator–operator multiplication when promote_type(storage_type(op1), storage_type(op2)) yields a non-concrete type (notably in GPU-backed scenarios like #329), by selecting an alternative concrete storage type for internal temporaries.

Changes:

  • Replaced the hard error on non-concrete storage type promotion in *(op1, op2) with a helper-based selection.
  • Added _select_storage_type(op1, op2, T) to fall back to a concrete operand storage type (or Vector{T} as a last resort).

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File Description
src/operations.jl Uses _select_storage_type to choose temporary storage for operator–operator * without throwing on non-concrete promotion.
src/abstract.jl Introduces _select_storage_type fallback logic for choosing a concrete storage type.
Comments suppressed due to low confidence (1)

src/operations.jl:146

  • S is used to allocate vtmp, which is written by mul!(vtmp, op2, v) inside prod_op!. When promote_type(storage_type(op1), storage_type(op2)) is non-concrete, _select_storage_type currently prefers storage_type(op1) if it’s concrete; this can still fail for mixed-storage cases (e.g., storage_type(op1) == Vector{T} but op2 wraps a CuArray matrix, whose mul! expects a CuArray output). Consider selecting a storage type that is compatible with the operator being written into (at least prefer storage_type(op2) for vtmp), or allocate separate temporaries for the op2-output vs op1-output paths instead of forcing a single S.
  S = _select_storage_type(op1, op2, T)
  #tmp vector for products
  vtmp = fill!(S(undef, m2), zero(T))
  utmp = fill!(S(undef, n1), zero(T))
  wtmp = fill!(S(undef, n1), zero(T))
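The reviewer's alternative, allocating each temporary from the operator that writes into it, could be sketched as below. The helper name `alloc_temporaries` and its signature are hypothetical; `S1` and `S2` stand in for `storage_type(op1)` and `storage_type(op2)`.

```julia
# Hypothetical sketch of the reviewer's suggestion: size each temporary from
# the storage type of the operator that writes into it, rather than forcing
# a single shared storage type S for all three.
function alloc_temporaries(S1::Type, S2::Type, T::Type, m2::Int, n1::Int)
  vtmp = fill!(S2(undef, m2), zero(T))  # written by mul!(vtmp, op2, v)
  utmp = fill!(S1(undef, n1), zero(T))  # written by op1 inside prod_op!
  wtmp = fill!(S1(undef, n1), zero(T))
  return vtmp, utmp, wtmp
end
```

This avoids picking a single `S` that may be incompatible with one of the operators, at the cost of mixed-storage intermediates when `S1 != S2`.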


Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@arnavk23
Contributor Author

arnavk23 commented Feb 22, 2026

@tmigot Pinging you on this as I recently rebased this and fixed copilot suggestion. Review whenever you are free. Thanks!

@tmigot
Member

tmigot commented Feb 26, 2026

Hi @arnavk23 ! To be honest, I am not a huge fan of this PR. I prefer to raise an error than doing some implicit casting. This needs to be thought very carefully.

@arnavk23
Contributor Author

> Hi @arnavk23 ! To be honest, I am not a huge fan of this PR. I prefer to raise an error than doing some implicit casting. This needs to be thought very carefully.

Thank you for stating that. Actually I hadn't thought about that. I was thinking of so many hard solutions for this, forgot the easiest one. Have made the change :)

Remove _select_storage_type function that was silently falling back to
alternative storage types when promotion failed. This implicit behavior
could cause severe performance degradation (e.g., GPU->CPU fallback).

Now throws a helpful LinearOperatorException when storage types cannot
be promoted to a concrete type, guiding users to ensure compatible
storage types across operators.

This aligns with Julia's philosophy of making performance characteristics
explicit and prevents mysterious performance issues.
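The merged behavior can be sketched as follows. This is a minimal stand-in, assuming `LinearOperatorException` wraps a message string (as in LinearOperators.jl) and with `promoted_storage_type` as a hypothetical helper name:

```julia
# Minimal sketch of the merged behavior: instead of silently falling back to
# an alternative storage type, throw a descriptive error when promotion
# yields a non-concrete type.
struct LinearOperatorException <: Exception
  msg::String
end

function promoted_storage_type(S1::Type, S2::Type)
  S = promote_type(S1, S2)
  isconcretetype(S) || throw(LinearOperatorException(
    "storage types $S1 and $S2 cannot be promoted to a concrete type; " *
    "convert both operators to a common storage type before combining them"))
  return S
end
```

Matching storage types (e.g. two `Vector{Float64}` operands) promote to a concrete type and pass through; mismatched ones raise the exception with a hint, rather than degrading to CPU storage behind the user's back.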
@arnavk23
Contributor Author

arnavk23 commented Mar 3, 2026

@tmigot Tangi if you could also add this to your review list, that'll be great. Thank you for your reviews.

@tmigot
Member

tmigot commented Mar 7, 2026

If I understand correctly, this PR now just gives more useful information? If so, could you update the PR title?

@arnavk23 arnavk23 changed the title Storage type promotion Throwing LinearOperatorException when storage types cannot be promoted Mar 7, 2026
@arnavk23
Contributor Author

@tmigot Tangi changed the title, I think this is good-to-go now.

@tmigot tmigot merged commit 571ca22 into JuliaSmoothOptimizers:main Mar 11, 2026
2 checks passed