
Surface Grid Refinement #594

Merged
fhagemann merged 24 commits into JuliaPhysics:main from claudiaalvgar:RefineSurface_Grid
Apr 21, 2026
Conversation

@claudiaalvgar
Collaborator

@claudiaalvgar claudiaalvgar commented Feb 17, 2026

This PR introduces surface-only grid refinement for electric potential simulations and fixes boundary-related indexing issues that could lead to segmentation faults at high refinement resolutions, e.g. for 1e-5 m grid spacing. It also fixes an existing issue when calculating the electric potential with custom grids at small grid spacing: for a small custom spacing (e.g. 2e-5 m) the detector appeared to be undepleted, while for a larger spacing (1e-4 m) the behaviour was correct. This issue is not related to the new RCC layer model, since it was also spotted in simulations not including the RCC layer model. See the issue below:
---> With RCC layer custom initial grid:

  • 0.02 mm grid spacing: wrong behaviour (code attached in case this issue has to be reproduced):
T = Float64
sim = Simulation{T}(SSD_examples[:IVCIlayer])
calculate_electric_potential!(sim, max_n_iterations = 10, grid = Grid(sim), verbose = true, depletion_handling = true)
g = sim.electric_potential.grid
ax1, ax2, ax3 = g.axes
tick_dis = 0.02 * mm
user_additional_ticks_ax1 = sort(vcat(ax1.interval.left:tick_dis:ax1.interval.right))
user_additional_ticks_ax3 = sort(vcat(ax3.interval.left:tick_dis:ax3.interval.right))[2:end] # only support even number of ticks in z-direction
user_ax1 = typeof(ax1)(ax1.interval, user_additional_ticks_ax1)
user_ax3 = typeof(ax3)(ax3.interval, user_additional_ticks_ax3)
user_g = typeof(g)((user_ax1, ax2, user_ax3))
calculate_electric_potential!(sim, refinement_limits = [0.2, 0.1, 0.05, 0.02], use_nthreads = 8, grid = user_g, depletion_handling = true)
(image: InvertedCoax_PR3_tick0.02mm_refinements)
  • 0.1 mm grid spacing custom initial grid: correct behaviour (image: InvertedCoax_PR3_tick0.1mm_refinements)

---> Without RCC layer but using custom grid: same issue was observed:

  • 0.02 mm custom grid spacing: wrong behaviour
(image: InvertedCoax_noinactivelayer_PR3_tick0.02mm_refinements)
  • 0.1 mm custom grid spacing: correct behaviour (image: InvertedCoax_noinactivelayer_PR3_tick0.1mm_refinements)

The new implementation checks whether a given slice of point types contains any surface (inactive layer) points and builds a new refined grid across all three axes. Surface refinement is automatically triggered during the electric potential calculation when:
  • A surface impurity model is present

  • At least 3 refinement steps are used in the electric potential calculation

  • Final refinement limit ≤ 0.05

A YAML entry was added to configure spacing_surface_refinement, which controls the largest grid spacing allowed in the surface region along the three axes. The new method identifies the surface regions and refines only there, as can be seen, for example, for a 0.1 mm spacing:
With spacing_surface_refinement: [1e-4, 1e-4, 1e-4] in the YAML file, running the default calculate_electric_potential!(sim, use_nthreads = 8, grid = g, verbose = true, depletion_handling = true) will now give the correct behaviour for the surface layer, or even calculate_electric_potential!(sim, refinement_limits = [0.2, 0.1, 0.05, 0.02], use_nthreads = 8, grid = user_g, depletion_handling = true) if we want to further refine the bulk:
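For reference, a sketch of such a YAML entry; only the key name spacing_surface_refinement and the value format [1e-4, 1e-4, 1e-4] are from this PR, while its placement under a grid section is an assumption here:

```yaml
# Hypothetical placement: only the key name and value format are from this PR.
grid:
  spacing_surface_refinement: [1e-4, 1e-4, 1e-4]  # largest allowed spacing (m) in the surface region, per axis
```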

(images: DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_3rdref_grids, DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_3rdref_griddensity, DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_3rdref_epot)

The weighting potential is also well behaved, even when passing the custom grid from the surface-refined electric potential: (image: DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_3rdref_wp_customgrid_from_epot)
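The detection step described above (checking whether a given slice of point types contains any surface points) can be sketched generically; SURFACE and find_surface_intervals are hypothetical names for illustration, not the actual SolidStateDetectors internals:

```julia
const SURFACE = 0x04  # hypothetical point-type flag for inactive-layer points

# Flag which intervals along dimension `dim` touch surface points:
# a slice containing a surface point marks the intervals on both sides of it.
function find_surface_intervals(point_types::AbstractMatrix{UInt8}, dim::Int)
    n = size(point_types, dim)
    flags = falses(n - 1)              # one flag per interval between ticks
    for idx in 1:n
        slice = selectdim(point_types, dim, idx)
        if any(==(SURFACE), slice)     # this slice touches the surface layer
            idx > 1 && (flags[idx - 1] = true)
            idx < n && (flags[idx] = true)
        end
    end
    return flags
end

pt = fill(0x00, 5, 5)
pt[3, 2] = SURFACE
find_surface_intervals(pt, 1)          # flags intervals 2 and 3 along dim 1
```

Only the flagged intervals then receive additional ticks, which is what keeps the refinement local to the surface instead of growing the whole grid.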

This also makes the calculation faster (about 3 min) than the custom grid case (about 10 min), since in the latter the initial grid is refined from the very beginning. See the old behaviour for the custom grid:

(images: DeadLayerGrid_norefinements_custominitialgrid0.1mm_griddensity, Custom_grid_0.1mm_noref_nosurfref_epot)

The surface refinement is applied after three initial bulk refinements (the default refinement setup for the electric potential calculation) because only at that stage does the surface behaviour become well resolved. Therefore, this model requires the default bulk refinement setup in order to correctly apply the surface refinement: the default refinement_limits = [0.2, 0.1, 0.05] plus surface refinement. Applying the surface refinement after the first or second refinement step does not fully capture the surface layer, leading to incomplete refinement, as you can see:
  • Surface refinement after the 2nd refinement (this uses [0.2, 0.1, -surface ref-, 0.05] and min_spacing = (1e-4, 1e-4, 1e-4) m in the surface layer):
(image: DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_2ndref)
  • Initial surface refinement (this uses -surface ref- + [0.2, 0.1, 0.05] and min_spacing = (1e-4, 1e-4, 1e-4) m in the surface layer):
(image: DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfbefore)

This PR implements the default behaviour (3 refinements) plus the surface refinement, ensuring accurate resolution near surfaces. Additionally, the model now supports refining surface intervals down to 1e-5 m, preserving correct physical behaviour and avoiding the incorrect results (undepleted detectors) that can occur with custom grids.
  • New method, 0.02 mm spacing: correct behaviour
(images: DeadLayerGrid_norefinements_2e-5min_spacingdeadlayer_surfafter_3rdref_grids, DeadLayerGrid_norefinements_2e-5min_spacingdeadlayer_surfafter_3rdref_griddensity, DeadLayerGrid_norefinements_2e-5min_spacingdeadlayer_surfafter_3rdref_epot)

NOTE: this model has not been tested on detectors defined on Cartesian grids, but I wanted to ask for input first. I also wasn't sure whether this should have been a draft PR, so feel free to move it.

@fhagemann
Collaborator

fhagemann commented Feb 17, 2026

I need to see when I can find enough time to look at this in detail.
Are there any tests you could add here for the new code to check its correct behavior (when it should throw warnings, edge cases, etc.)?

@claudiaalvgar
Collaborator Author

Sure, no problem. This code is still a bit preliminary, since it doesn't handle the 3D case, i.e. it won't refine the phi axis (which I found a bit tricky), and I also haven't tried the model on Cartesian detectors, but I think it should work.

@fhagemann fhagemann added enhancement Improvement of existing features notation Notation and Interface convenience Improve user-friendliness labels Feb 18, 2026
@fhagemann fhagemann marked this pull request as draft February 18, 2026 16:04
@claudiaalvgar
Collaborator Author

Update: when doing some tests with other detectors, I saw that refining only inside the surface was sometimes not enough, since it didn't capture the edges very well. See this example for a BEGe detector, where the surface goes beyond the detector limit:
BEGe_DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_3rdref_epot

This was solved in my last commit by also refining in a margin around the surface layer (the variables extra_before and extra_after were added, which I set to 5 intervals around the surface). With this, we get a much better definition of the surface:

BEGe_DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_3rdref_epot_codecorrection BEGe_DeadLayerGrid_norefinements_1e-4min_spacingdeadlayer_surfafter_3rdref_griddensity_codecorrection
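The margin logic described above can be sketched generically; expand_flags is a hypothetical name, and clamping to the axis bounds is an assumption about how the actual implementation handles the edges:

```julia
# Hypothetical sketch of the margin logic: besides each flagged surface
# interval, also refine `extra_before`/`extra_after` neighbouring intervals,
# clamped to the valid interval range of the axis.
function expand_flags(flags::AbstractVector{Bool}; extra_before::Int = 5, extra_after::Int = 5)
    n = length(flags)
    out = falses(n)
    for i in findall(flags)
        out[max(1, i - extra_before):min(n, i + extra_after)] .= true
    end
    return out
end

flags = falses(10)
flags[6] = true                                          # surface only in interval 6
expand_flags(flags; extra_before = 2, extra_after = 1)   # intervals 4:7 flagged
```

Every flagged surface interval drags a fixed number of neighbouring intervals into the refinement, which is what recovers the detector edges in the BEGe example above.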

@claudiaalvgar
Collaborator Author

I've now tested this model on Cartesian-defined detectors and also on partial-phi detectors. The surface refinement is disabled for the phi angle. I also added tests for edge cases, and I think if all tests pass we can move this to normal PR review.

@claudiaalvgar claudiaalvgar marked this pull request as ready for review February 27, 2026 09:00
Comment thread src/Simulation/Simulation.jl Outdated
Comment thread examples/example_config_files/ivc_inactivelayer.yaml
Comment thread src/PotentialCalculation/SuccessiveOverRelaxation/CPU_innerloop.jl Outdated
Comment on lines +196 to +213
  # Cartesian: inner loop over x (1st dimension)
+ i1_safe  = clamp(i1,  1, size(ϵ_r, 1))
+ in1_safe = clamp(in1, 1, size(ϵ_r, 1))
+ i2_safe  = clamp(i2,  1, size(ϵ_r, 2))
+ in2_safe = clamp(in2, 1, size(ϵ_r, 2))
+ i3_safe  = clamp(i3,  1, size(ϵ_r, 3))
+ in3_safe = clamp(in3, 1, size(ϵ_r, 3))
  # The inner loop (over i1) is along the x-dimension (Cartesian case),
  # which is the 1st dimension for Cartesian coordinates: (x, y, z)
  return @inbounds begin
-     ϵ_r[ in1, in2, in3 ],
-     ϵ_r[ in1, in2, i3 ],
-     ϵ_r[ in1, i2, in3 ],
-     ϵ_r[ in1, i2, i3 ],
-     ϵ_r[ i1, in2, in3 ],
-     ϵ_r[ i1, in2, i3 ],
-     ϵ_r[ i1, i2, in3 ],
-     ϵ_r[ i1, i2, i3 ]
+     ϵ_r[in1_safe, in2_safe, in3_safe],
+     ϵ_r[in1_safe, in2_safe, i3_safe],
+     ϵ_r[in1_safe, i2_safe, in3_safe],
+     ϵ_r[in1_safe, i2_safe, i3_safe],
+     ϵ_r[ i1_safe, in2_safe, in3_safe],
+     ϵ_r[ i1_safe, in2_safe, i3_safe],
+     ϵ_r[ i1_safe, i2_safe, in3_safe],
+     ϵ_r[ i1_safe, i2_safe, i3_safe]
Collaborator


Same thing here.

Collaborator Author


I still need to implement this

Comment thread src/PotentialCalculation/Refinement.jl
Comment thread src/Simulation/Simulation.jl Outdated
Comment thread src/Simulation/Simulation.jl Outdated
Comment thread src/Simulation/Simulation.jl Outdated
Comment thread src/Simulation/Simulation.jl Outdated
Comment thread src/Simulation/Simulation.jl Outdated
@claudiaalvgar
Collaborator Author

Since there were many changes suggested, I commented on the ones I've added in the new commits to keep track of them.

@claudiaalvgar
Collaborator Author

I think I've now implemented all the changes; let me know if there's still something missing or to improve.

Comment thread src/Simulation/Simulation.jl Outdated

"""
refine_surface!(sim::Simulation{T};
min_spacing::NTuple{3,T} = (T(1e-4), T(1e-4), T(1e-4)),
Collaborator


On my end, it still looks like this is not done (?)

Suggested change
min_spacing::NTuple{3,T} = (T(1e-4), T(1e-4), T(1e-4)),
min_spacing::NTuple{3,T} = (T(1e-4), T(1e-4), T(1e-4));

Comment thread src/World/World.jl Outdated
Comment thread src/World/World.jl
Comment thread src/World/World.jl Outdated
Comment thread src/Simulation/Simulation.jl Outdated

old_grid = sim.electric_potential.grid

# Enforce minimum and maximum spacing for surface refinement (10 µm < min_spacing < 1 mm)
Collaborator


I think it would be nice to make this depend on the dimensions of the world.
I wouldn't be surprised if smaller/larger worlds would be better off with smaller/larger values.

Comment thread src/Simulation/Simulation.jl Outdated
for d in 1:3
if length(old_grid.axes[d].ticks) != 1
if min_spacing[d] < T(1e-5) || min_spacing[d] > T(1e-3)
@warn "min_spacing[$d] = $(min_spacing[d]) m is out of bounds (1e-5 m < min_spacing < 1e-3 m). Higher values don't require surface refinement"
Collaborator


See previous comment

Comment thread src/PotentialCalculation/SuccessiveOverRelaxation/CPU_innerloop.jl Outdated
@claudiaalvgar
Collaborator Author

I needed to introduce the _safe indices in the get_ϵ_of_oktant functions to prevent out-of-bounds access. For very small surface refinements, the code probes points slightly beyond the limits, so some indices like in1, in2, in3 can fall outside the valid range. I don't know why the code behaves like this, but I tried other solutions and none of them worked.
Clamping just forces those indices back inside the grid so we can safely read ϵᵣ. The issue only appears for small refinements, because for the other refinements the boundaries stay within limits. You can reproduce the issue by running with spacing_surface_refinement: [1e-5, 1e-5, 1e-5] in ivc_inactivelayer.yaml and the default get_ϵ_of_oktant functions (it will give a SegFault after approximately an hour, since calculating the electric potential is slow for very small grids). I'm open to other solutions; I just couldn't find a better one given the logic of the code.
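A minimal illustration of what the clamping does (a toy sketch with made-up sizes and indices, not the actual get_ϵ_of_oktant code):

```julia
# Toy sketch of the clamping fix.
ϵ_r = ones(Float64, 4, 1, 6)          # toy permittivity array
i1, i2, i3 = 5, 1, 7                  # probe indices just past the array bounds
i1_safe = clamp(i1, 1, size(ϵ_r, 1))  # 5 -> 4
i3_safe = clamp(i3, 1, size(ϵ_r, 3))  # 7 -> 6
ϵ_r[i1_safe, i2, i3_safe]             # safe read; the unclamped ϵ_r[i1, i2, i3]
                                      # would be out of bounds, and undefined
                                      # behaviour (segfault) under @inbounds
```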

Comment thread src/World/World.jl
Comment thread src/World/World.jl Outdated
Comment thread src/World/World.jl Outdated
Comment thread examples/example_config_files/ivc_inactivelayer.yaml
@fhagemann
Collaborator

> The _safe indices in get_ϵ_of_oktant functions I needed to introduce to prevent out-of-bounds access. […] I'm open for other solutions I just couldn't find any other better given the logic of the code

And this is because, instead of expecting at most one tick going "outside" of the grid, you might now have more than one?

@fhagemann
Collaborator

I just want to make sure whether we expect accessing ϵ_r out of bounds or not…
Sure, this might ensure that we will not get an error, but maybe we shouldn't exceed the bounds of ϵ_r at all in the first place 🤷‍♂️

Comment thread src/Simulation/Simulation.jl Outdated
# Keep φ-axis type but close it for interpolation
_get_closed_axis(old_grid.axes[i])
else
_refine_axis_surface( old_grid.axes[i], surface_intervals[i], min_spacing[i],)
Collaborator


Right now, there is no possibility to pass extra_before and extra_after to _refine_axis_surface when calling refine_surface!.
Is this intended?

@fhagemann
Collaborator

Do you have an example (some config file + some simulation settings when running calculate_electric_potential!) that would cause the calculation to fail when not including these "safe" clamps for ϵ_r?
I might want to have a short look at that myself to see what's going on :)

Comment thread src/Simulation/Simulation.jl Outdated
@fhagemann
Collaborator

We tracked down the out-of-bounds error when accessing ϵ to the grid not having an even number of grid ticks.
I recall that either all axes, or just one axis, needs to have an even number of ticks for the red-black division of the grid to work. Having an even number of ticks resolves this issue.
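For intuition on why the parity of the tick count matters for the red-black (checkerboard) decomposition, a toy count of the two sublattices along one axis (a generic sketch, not the SOR implementation itself):

```julia
# Count the red ("odd") and black ("even") sites along one axis of n ticks.
redblack_counts(n::Int) = (count(isodd, 1:n), count(iseven, 1:n))

redblack_counts(10)  # (5, 5): even tick count, equal red/black halves
redblack_counts(11)  # (6, 5): odd tick count, unequal halves
```

With an even tick count the two half-sweeps operate on sublattices of identical size, so the indexing pattern stays uniform; an odd count produces the mismatched edge case that pushed indices out of bounds.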

Original code:
Screenshot from 2026-04-15 07-35-00

Forcing the z-axis to have an even number of ticks:
Screenshot from 2026-04-15 07-35-56

I can look into implementing a fix for this later today or this week.

Comment thread src/PotentialCalculation/Refinement.jl Outdated
Comment thread src/PotentialCalculation/SuccessiveOverRelaxation/CPU_innerloop.jl Outdated
Comment thread src/PotentialCalculation/Refinement.jl
Comment thread src/PotentialCalculation/Refinement.jl
@fhagemann fhagemann force-pushed the RefineSurface_Grid branch from f43290c to e37bdee Compare April 16, 2026 18:18
@fhagemann fhagemann force-pushed the RefineSurface_Grid branch from e37bdee to d135b0d Compare April 16, 2026 18:20
@fhagemann fhagemann force-pushed the RefineSurface_Grid branch from cf7e851 to 9e6b5c0 Compare April 16, 2026 18:36
Comment thread src/Simulation/Simulation.jl
@claudiaalvgar
Collaborator Author

The new changes look good to me and the code behaves as intended, so I think this is good to go on my side.

@fhagemann
Collaborator

Okay, I will deal with min/max distance and also look into performance, but will merge this afterwards!

@fhagemann fhagemann force-pushed the RefineSurface_Grid branch from 07450a7 to d0b3843 Compare April 17, 2026 16:20
Comment thread src/PotentialCalculation/Refinement.jl Outdated
Comment on lines -196 to -197
# Create flags for all intervals to refine
refine_flags = copy(surface_intervals)
Collaborator


surface_intervals is not used anymore after calling this function, so we should be fine modifying this array instead of allocating more memory.

Comment thread src/PotentialCalculation/Refinement.jl Outdated
Comment on lines +207 to +208
- # Merge consecutive intervals to refine
- merged = Vector{UnitRange{Int}}()
- i = 1
- while i <= n_int
-     if refine_flags[i]
-         j = i
-         while j < n_int && refine_flags[j+1]
-             j += 1
-         end
-         push!(merged, i:j)
-         i = j + 1
-     else
-         i += 1
-     end
- end

  # Compute number of points to add per interval
  ns = zeros(Int, n_int)
  tmp = (zero(T), 1)
- for r in merged
-     for i in r
+ for i in eachindex(refine_flags)
+     if refine_flags[i]
Collaborator


Instead of creating a merged array of unit ranges, we can merge those two for-loops into one and just perform interval refinement for the i that have refine_flags set.

Comment thread src/PotentialCalculation/Refinement.jl Outdated
Comment on lines +245 to +233
- sub_widths = [(old_ticks[i+1] - old_ticks[i]) / (ns[i]+1) for i in 1:n_int]
  ticks = Vector{T}(undef, length(old_ticks) + sum(ns))
  i_tick = 1
  for j in 1:n_int
      ticks[i_tick] = old_ticks[j]
+     sub_width = (old_ticks[j+1] - old_ticks[j]) / (ns[j]+1)
      for k in 1:ns[j]
          i_tick += 1
-         ticks[i_tick] = old_ticks[j] + k*sub_widths[j]
+         ticks[i_tick] = old_ticks[j] + k*sub_width
Collaborator


No need to allocate memory for sub_widths if the values can just be calculated "on-the-fly" in the for-loop

@fhagemann fhagemann force-pushed the RefineSurface_Grid branch from 5e965f0 to c9bf933 Compare April 17, 2026 22:08
Comment thread src/Simulation/Simulation.jl Outdated
Comment on lines +848 to +860
- max_spacing = collect(max_spacing)
-
- for d in 1:3
-     if length(old_grid.axes[d].ticks) != 1
-         axis_width = rightendpoint(old_grid.axes[d].interval) - leftendpoint(old_grid.axes[d].interval)
-         min_allowed = axis_width * T(5e-5)
-         max_allowed = axis_width * T(1e-1)
-
-         val = T(max_spacing[d])
-
-         if val < min_allowed || val > max_allowed
-             @warn "max_spacing[$d] = $val m is out of bounds (min_allowed = $min_allowed, max_allowed = $max_allowed)."
-             max_spacing[d] = clamp(val, min_allowed, max_allowed)
-         end
-     end
- end
-
- max_spacing = ntuple(d -> length(old_grid.axes[d].ticks) == 1 ? T(0.0) : max_spacing[d], 3)
+ max_spacing = ntuple(d -> if length(old_grid.axes[d].ticks) != 1
+     axis_width = width(old_grid.axes[d].interval)
+     min_allowed = axis_width * T(5e-5)
+     max_allowed = axis_width * T(1e-1)
+     val = T(max_spacing[d])
+     if val < min_allowed || val > max_allowed
+         @warn "max_spacing[$d] = $val m is out of bounds (min_allowed = $min_allowed, max_allowed = $max_allowed)."
+     end
+     clamp(val, min_allowed, max_allowed)
+ else
+     zero(T)
+ end, 3)
Collaborator


This avoids allocations by not creating a Vector for max_spacing, letting it remain a Tuple throughout the function.
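A minimal contrast of the two patterns (generic Julia, not the PR code itself):

```julia
max_spacing = (1e-4, 1e-4, 1e-4)   # NTuple{3, Float64}, a plain immutable value

# Vector version: `collect` heap-allocates so the entries can be mutated in place.
v = collect(max_spacing)
v[2] = 2e-4

# Tuple version: `ntuple` builds a new immutable tuple with the adjusted entry,
# so no heap allocation is needed and the type stays NTuple{3, Float64}.
t = ntuple(d -> d == 2 ? 2e-4 : max_spacing[d], 3)
t == (1e-4, 2e-4, 1e-4)            # true
```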

@claudiaalvgar
Collaborator Author

Final comment: I tested the code with the new implementations on CPU and GPU, and the grid results are as expected. I leave them here for future reference:

  • max_spacing = 1e-4 m, IVCIlayer on GPU: Elapsed time: 47.767041562 seconds. GRID: Grid{Float64, 3, Cylindrical}(0.0 .. 0.04 - length = 348, 0.0 .. 0.0 - length = 1, -0.01 .. 0.09 - length = 902) (image: ICVIlayer_1e-4_GPU_DeleteClampChanges)
  • max_spacing = 2.5e-5 m, IVCIlayer on GPU: Elapsed time: 80.683083286 seconds. GRID: Grid{Float64, 3, Cylindrical}(0.0 .. 0.04 - length = 1292, 0.0 .. 0.0 - length = 1, -0.01 .. 0.09 - length = 3438) (image: ICVIlayer_2.5e-5_GPU_DeleteClampChanges)
  • max_spacing = 1e-5 m, IVCIlayer on GPU: Elapsed time: 275.473297317 seconds. GRID: Grid{Float64, 3, Cylindrical}(0.0 .. 0.04 - length = 3152, 0.0 .. 0.0 - length = 1, -0.01 .. 0.09 - length = 8520) (image: ICVIlayer_1e-5_GPU_DeleteClampChanges)
  • max_spacing = 1e-4 m, IVCIlayer on CPU (Threads = 45): Elapsed time: 28.721281699 seconds. GRID: Grid{Float64, 3, Cylindrical}(0.0 .. 0.04 - length = 348, 0.0 .. 0.0 - length = 1, -0.01 .. 0.09 - length = 902)
  • max_spacing = 2.5e-5 m, IVCIlayer on CPU (Threads = 45): Elapsed time: 190.124053047 seconds. GRID: Grid{Float64, 3, Cylindrical}(0.0 .. 0.04 - length = 1292, 0.0 .. 0.0 - length = 1, -0.01 .. 0.09 - length = 3438)
  • max_spacing = 1e-5 m, IVCIlayer on CPU (Threads = 45): Elapsed time: 1213.737185377 seconds. GRID: Grid{Float64, 3, Cylindrical}(0.0 .. 0.04 - length = 3152, 0.0 .. 0.0 - length = 1, -0.01 .. 0.09 - length = 8520)

@fhagemann fhagemann merged commit 471a446 into JuliaPhysics:main Apr 21, 2026
10 checks passed