Commit 4e5e95b (1 parent: 520e93a)

fix image path in readme.md

1 file changed: README.md (5 additions, 5 deletions)
@@ -251,23 +251,23 @@ Using GPUs, torchquad scales particularly well with integration methods that off

 <!-- TODO Update plot links -->
 ### Convergence Analysis
-![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_convergence.png?raw=true)
+![](https://github.com/esa/torchquad/blob/main/resources/torchquad_convergence.png?raw=true)
 *Convergence comparison across challenging test functions from 1D to 15D. GPU-accelerated torchquad methods demonstrate strong performance, particularly for high-dimensional integration where scipy's nquad becomes computationally infeasible. Beyond 1D, torchquad significantly outperforms scipy in efficiency.*

 ### Runtime vs Error Efficiency
-![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_runtime_vs_error.png?raw=true)
+![](https://github.com/esa/torchquad/blob/main/resources/torchquad_runtime_vs_error.png?raw=true)
 *Runtime-error trade-offs across dimensions. Lower-left positions indicate better performance. While scipy's traditional methods are competitive for simple 1D problems, torchquad's GPU acceleration provides orders of magnitude better performance for multi-dimensional integration, achieving both faster computation and lower errors.*

 ### Scaling Performance
-![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_scaling_analysis.png?raw=true)
+![](https://github.com/esa/torchquad/blob/main/resources/torchquad_scaling_analysis.png?raw=true)
 *Scaling investigation across problem sizes and dimensions of the different methods in torchquad.*

 ### Vectorized Integration Speedup
-![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_vectorized_speedup.png?raw=true)
+![](https://github.com/esa/torchquad/blob/main/resources/torchquad_vectorized_speedup.png?raw=true)
 *Strong performance gains when evaluating multiple integrands simultaneously. The vectorized approach shows substantial speedups (up to 200x) compared to sequential evaluation, making torchquad ideal for parameter sweeps, uncertainty quantification, and machine learning applications requiring batch integration.*

 ### Framework Comparison
-![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_framework_comparison.png?raw=true)
+![](https://github.com/esa/torchquad/blob/main/resources/torchquad_framework_comparison.png?raw=true)
 *Cross-framework performance comparison for 1D integration using Monte Carlo and Simpson methods. Demonstrates torchquad's consistent API across PyTorch, TensorFlow, JAX, and NumPy backends, with GPU acceleration providing significant performance advantages for large numbers of function evaluations. All frameworks achieve similar accuracy while showcasing the computational benefits of GPU acceleration for parallel integration methods.*

 ### Running Benchmarks
