```
docker run -it ghcr.io/mesh-adaptation/firedrake-parmmg:latest
```
For more information on how to run docker containers, see the [official documentation](https://docs.docker.com/engine/containers/run/). For example, since all data inside a container is only accessible from inside the container, it is useful to create [filesystem mounts](https://docs.docker.com/engine/containers/run/#filesystem-mounts) to be able to access data from outside the container.
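As a minimal sketch of such a mount (the host directory `~/firedrake-shared` and container path `/home/firedrake/shared` are hypothetical; substitute paths that suit your setup):

```
# Create a directory on the host to share with the container (hypothetical path)
mkdir -p ~/firedrake-shared

# Bind-mount it into the container with -v HOST_PATH:CONTAINER_PATH.
# Files written to /home/firedrake/shared inside the container persist
# in ~/firedrake-shared on the host after the container exits.
docker run -it \
  -v ~/firedrake-shared:/home/firedrake/shared \
  ghcr.io/mesh-adaptation/firedrake-parmmg:latest
```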
### Using GPUs inside a container
Passing the `--gpus` flag to `docker run` allows us to use GPUs inside containers (see [Docker documentation](https://docs.docker.com/reference/cli/docker/container/run/#gpus) for details). For example, to use all available GPUs, run the following:
```
docker run -it --gpus all ghcr.io/mesh-adaptation/firedrake-parmmg:latest
```
Once inside the container, we should first check that the GPUs have been detected. For NVIDIA GPUs, we may do so by running `nvidia-smi` from the command line. If successful, a summary of your NVIDIA GPUs is displayed, including the driver version and the highest supported CUDA version.

Now you may proceed with installing GPU-supported software as normal. For example, to [install PyTorch](https://pytorch.org/get-started/locally/) with CUDA version 12.6 (as reported by `nvidia-smi` above), we may run:
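A sketch of the install command, assuming the `cu126` wheel index that the PyTorch "get started" page lists at the time of writing (check that page for the command matching your CUDA version):

```
# Install PyTorch built against CUDA 12.6 from the dedicated wheel index
pip3 install torch torchvision torchaudio \
  --index-url https://download.pytorch.org/whl/cu126
```

To confirm that PyTorch can see the GPUs, run `python3 -c "import torch; print(torch.cuda.is_available())"`, which should print `True` when the container was started with the `--gpus` flag.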