Docker
SD.Next includes basic Dockerfiles for use with the following platforms:
- nVidia CUDA
- AMD ROCm
- Intel IPEX
- OpenVINO
Other systems may require different configurations and base images, but the principle remains the same
The goal of containerized SD.Next is to provide a fully stateless environment that can be easily deployed and scaled
It is recommended to build your own docker image to include any customizations or extensions you may require
The build process is simple and fast: typically around one minute for the initial build (plus the time required to download the base image and any dependencies, which can take a while depending on your internet connection), and just a couple of seconds for any incremental builds using previously cached images
If you want to skip the build process, you can use prebuilt images provided by community members, which are offered on a best-effort basis (see Prebuilt below)
Prerequisites
Important
If you already have a functional Docker installation on your host, you can skip this section
See also: the Manual Install section below for an example
- Docker itself: https://docs.docker.com/get-started/get-docker/
- nVidia Container Toolkit to enable GPU support: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
Build Image
The first build will also need to download the base image, which can take a while depending on your connection
If you make changes to the Dockerfile or update SD.Next, you will need to rebuild the image
Important
The build should be done on a system where SD.Next has been started at least once, so that all required submodules are downloaded before the docker copy step
Tip
If you want to include any extensions in the docker image, install/clone them into the /extensions folder before building the image, as shown below
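For example, to bake an extension into the image before building (the extension URL below is purely illustrative, substitute your own; SD_FOLDER is the same variable used in the build commands below):

git clone https://github.com/example-user/example-extension $SD_FOLDER/extensions/example-extension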
CUDA
The SD.Next docker template is based on the official base image with python==3.11.11, torch==2.6.0 and cuda==12.6
Base image pytorch/pytorch:2.6.0-cuda12.6-cudnn9-runtime is 3.24GB
The full resulting SD.Next image is ~8.4GB and contains all required dependencies
SD_FOLDER=/home/sdnext # path to sdnext home folder
docker build \
--debug \
--tag sdnext/sdnext-cuda \
--progress=plain \
--file $SD_FOLDER/configs/Dockerfile.cuda \
$SD_FOLDER
ROCm
Base image rocm/dev-ubuntu-24.04 is 3.15GB
The full resulting SD.Next image is ~23.15GB and contains all required dependencies
SD_FOLDER=/home/sdnext # path to sdnext home folder
docker build \
--debug \
--tag sdnext/sdnext-rocm \
--progress=plain \
--file $SD_FOLDER/configs/Dockerfile.rocm \
$SD_FOLDER
IPEX
Base image ubuntu (https://hub.docker.com/_/ubuntu) is 1.1GB
The full resulting SD.Next image is ~9.1GB and contains all required dependencies
SD_FOLDER=/home/sdnext # path to sdnext home folder
docker build \
--debug \
--tag sdnext/sdnext-ipex \
--progress=plain \
--file $SD_FOLDER/configs/Dockerfile.ipex \
$SD_FOLDER
OpenVINO
Base image ubuntu (https://hub.docker.com/_/ubuntu) is 1.1GB
The full resulting SD.Next image is ~3.6GB
SD_FOLDER=/home/sdnext # path to sdnext home folder
docker build \
--debug \
--tag sdnext/sdnext-openvino \
--progress=plain \
--file $SD_FOLDER/configs/Dockerfile.openvino \
$SD_FOLDER
Prebuilt
SD.Next community members have provided prebuilt docker images for various platforms:
- nVidia CUDA: vladmandic/sdnext-cuda:latest (compressed size: 3.98GB)
- AMD ROCm: disty0/sdnext-rocm:latest (compressed size: 1.05GB)
- Intel IPEX: disty0/sdnext-ipex:latest (compressed size: 0.4GB)
- OpenVINO: disty0/sdnext-openvino:latest (compressed size: 0.4GB)
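For example, to pull and run the community CUDA image directly (a minimal sketch; for persistent data and models you will typically also want the --mount parameters shown in the Run Local section below):

docker pull vladmandic/sdnext-cuda:latest
docker run --name sdnext-container --rm --gpus all --publish 7860:7860 --detach vladmandic/sdnext-cuda:latest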
Notes
Log Example
[+] Building 54.1s (12/12) FINISHED docker:default
=> [internal] load build definition from Dockerfile.cuda 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load metadata for docker.io/pytorch/pytorch:2.6.0-cuda12.6-cudnn9-runtime 0.2s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 366B 0.0s
=> [1/7] FROM docker.io/pytorch/pytorch:2.6.0-cuda12.6-cudnn9-runtime@sha256:xxx 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 279.77kB 0.1s
=> CACHED [2/7] RUN ["apt-get", "-y", "update"] 0.0s
=> CACHED [3/7] RUN ["apt-get", "-y", "install", "git", "build-essential", "google-perftools", "curl"] 0.0s
=> CACHED [4/7] RUN ["/usr/sbin/ldconfig"] 0.0s
=> CACHED [5/7] COPY . /app 0.0s
=> CACHED [6/7] WORKDIR /app 0.0s
=> [7/7] RUN ["python", "/app/launch.py", "--debug", "--uv", "--use-cuda", "--test", "--optional"] 51.0s
=> exporting to image 2.8s
=> => exporting layers 2.8s
=> => writing image sha256:xxx 0.0s
=> => naming to docker.io/sdnext/sdnext-cuda 0.0s
State
As mentioned, the goal of the SD.Next docker deployment is fully stateless operation.
By default, a SD.Next docker container is stateless: any data stored inside the container is lost when the container stops.
All state items and outputs will be read from and written to /server/data
This includes:
- Configuration files: config.json, ui-config.json
- Cache information: cache.json, metadata.json
- Outputs of all generated images: outputs/
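Based on the items listed above, the mapped data root might look like this after a few generations (a sketch, not an exhaustive listing):

/server/data
├── config.json
├── ui-config.json
├── cache.json
├── metadata.json
└── outputs/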
Persistence
If you plan to customize your SD.Next deployment with additional extensions,
you may want to create and map a docker volume to avoid constant reinstalls on each startup, for example as sketched below.
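A minimal sketch using a named volume (the volume name is arbitrary; /app/extensions assumes the default in-container application path used by the Dockerfiles):

docker volume create sdnext-extensions
docker run \
  --name sdnext-container \
  --rm \
  --gpus all \
  --publish 7860:7860 \
  --mount type=volume,source=sdnext-extensions,target=/app/extensions \
  --detach \
  sdnext/sdnext-cuda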
Healthchecks
By default, the SD.Next docker container does not include Docker healthchecks, but they can be enabled:
simply uncomment the HEALTHCHECK line in the Dockerfile and rebuild the image.
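For reference, a typical healthcheck might look like this (a sketch only; the actual line in the Dockerfile may differ, and this assumes curl is available in the image and SD.Next is listening on port 7860):

HEALTHCHECK --interval=30s --timeout=10s --start-period=60s CMD curl --fail http://localhost:7860/ || exit 1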
Run Local
Containers built using the above commands are local and can be used directly on the host system
Note
This is an EXAMPLE run command, modify as needed for your environment!
- Republishes port 7860 from the container to the host directly.
  You may need to remap ports if you have multiple containers running on the same host.
- Maps local server folder /server/data to be used by the container as the data root.
  This is where all state items and outputs will be read from and written to.
- Maps local server folder /server/models to be used by the container as the model root.
  This is where models will be read from and written to.
- Locations /mnt/data and /mnt/models are configured inside the Dockerfile itself,
  so either edit those values and rebuild the container or make sure those are available.
- If you're using network-attached storage instead of local folders,
  you can use it directly and skip mounting local folders.
docker run \
--name sdnext-container \
--rm \
--gpus all \
--publish 7860:7860 \
--mount type=bind,source=/server/models,target=/mnt/models \
--mount type=bind,source=/server/data,target=/mnt/data \
--detach \
sdnext/sdnext-cuda
Warning
Parameter --gpus all is required to expose the nVidia CUDA GPU from the parent host to the container
For other platforms, refer to the official documentation and use the appropriate parameters
For example, AMD uses --device /dev/dri and --device /dev/kfd, as sketched below
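A run sketch for the ROCm image using those device mappings (some AMD hosts additionally require group membership for GPU access, e.g. via --group-add; check your distribution's ROCm documentation):

docker run \
  --name sdnext-container \
  --rm \
  --device /dev/dri \
  --device /dev/kfd \
  --publish 7860:7860 \
  --mount type=bind,source=/server/models,target=/mnt/models \
  --mount type=bind,source=/server/data,target=/mnt/data \
  --detach \
  sdnext/sdnext-rocm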
Tip
Parameter --detach will run the container in the background
If you want to troubleshoot startup and see logs directly in the console, remove the --detach parameter
A typical SD.Next container will start in ~6sec and will be ready to accept connections on port 7860
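To check that the server is reachable, you can probe the published port from the host (a simple sanity check; an HTTP 200 response means the UI is up):

curl --silent --output /dev/null --write-out '%{http_code}\n' http://localhost:7860/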
Publish
If you want to share the container with others or deploy it on a cloud compute platform,
you will need to publish it to a container registry
There are many container registries, with Docker Hub being the most popular and widely used
Alternatively, check out GitHub Packages, AWS ECR, Azure ACR, Google AR, Quay, etc
Example using Docker Hub
1. Create an account on Docker Hub
2. Create a personal access token in your Docker Hub account
3. Login to Docker Hub from your terminal
docker login --username <username> --password <access-token>
4. Tag your local container with your Docker Hub username and repository name
docker tag sdnext/sdnext-cuda <username>/sdnext-cuda
5. Push your container to Docker Hub
docker push <username>/sdnext-cuda
Your container is now available on Docker Hub and can be seen at
https://hub.docker.com/repository/docker/<username>/sdnext-cuda/
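Alongside latest, you may also want to push an explicitly versioned tag so that cloud deployments can pin a known-good image (the version string below is illustrative):

docker tag sdnext/sdnext-cuda <username>/sdnext-cuda:v1.0
docker push <username>/sdnext-cuda:v1.0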
Run Cloud
Once your container is published, you can run it on any cloud compute platform that supports Docker containers,
for example RunPod or AWS ECS
Example using RunPod:
1. Login to RunPod
2. Create Pod
1. Select platform with desired GPU
2. Edit template:
> Container image: <username>/sdnext-cuda:latest
> Expose HTTP port: 7860
3. Deploy Pod
Wait for deployment to complete
4. Connect to Pod
RunPod will provide a public hostname over which you can access your SD.Next instance
Extra
Additional docker commands that may be useful
Tip
Inspect image
docker image inspect sdnext/sdnext-cuda
Tip
Clean Up
docker image ls --all
docker image rm <id>
docker builder prune --force
Tip
List Containers
docker container ls --all
docker ps --all
Tip
View Log
docker container logs --follow <id>
Tip
Stop Container
docker container stop <id>
Tip
Test GPU
docker info
docker run --name cudatest --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Tip
Test Torch
docker pull pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
docker run --name pytorch --rm --gpus all -it pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
Manual Install
Warning
URLs below are examples for Ubuntu 24.10; check your Linux distribution and version and use the appropriate packages instead
Docker
wget https://download.docker.com/linux/ubuntu/dists/oracular/pool/stable/amd64/containerd.io_1.7.25-1_amd64.deb
wget https://download.docker.com/linux/ubuntu/dists/oracular/pool/stable/amd64/docker-ce_27.5.1-1~ubuntu.24.10~oracular_amd64.deb
wget https://download.docker.com/linux/ubuntu/dists/oracular/pool/stable/amd64/docker-ce-cli_27.5.1-1~ubuntu.24.10~oracular_amd64.deb
wget https://download.docker.com/linux/ubuntu/dists/oracular/pool/stable/amd64/docker-buildx-plugin_0.20.0-1~ubuntu.24.10~oracular_amd64.deb
wget https://download.docker.com/linux/ubuntu/dists/oracular/pool/stable/amd64/docker-compose-plugin_2.32.4-1~ubuntu.24.10~oracular_amd64.deb
sudo dpkg -i *.deb
sudo groupadd docker
sudo usermod -aG docker $USER
systemctl status docker
systemctl status containerd
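Note that the usermod group change only takes effect after you log out and back in (or run newgrp docker); after that, you can optionally verify the installation with the standard hello-world image:

newgrp docker
docker run --rm hello-world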
nVidia Container ToolKit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark