ZLUDA Support
ZLUDA (CUDA Wrapper) for AMD GPUs in Windows
Warning
ZLUDA does not fully support PyTorch in its official build, so ZLUDA support is tricky and unstable, and remains limited at this time. Please don't create issues regarding ZLUDA on GitHub; instead, feel free to reach out via the ZLUDA thread in the help channel on Discord.
Installing ZLUDA for AMD GPUs in Windows
Note
This guide assumes you have Git and Python installed, and are comfortable using the command prompt, navigating Windows Explorer, renaming files and folders, and working with zip files.
If you have an integrated AMD GPU (iGPU), you may need to disable it, or use the `HIP_VISIBLE_DEVICES` environment variable.
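For example, a minimal Command Prompt sketch that restricts HIP to a single device before launching; the index `0` is an assumption, since device ordering varies between systems:

```
:: Hypothetical example: expose only the first HIP device (usually the
:: discrete GPU) for the current Command Prompt session; adjust the index
:: if your discrete card enumerates differently.
set HIP_VISIBLE_DEVICES=0
```

Variables set with `set` apply only to that Command Prompt session; use the Windows environment variable settings if you want the value to persist.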
Install Visual C++ Runtime
Note: Most systems already have this, since it ships with a lot of games, but there's no harm in running the installer anyway.
Grab the latest version of Visual C++ Runtime from https://aka.ms/vs/17/release/vc_redist.x64.exe (this is a direct download link) and then run it.
If you get the options to Repair or Uninstall, then you already have it installed and can click Close. Otherwise, install it.
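If you prefer the command line, the manual download can be sketched roughly as follows, assuming `curl` (bundled with recent Windows 10/11 builds):

```
:: Download the redistributable (same direct link as above) and run the installer.
curl -L -o "%TEMP%\vc_redist.x64.exe" https://aka.ms/vs/17/release/vc_redist.x64.exe
"%TEMP%\vc_redist.x64.exe"
```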
Install ZLUDA
ZLUDA is now auto-installed, and automatically added to PATH, when starting `webui.bat` with `--use-zluda`.
Install HIP SDK
Install HIP SDK 5.7.1 from https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html
As long as your regular AMD GPU driver is up to date, you don't need to install the PRO driver that the HIP SDK installer suggests.
Note: SD.Next supports HIP SDK 6.1.x, but its stability and functionality have not been validated yet.
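As a quick sanity check after installing, you can confirm that the `HIP_PATH` environment variable (used later in this guide) was set; the expected value below assumes a default 5.7 install:

```
:: Open a new Command Prompt after installing the SDK, then:
echo %HIP_PATH%
:: Expected output for a default install (assumption): C:\Program Files\AMD\ROCm\5.7\
```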
Replace HIP SDK library files for unsupported GPU architectures
Go to https://rocm.docs.amd.com/projects/install-on-windows/en/develop/reference/system-requirements.html and find your GPU model.
If your GPU model has a ✅ in both columns, skip ahead to Install SD.Next.
If your GPU model has a ❌ in the HIP SDK column, or if your GPU isn't listed, follow the instructions below (an equivalent command-line sketch follows the list):
1. Open Windows Explorer and copy and paste `C:\Program Files\AMD\ROCm\5.7\bin\rocblas` into the location bar (assuming you've installed the HIP SDK in the default location and Windows is located on C:).
2. Make a copy of the `library` folder, for backup purposes.
3. Download one of the following files. (Note: You may have to install 7-Zip to unzip the .7z files. Thanks to FremontDango, the alternate libraries for gfx1031 and gfx1032 GPUs are about 50% faster.)
   - If you have a 6700, 6700 XT, or 6750 XT (gfx1031) GPU, download Optimised_ROCmLibs_gfx1031.7z.
   - If you have a 6600, 6600 XT, or 6650 XT (gfx1032) GPU, download Optimised_ROCmLibs_gfx1032.7z.
   - For all other unsupported GPUs, download ROCmLibs.7z.
4. Open the downloaded archive.
5. Drag and drop the `library` folder from the archive into `%HIP_PATH%bin\rocblas` (the folder you opened in step 1), overwriting any files there.
6. Reboot the PC.
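If you'd rather do the backup and overwrite from the command line, here is a rough sketch of the steps above. It assumes the default HIP SDK location, an elevated (administrator) Command Prompt since the target sits under Program Files, and that the downloaded archive was extracted to a `ROCmLibs` folder in Downloads; adjust the source path to wherever you unzipped it.

```
:: Back up the stock rocBLAS kernel library, then overwrite it with the
:: extracted replacement. Run from an elevated Command Prompt.
cd /d "%HIP_PATH%bin\rocblas"
xcopy library library.bak\ /E /I /Y
xcopy "%USERPROFILE%\Downloads\ROCmLibs\library" library\ /E /I /Y
```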
If your GPU model is not in the HIP SDK column and not covered by the downloads above, follow the instructions in the ROCm Support guide to build your own RocblasLibs.
(Note: Building your own libraries is not for the faint of heart.)
Install SD.Next
Using Windows Explorer, navigate to a place you'd like to install SD.Next. This should be a folder which your user account has read/write/execute access to. Installing SD.Next in a directory which requires admin permissions may cause it to not launch properly.
Note: Refrain from installing SD.Next into the Program Files, Users, or Windows folders (this includes the OneDrive folder and the Desktop), or into a folder whose name begins with a period (e.g. `.sdnext`).
The best place is on an SSD, for faster model loading.
In the location bar, type `cmd`, then hit [Enter]. This will open a Command Prompt window at that location.
Copy and paste the following commands into the Command Prompt window, one at a time:
`git clone https://github.com/vladmandic/automatic`
then
`cd automatic`
then
`webui.bat --use-zluda --debug --autolaunch`
Note: ZLUDA functions best in Diffusers Backend, where certain Diffusers-only options are available.
Compilation, Settings, and First Generation
After the UI starts, head on over to the System Tab (Standard UI) or the Settings Tab (Modern UI), then the Compute Settings category.
Set "Attention optimization method" to "Dynamic Attention BMM", then click Apply settings.
Now, try to generate something.
The first generation will take a fair while to compile (10-15 minutes, or even longer; some reports state over an hour), but this compilation should only need to be done once.
Note: The text `Compilation is in progress. Please wait...` will appear repeatedly; just be patient. Eventually your image will start generating.
Subsequent generations will be significantly quicker.
Upgrading ZLUDA
If you have problems with ZLUDA after updating SD.Next, upgrading ZLUDA may help.
- Remove the `.zluda` folder (a command-line sketch follows below).
- Launch the WebUI. The installer will download and install a newer ZLUDA.
※ You may have to wait for compilation again, as with the first generation.
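In Command Prompt terms, a minimal sketch, assuming the SD.Next folder is the `automatic` directory cloned earlier:

```
:: Run from inside the SD.Next install folder.
rmdir /s /q .zluda
webui.bat --use-zluda --debug --autolaunch
```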
PyTorch 2.4/2.5 (experimental)
The major blocker preventing us from upgrading PyTorch is hipBLASLt, which hasn't been released on Windows yet.
However, unofficial builds are available.
This section describes how to get the latest PyTorch rather than the default (2.3.1).
- Install HIP SDK 6.2. If you already have an older HIP SDK, uninstall it before installing 6.2.
- Remove the `.zluda` folder if it exists.
  If you have set the `ZLUDA` environment variable, download the latest nightly ZLUDA from here.
  If you built ZLUDA yourself, pull the latest commits of ZLUDA and rebuild with `--nightly`.
- Uninstall PyTorch if installed:
  `.\venv\Scripts\activate`
  `pip uninstall torch torchvision -y`
- Download and install an unofficial hipBLASLt build:
  - gfx1100, gfx1101, gfx1102, gfx1103, or gfx1150
  - gfx1030 (TBA)
- Launch the WebUI with the environment variable `ZLUDA_NIGHTLY=1` (see the sketch after this list).
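For example, from a Command Prompt inside the SD.Next folder (the variable applies only to that session):

```
:: Enable the nightly ZLUDA install path for this launch only.
set ZLUDA_NIGHTLY=1
webui.bat --use-zluda --debug --autolaunch
```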
By default, PyTorch 2.4.1 will be installed.
If you need PyTorch 2.5.1, install it manually via the commands below:
.\venv\Scripts\activate
pip install torch==2.5.1 torchvision --index-url https://download.pytorch.org/whl/cu118
Comparison (DirectML)
|  | DirectML | ZLUDA |
|---|---|---|
| Speed | Slower | Faster |
| VRAM Usage | More | Less |
| VRAM GC | ❌ | ✅ |
| Training | * | ✅ |
| Flash Attention | ❌ | ❌ |
| FFT | ❓ | ✅ |
| FFTW | ❓ | ❌ |
| DNN | ❓ | ❌ |
| RTC | ❓ | ⚠️ |
| Source Code | Closed | Open |
| Python | <=3.12 | Same as CUDA |
❓: unknown
⚠️: partially supported
*: known to be possible, but uses too much VRAM to train Stable Diffusion models/LoRAs/etc.
Compatibility
| DTYPE | Supported |
|---|---|
| FP64 | ✅ |
| FP32 | ✅ |
| FP16 | ✅ |
| BF16 | ✅ |
| LONG | ✅ |
| INT8 | ✅ |
| UINT8 | ✅* |
| INT4 | ❓ |
| FP8 | ⚠️ |
| BF8 | ⚠️ |
*: Not tested.