DirectML

SD.Next includes support for PyTorch-DirectML.

Important

DirectML support is marked End-of-Life (EOL) and will be removed in a future release. torch-directml has not received updates in over a year and has been superseded by ROCm and ZLUDA.

How to

Add --use-directml to the command-line arguments.
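A minimal launch sketch, assuming the standard SD.Next launch scripts (webui.bat on Windows, webui.sh on Linux); only the --use-directml flag itself is taken from this page:

```shell
# Windows (assumed launcher name)
.\webui.bat --use-directml

# Linux (assumed launcher name)
./webui.sh --use-directml
```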

For details, go to Installation.

Performance

Performance is significantly lower than with ZLUDA or ROCm.

If your card is relatively new and you prefer Windows, use ZLUDA.

If you are comfortable with Linux, use ROCm.

FAQ

DirectML does not garbage-collect graphics memory

PyTorch-DirectML cannot enumerate or index graphics memory. Because its tensor implementation extends OpaqueTensorImpl, the actual storage backing a tensor is not accessible, so unused memory cannot be tracked and reclaimed.
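The limitation above can be illustrated with a plain-Python analogy (no torch required; class names and methods here are hypothetical, not the real PyTorch API): a normal tensor exposes its backing storage, while an opaque tensor hides it, which is why memory cannot be reclaimed by walking tensor storages.

```python
class NormalTensor:
    """Analogy for a regular tensor: backing storage is reachable."""
    def __init__(self, data):
        self._storage = bytearray(data)

    def storage(self):
        # A memory manager could inspect or release this buffer.
        return self._storage


class OpaqueTensor:
    """Analogy for an OpaqueTensorImpl-backed tensor: storage is hidden."""
    def storage(self):
        # There is no accessible buffer, so nothing can be reclaimed.
        raise RuntimeError("opaque tensor: backing storage is not accessible")
```

A garbage-collection pass that iterates over tensors and frees their storages works only for the first kind; for the second, the storage call fails, so the pass has nothing to act on.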

An error occurs with no error message

If you get a RuntimeError with no error message (or an empty one), report it on GitHub or Discord after checking for duplicate issues.

It does not work properly with FP16

If it works with FP32 but fails with FP16, report it on GitHub or Discord after checking for duplicate issues.

The terminal is suddenly frozen during generation

Report it on GitHub or Discord after checking for duplicate issues.

Olive (experimental support)

Refer to ONNX Runtime