# SD.Next: All-in-one WebUI for AI generative image and video creation


[Sponsors](https://github.com/sponsors/vladmandic) | [Docs](https://vladmandic.github.io/sdnext-docs/) | [Wiki](https://github.com/vladmandic/sdnext/wiki) | [Discord](https://discord.gg/VjvR2tabEX) | [Changelog](/sdnext/CHANGELOG.html)
## SD.Next Features

Not all individual features are listed here; check the ChangeLog for the full list of changes.
- Fully localized:
  ▹ English | Chinese | Russian | Spanish | German | French | Italian | Portuguese | Japanese | Korean
- Multiple UIs!
  ▹ Standard | Modern
- Multiple diffusion models!
- Built-in Control for Text, Image, Batch and Video processing!
- Multiplatform!
  ▹ Windows | Linux | MacOS | nVidia CUDA | AMD ROCm | Intel Arc / IPEX XPU | DirectML | OpenVINO | ONNX+Olive | ZLUDA
- Platform-specific autodetection and tuning performed on install
- Optimized processing with latest torch developments, with built-in support for model compile and quantize
  - Compile backends: Triton | StableFast | DeepCache | OneDiff | TeaCache | etc.
  - Quantization methods: SDNQ | BitsAndBytes | Optimum-Quanto | TorchAO
- Interrogate/captioning with 150+ OpenCLiP models and 20+ built-in VLMs
- Built-in queue management
- Built-in installer with automatic updates and dependency management
- Mobile compatible
Main interface using StandardUI:

Main interface using ModernUI:

For screenshots and information on other available themes, see Themes.
## Model support

SD.Next supports a broad range of models; see supported models and model specs.
- nVidia GPUs using CUDA libraries on both Windows and Linux
- AMD GPUs using ROCm libraries on Linux
  Support will be extended to Windows once AMD releases ROCm for Windows
- Intel Arc GPUs using OneAPI with IPEX XPU libraries on both Windows and Linux
- Any GPU compatible with DirectX on Windows using DirectML libraries
  This includes support for AMD GPUs that are not supported by native ROCm libraries
- Any GPU or device compatible with OpenVINO libraries on both Windows and Linux
- Apple M1/M2 on MacOS using built-in support in Torch with MPS optimizations
- ONNX/Olive
- AMD GPUs on Windows using ZLUDA libraries

Plus Docker container recipes for: CUDA, ROCm, Intel IPEX and OpenVINO
## Getting started
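A minimal install-and-launch sketch, assuming the repository URL from the project links above and the `webui.sh` / `webui.bat` launcher scripts in the repository root; check the Docs for authoritative instructions:

```shell
# Clone the repository and enter it
git clone https://github.com/vladmandic/sdnext
cd sdnext

# First launch: the built-in installer detects the platform,
# installs dependencies, and starts the web server
./webui.sh      # Linux / MacOS
# webui.bat     # Windows
```

On first run the installer performs the platform-specific autodetection and tuning described above; subsequent launches reuse the installed environment.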
> [!TIP]
> For platform-specific information, check out:
> WSL | Intel Arc | DirectML | OpenVINO | ONNX & Olive | ZLUDA | AMD ROCm | MacOS | nVidia | Docker
> [!WARNING]
> If you run into issues, check out the troubleshooting and debugging guides.
## Contributing

Please see Contributing for details on how to contribute to this project.
For any questions, reach out on Discord or open an issue or discussion.
## Credits

## Evolution
## Docs
If you're unsure how to use a feature, the best place to start is the Docs; if it's not covered there, check the ChangeLog for when the feature was first introduced, as it always includes a short note on how to use it.