# SD.Next: All-in-one WebUI
SD.Next is a powerful, open-source WebUI app for AI image and video generation, built on Stable Diffusion and supporting dozens of advanced models. Create, caption, and process images and videos with a modern, cross-platform interface—perfect for artists, researchers, and AI enthusiasts.
[Docs](https://vladmandic.github.io/sdnext-docs/) | [Wiki](https://github.com/vladmandic/sdnext/wiki) | [Discord](https://discord.gg/VjvR2tabEX) | [Changelog](CHANGELOG.md) | [DeepWiki](https://deepwiki.com/vladmandic/sdnext) | [Sponsors](https://github.com/sponsors/vladmandic)
SD.Next is feature-rich with a focus on performance, flexibility, and user experience. Key features include:
- [Multi-platform](Notes.md#platform-support)!
- Support for many diffusion models!
- Fully localized to ~15 languages, with support for many UI themes!
- Desktop and mobile support!
- Platform-specific auto-detection and tuning performed on install
- Built-in installer with automatic updates and dependency management
## Unique features

SD.Next includes many features not found in other WebUIs, such as:
- SDNQ: state-of-the-art quantization engine
  Use pre-quantized models or quantize on-the-fly for up to 4x VRAM reduction with minimal or no impact on quality and performance
- Balanced Offload: Dynamically balance CPU and GPU memory to run larger models on limited hardware
- Captioning with 150+ OpenCLIP models, tagging with WaifuDiffusion and DeepDanbooru models, and 25+ built-in VLMs
- Image processing with a full suite of image-correction and color-grading tools
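The "up to 4x" VRAM figure for quantization follows from weight precision alone: storing weights in 4-bit instead of 16-bit cuts their footprint by a factor of four. A quick back-of-the-envelope sketch (this is illustrative arithmetic, not SD.Next's API, and the 12B parameter count is a made-up example, not a specific checkpoint):

```python
# Illustrative only: memory footprint of model weights at different precisions,
# showing where the "up to 4x" VRAM-reduction figure comes from.
def weight_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    """Memory in GiB needed to hold the weights alone at a given precision."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

fp16 = weight_memory_gib(12, 16)  # hypothetical 12B-parameter model in fp16
int4 = weight_memory_gib(12, 4)   # same model quantized to 4-bit
print(f"fp16: {fp16:.1f} GiB, 4-bit: {int4:.1f} GiB, ratio: {fp16/int4:.0f}x")
# → fp16: 22.4 GiB, 4-bit: 5.6 GiB, ratio: 4x
```

In practice activations, the VAE, and text encoders also consume VRAM, which is why the real-world saving is "up to" 4x rather than exactly 4x.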
If you're unsure how to use a feature, the best place to start is the [Docs](https://vladmandic.github.io/sdnext-docs/); if it's not covered there,
check the [Changelog](CHANGELOG.md) for when the feature was first introduced, as each entry includes a short note on how to use it.