SDNQ Quantization
SD.Next Quantization (SDNQ) provides fully cross-platform quantization to reduce memory usage and increase performance on any device.
Usage
- Go into Settings -> Quantization Settings
- Enable the desired Quantization options under the SDNQ menu
  Model, TE and LLM are the main targets for most use cases
- If a model is already loaded, reload the model
Once quantization options are set, they will be applied to any model loaded after that
Features
- SDNQ is fully cross-platform, supports all GPUs and CPUs and includes many quantization methods:
  - 8-bit, 7-bit, 6-bit, 5-bit, 4-bit, 3-bit, 2-bit and 1-bit int and uint
  - 8-bit e5, e4 and fnuz float
  note: int8 is very close to the original 16-bit quality (see the sketch below)
- Supports nearly all model types
- Supports compute optimizations using Triton via torch.compile
- Supports Quantized MatMul with significant speedups on GPUs with INT8 or FP8 support
- Supports on-the-fly quantization during model load with little to no overhead (called pre mode)
- Supports quantization of the convolutional layers in UNet models
- Supports post-load quantization for any model
- Supports on-the-fly usage of LoRA models
- Supports balanced offload
Benchmarks are available in the Quantization Wiki.
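As a rough illustration of why int8 stays close to the original 16-bit weights, here is a minimal sketch of symmetric per-tensor int8 quantization and dequantization in PyTorch. This is illustrative only, not SDNQ's actual implementation (SDNQ also applies grouping and other refinements):

```python
import torch

def quantize_int8(weight: torch.Tensor):
    # Symmetric per-tensor quantization: map the weight range onto int8 [-127, 127]
    scale = weight.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor, dtype=torch.float16):
    # Dequantization is just a cast plus a multiply by the stored scale
    return q.to(dtype) * scale.to(dtype)

w = torch.randn(4096, 4096, dtype=torch.float16)
q, scale = quantize_int8(w)
err = (w - dequantize_int8(q, scale)).abs().max()
print(err)  # small relative to the weight magnitudes, which is why int8 quality holds up
```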
Recommended Options
- Dequantize using torch.compile
  Highly recommended for much better performance if Triton is available
- Use Quantized MatMul
  Recommended for much better performance if Triton is available on supported GPUs
  Supported GPUs for quantized matmul are listed in the Use Quantized MatMul section.
- Recommended quantization dtype is INT8 for its fast speed and almost no loss in quality
  You can use INT6 or INT5 with some quality loss and still benefit from Quantized MatMul
  You can use UINT4 or UINT3 to save even more memory, but Quantized MatMul will be ignored with them
  float8_e4m3fn is another option for fast speed and high quality, but FP8 has slightly lower quality than INT8
Triton
Triton enables the use of optimized kernels for much better performance.
Triton is not required for SDNQ but it is highly recommended for much better performance.
SDNQ will use Triton by default via torch.compile if Triton is available. You can override this with the Dequantize using torch.compile option.
Triton with Nvidia
- Linux
  - Triton comes built-in on Linux, so you can use the Triton optimizations out of the box.
- Windows
  - Windows requires manual installation of Triton.
    Installation steps are available in the Quantization Wiki.
Triton with AMD
- Linux
  - Triton comes built-in on Linux, so you can use the Triton optimizations out of the box.
- Windows
  - Windows requires manual installation of Triton and is not guaranteed to work with ZLUDA.
    Experimental installation steps are available in the ZLUDA Wiki.
Triton with Intel
- Triton comes built-in with Intel on both Windows and Linux, so you can use the Triton optimizations out of the box.
  Windows might require additional installation of MSVC if it is not already installed and activated.
  Installation steps are available in the PyTorch Inductor Windows wiki.
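To check whether Triton is importable from the Python environment SD.Next runs in, a generic check (not an SD.Next command) is:

```python
import importlib.util

# True if the triton package is installed in this Python environment
print("Triton available:", importlib.util.find_spec("triton") is not None)
```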
Options
Quantization enabled
Used to decide which parts of the model will get quantized.
Recommended options are Model and TE.
Default is none.
Model is used to quantize the Diffusion Models.
TE is used to quantize the Text Encoders.
LLM is used to quantize the LLMs with Prompt Enhance.
Control is used to quantize ControlNets.
VAE is used to quantize the VAE. Using the VAE option is not recommended.
Note
VAE Upcast has to be set to false if you use the VAE option with FP16.
If you get black images with SDXL models, use the FP16 Fixed VAE.
Quantization mode
Used to decide when the quantization step will happen on model load.
Default is auto.
Auto mode will choose pre or post automatically depending on the model.
Pre mode will quantize the model while the model is loading. Reduces system RAM usage.
Post mode will quantize the model after the model is loaded into system RAM.
Pre mode is compatible with DiT and video models like Flux, but older UNet models like SDXL are only compatible with post mode.
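The difference between the two modes can be sketched roughly like this, assuming a safetensors checkpoint and reusing a simple symmetric int8 quantizer as a stand-in for SDNQ's. This is a conceptual sketch, not SDNQ's actual loader:

```python
import torch
from safetensors import safe_open
from safetensors.torch import load_file

def quantize_tensor(t: torch.Tensor):
    # stand-in quantizer: symmetric per-tensor int8
    scale = t.abs().amax().clamp(min=1e-8) / 127.0
    return torch.clamp(torch.round(t / scale), -127, 127).to(torch.int8), scale

def load_pre(path: str) -> dict:
    # pre mode: quantize each tensor as it is read from disk,
    # so the full 16-bit checkpoint never sits in system RAM at once
    quantized = {}
    with safe_open(path, framework="pt", device="cpu") as f:
        for name in f.keys():
            t = f.get_tensor(name)
            quantized[name] = quantize_tensor(t) if t.is_floating_point() else t
    return quantized

def load_post(path: str) -> dict:
    # post mode: load the whole 16-bit checkpoint into system RAM first, then quantize
    state_dict = load_file(path, device="cpu")
    return {n: quantize_tensor(t) if t.is_floating_point() else t
            for n, t in state_dict.items()}
```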
Quantization type
Used to decide the data type used to store the model weights.
Recommended types are int8 for 8-bit, int6 for 6-bit, float8_e4m3fn for FP8 and uint4 for 4-bit.
Default is int8.
INT8 quants have very similar quality to the full 16-bit precision while using 2 times less memory.
INT6 quants are the middle ground: similar quality to the full 16-bit precision while using 2.7 times less memory.
INT4 quants have lower quality and less performance but use 3.6 times less memory.
FP8 quants have similar quality to INT6 but with the same memory usage as INT8.
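For a back-of-the-envelope sense of scale, here is the same comparison applied to a hypothetical 12-billion-parameter model, using the reduction factors quoted above and ignoring the small overhead of scales and zero points:

```python
# Approximate weight memory of a hypothetical 12B-parameter model at different quant widths,
# using the reduction factors quoted above relative to 16-bit weights
params = 12e9
fp16_gb = params * 2 / 1e9  # 2 bytes per parameter at 16-bit, ~24 GB
for name, factor in {"int8": 2.0, "int6": 2.7, "int4": 3.6}.items():
    print(f"{name}: ~{fp16_gb / factor:.1f} GB (vs ~{fp16_gb:.0f} GB at 16-bit)")
```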
Unsigned quants have an extra u added to the start of their name, while symmetric quants don't have any prefix.
Unsigned (asymmetric) types: uint8, uint7, uint6, uint5, uint4, uint3, uint2 and uint1
Symmetric types: int8, int7, int6, int5, int4, int3, int2, float8_e4m3fn, float8_e5m2, float8_e4m3fnuz and float8_e5m2fnuz
Unsigned quants use unsigned integers, meaning they can't store negative values and need an extra variable called the zero point for this purpose.
Symmetric quants can store both negative and positive values, so they don't need an extra zero point value and run faster than unsigned quants because of this.
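A minimal sketch of that difference (illustrative only, not SDNQ's code): symmetric quantization stores just a scale, while unsigned (asymmetric) quantization also stores a zero point that shifts values into the unsigned range:

```python
import torch

def quantize_symmetric_int8(w: torch.Tensor):
    # symmetric: only a scale is stored, the int8 range is centred on zero
    scale = w.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def quantize_asymmetric_uint8(w: torch.Tensor):
    # asymmetric: a scale plus a zero point are stored; the zero point
    # shifts negative values into the unsigned 0..255 range
    w_min, w_max = w.amin(), w.amax()
    scale = ((w_max - w_min) / 255.0).clamp(min=1e-8)
    zero_point = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(w / scale) + zero_point, 0, 255).to(torch.uint8)
    return q, scale, zero_point

w = torch.randn(1024, 1024)
q_s, s = quantize_symmetric_int8(w)
q_a, s_a, zp = quantize_asymmetric_uint8(w)
w_s = q_s.float() * s             # symmetric dequant: one multiply
w_a = (q_a.float() - zp) * s_a    # asymmetric dequant: extra subtract, hence slower
```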
int8 uses int8 and has -128 to 127 range.
int7 uses eight int7 values packed into seven uint8 values and has -64 to 63 range.
int6 uses four int6 values packed into three uint8 values and has -32 to 31 range.
int5 uses eight int5 values packed into five uint8 values and has -16 to 15 range.
int4 uses two int4 values packed into a single uint8 value and has -8 to 7 range.
int3 uses eight int3 values packed into three uint8 values and has -4 to 3 range.
int2 uses four int2 values packed into a single uint8 value and has -2 to 1 range.
uint8 uses uint8 and has 0 to 255 range.
uint7 uses eight uint7 values packed into seven uint8 values and has 0 to 127 range.
uint6 uses four uint6 values packed into three uint8 values and has 0 to 63 range.
uint5 uses eight uint5 values packed into five uint8 values and has 0 to 31 range.
uint4 uses two uint4 values packed into a single uint8 value and has 0 to 15 range.
uint3 uses eight uint3 values packed into three uint8 values and has 0 to 7 range.
uint2 uses four uint2 values packed into a single uint8 value and has 0 to 3 range.
uint1 uses boolean and has 0 to 1 range.
float8_e4m3fn uses float8_e4m3fn and has -448 to 448 range.
float8_e5m2 uses float8_e5m2 and has -57344 to 57344 range.
float8_e4m3fnuz uses float8_e4m3fnuz and has -240 to 240 range.
float8_e5m2fnuz uses float8_e5m2fnuz and has -57344 to 57344 range.
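The sub-byte packing described above can be illustrated with the 4-bit case (a sketch, not SDNQ's actual packing code): two 4-bit values share one uint8, one in the low nibble and one in the high nibble:

```python
import torch

def pack_uint4(values: torch.Tensor) -> torch.Tensor:
    # values: uint8 tensor holding numbers in 0..15, with an even number of elements
    v = values.reshape(-1, 2)
    return (v[:, 0] | (v[:, 1] << 4)).to(torch.uint8)  # two 4-bit values per byte

def unpack_uint4(packed: torch.Tensor) -> torch.Tensor:
    low = packed & 0x0F
    high = (packed >> 4) & 0x0F
    return torch.stack((low, high), dim=1).reshape(-1)

vals = torch.randint(0, 16, (8,), dtype=torch.uint8)
assert torch.equal(unpack_uint4(pack_uint4(vals)), vals)  # round-trips losslessly
```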
Quantization type for Text Encoders
Same as Quantization type but for the Text Encoders.
The default option will use the same type as Quantization type.
Group size
Used to decide how many elements of a tensor will share the same quantization group.
Higher values have better performance but less quality.
Default is 0, meaning the group size will be decided based on your quantization type setting.
Linear layers will use this formula to find the group size: 2 ** (2 + number_of_bits)
Convolutions will use this formula to find the group size: 2 ** (1 + number_of_bits)
Setting the group size to -1 will disable grouping.
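As a quick worked example of these formulas, the automatic group sizes for a few common quant types come out as follows:

```python
# Automatic group sizes from the formulas above: 2 ** (2 + bits) for linear layers,
# 2 ** (1 + bits) for convolutional layers
for bits in (8, 6, 4):
    print(f"{bits}-bit: linear group size {2 ** (2 + bits)}, conv group size {2 ** (1 + bits)}")
# 8-bit: 1024 / 512, 6-bit: 256 / 128, 4-bit: 64 / 32
```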
Quantize convolutional layers
Enabling this option will quantize convolutional layers in UNet models too.
Has much better memory savings but lower quality.
Convolutions will use uint4 when using quants with less than 4 bits.
Disabled by default.
Dequantize using torch.compile
Uses Triton via torch.compile on the dequantization step.
Has significantly higher performance.
This setting requires a full restart of the webui to apply.
Enabled by default if Triton is available.
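Conceptually, this amounts to compiling the dequantize step so PyTorch's Inductor backend can emit a fused Triton kernel for the cast-and-multiply instead of running it eagerly. A simplified stand-in, not SDNQ's actual kernel:

```python
import torch

@torch.compile  # Inductor emits Triton-backed kernels for this on GPUs when Triton is available
def dequantize(q: torch.Tensor, scale: torch.Tensor, dtype=torch.float16):
    # cast-and-multiply, fused into a single kernel by the compiler
    return q.to(dtype) * scale

if torch.cuda.is_available():
    q = torch.randint(-127, 128, (4096, 4096), dtype=torch.int8, device="cuda")
    scale = torch.tensor(0.01, dtype=torch.float16, device="cuda")
    w = dequantize(q, scale)
```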
Use Quantized MatMul
Enabling this option will use quantized INT8 or FP8 MatMul instead of BF16 / FP16.
Has significantly higher performance on GPUs with INT8 or FP8 support.
Disabled by default.
Supported GPUs
- Nvidia
  - Requires Turing (RTX 2000) or newer GPUs for INT8 matmul.
  - Requires Ada (RTX 4000) or newer GPUs for FP8 matmul.
- AMD
  - Requires ROCm, not supported with ZLUDA.
  - Requires RDNA3 (RX 7000) or newer GPUs for INT8 matmul.
  - Requires MI300X for FP8 matmul.
  - RDNA3 supports INT8 matmul but runs at the same speed as FP16.
  - Vega, RDNA and RDNA2 support INT8 in hardware, but they are not supported in PyTorch.
    Those GPUs will require AMD to add INT8 support to PyTorch for them to work.
  - RDNA4 (RX 9000) supports fast INT8 and FP8 matmul but software support isn't ready yet.
    FP8 matmul will work with RX 9000 when AMD adds FP8 matmul support to PyTorch.
- Intel
  - Intel technically supports FP8 and INT8 with Alchemist (Arc A770) or newer GPUs but software support isn't ready yet.
    Intel GPUs will work with quantized matmul when Intel merges INT8 and FP8 matmul support into PyTorch 2.8 or 2.9.
    See https://github.com/pytorch/pytorch/pull/157769 and https://github.com/pytorch/pytorch/pull/140972
Quantized INT8 MatMul is only compatible with symmetric int quant types.
Quantized FP8 MatMul is only compatible with the float8_e4m3fn quant type on GPUs. CPUs and some GPUs can use the other FP8 types too.
The recommended quant type to use with this option is int8 for quality, and INT8 matmul tends to be faster than FP8.
The recommended quant type for FP8 matmul is float8_e4m3fn for quality and better hardware support.
Group sizes will be disabled when Quantized MatMul is enabled.
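The speedup comes from doing the matrix multiply in the quantized domain and rescaling the small output afterwards, instead of dequantizing the full weights first. A hardware-agnostic sketch of the idea (SDNQ relies on dedicated INT8/FP8 kernels on supported GPUs; this illustration simply accumulates in int32 on the CPU):

```python
import torch

def quantize_int8(x: torch.Tensor):
    # symmetric per-tensor int8, reused from the earlier sketches
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    return torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8), scale

def int8_matmul(a_q, a_scale, b_q, b_scale) -> torch.Tensor:
    # multiply the int8 tensors directly, accumulating in int32,
    # then apply both scales once on the output
    acc = a_q.to(torch.int32) @ b_q.to(torch.int32)
    return acc.to(torch.float32) * (a_scale * b_scale)

x = torch.randn(64, 128)
w = torch.randn(128, 256)
x_q, x_s = quantize_int8(x)
w_q, w_s = quantize_int8(w)
y = int8_matmul(x_q, x_s, w_q, w_s)  # quantized path
y_ref = x @ w                        # full-precision reference, close to y
```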
Use Quantized MatMul with convolutional layers
Same as Use Quantized MatMul but for the convolutional layers in UNets like SDXL.
Disabled by default.
Quantize using GPU
Enabling this option will use the GPU for the quantization calculations on model load.
Can be faster with weak CPUs but can also be slower because of GPU to CPU communication overhead.
Enabled by default.
When Model load device map in the Models & Loading settings is set to default or cpu, this option will send a part of the model weights to the GPU, quantize it, then send it back to the CPU right away.
If the device map is set to gpu, model weights will be loaded directly into the GPU and the quantized model weights will be kept on the GPU until the quantization of the current model part is over.
If Model offload mode is set to none, quantized model weights will be sent to the GPU after quantization and will stay on the GPU.
If Model offload mode is set to model, quantized model weights will be sent to the GPU after quantization and will be sent back to the CPU after the quantization of the current model part is over.
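A simplified sketch of the default/cpu device-map behaviour described above (illustrative only, not SDNQ's loader; it reuses the simple int8 quantizer from the earlier sketches): each part of the model is moved to the GPU, quantized there, and the quantized result is sent straight back:

```python
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().amax().clamp(min=1e-8) / 127.0
    return torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8), scale

def quantize_state_dict_on_gpu(state_dict: dict, device: str = "cuda") -> dict:
    quantized = {}
    for name, weight in state_dict.items():
        if not weight.is_floating_point():
            quantized[name] = weight
            continue
        w_gpu = weight.to(device)                  # send this part of the model to the GPU
        q, scale = quantize_int8(w_gpu)            # do the quantization math on the GPU
        quantized[name] = (q.cpu(), scale.cpu())   # send the quantized result back right away
        del w_gpu, q, scale                        # free the GPU copies
    return quantized
```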
Dequantize using full precision
Enabling this option will use FP32 on the dequantization step.
Has higher quality outputs but lower performance.
Disabled by default.
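The difference is only where the arithmetic happens during dequantization (a minimal sketch, not SDNQ's code): the multiply is carried out in FP32 and only the final result is cast back down, avoiding some FP16 rounding at the cost of extra compute:

```python
import torch

q = torch.randint(-127, 128, (4096,), dtype=torch.int8)
scale = torch.tensor(3.1e-4)

w_fast = q.to(torch.float16) * scale.to(torch.float16)       # default: FP16 math throughout
w_full = (q.to(torch.float32) * scale).to(torch.float16)     # this option: FP32 math, cast at the end
print((w_fast.float() - w_full.float()).abs().max())         # usually a tiny non-zero difference
```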