
HiDream I1

HiDream-I1 is a new, very large image generation foundation model with 17B parameters


HiDream-I1 family consists of 3 variations:
- Full
- Dev
- Fast

The main difference between the variants is the recommended number of sampling steps:
- full=50, dev=28, fast=16
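When scripting generation, these defaults can be expressed as a small lookup table. This helper is purely illustrative and not part of SD.Next or diffusers:

```python
# Recommended sampling steps per HiDream-I1 variant (values from the list above).
# RECOMMENDED_STEPS and steps_for are illustrative names, not an SD.Next API.
RECOMMENDED_STEPS = {"full": 50, "dev": 28, "fast": 16}

def steps_for(variant: str) -> int:
    """Return the recommended step count for a HiDream-I1 variant."""
    try:
        return RECOMMENDED_STEPS[variant.lower()]
    except KeyError:
        raise ValueError(f"unknown HiDream-I1 variant: {variant!r}")

print(steps_for("Dev"))  # 28
```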

HiDream-I1 is compatible with:
- FlowMatching Samplers
- Remote VAE feature
- TAE Live-preview feature

Important

Due to its size (over 25B parameters, 58GB on disk), offloading and on-the-fly quantization are practically a necessity
Running HiDream on a GPU with less than 16GB VRAM is possible with BnB-NF4 or Quanto-Int4 quantization and the default Balanced offload settings
Note that you must pick quantization methods that are compatible with your GPU and platform

[!NOTE] Set the appropriate offloading settings before loading the model to avoid out-of-memory errors
For more information see Offloading Wiki

[!NOTE] Check compatibility of different quantizations with your platform and GPU!
For more information see Quantization Wiki
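A back-of-the-envelope estimate shows why quantization matters here. This is rough weight-only arithmetic (it ignores activations, KV caches, and framework overhead), and the helper function is illustrative:

```python
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB: 1e9 params * bits / 8 bits-per-byte / 1e9 bytes-per-GB."""
    return params_billion * bits_per_param / 8

# HiDream-I1 transformer alone: 17B parameters
print(round(weights_gb(17, 16), 1))  # 34.0 -> ~34 GB in bf16, already over most consumer GPUs
print(round(weights_gb(17, 4), 1))   # 8.5  -> ~8.5 GB with 4-bit (NF4/Int4) quantization
```

With 4-bit quantization plus Balanced offload for the text encoders, the working set fits on a sub-16GB GPU, as noted above.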

[!IMPORTANT] Use reference models
Simply select one from Networks -> Models -> Reference
and the model will be auto-downloaded on first use

Location of downloaded models:
- huggingface folder is used for individual components: transformers, t5 text-encoder and llama LLM
- diffusers folder is used for the main model

The exact location of both folders can be found in Settings -> System Paths

Warning

Manually downloaded models in either safetensors or gguf formats are currently not supported

[!IMPORTANT] The Llama-3.1-8b-instruct LLM used by HiDream is a gated model!
You need to request access from its authors before you can use it
See Gated Wiki for more information
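Once access is granted, authenticate your HuggingFace account locally so the gated weights can be downloaded. A sketch of the two standard options (the token value is a placeholder):

```shell
# Interactive login (stores the token locally):
huggingface-cli login

# Or non-interactively, set the standard HuggingFace token variable
# before launching SD.Next (replace with your own access token):
export HF_TOKEN="hf_your_token_here"
```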

Text Encoders

HiDream utilizes 4 text-encoders: clip-l, clip-g, t5-1.1-xxl and llama-3.1-8b-instruct, for a total of 8.3B parameters

A custom llama model can be set in: Settings -> Model options -> HiDream

Note

The SD.Next implementation differs from the reference implementation in that it bumps the default max token length from 128 to 256
Max token length can be further overridden using the env variable HIDREAM_MAX_SEQUENCE_LENGTH
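For example, to raise the limit before launching SD.Next (the value 384 is just an illustration, not a recommended setting):

```shell
# Override SD.Next's default HiDream max token length (256):
export HIDREAM_MAX_SEQUENCE_LENGTH=384
```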