
HiDream I1

HiDream is a new, very large image generation foundation model with 17B parameters


The HiDream-I1 family consists of 3 variants:
- Full
- Dev
- Fast

The difference between variants is the recommended number of steps: full=50, dev=28, fast=16
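As a minimal sketch, the variant-to-steps mapping above can be encoded directly; the helper function name here is hypothetical, not part of SD.Next:

```python
# Recommended sampling steps per HiDream-I1 variant (values from the list above)
RECOMMENDED_STEPS = {"full": 50, "dev": 28, "fast": 16}

def steps_for(variant: str) -> int:
    """Return the recommended step count for a variant, e.g. steps_for('Dev') -> 28."""
    return RECOMMENDED_STEPS[variant.lower()]
```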

HiDream-I1 is compatible with:
- FlowMatching Samplers
- Remote VAE feature
- TAE Live-preview feature

Important

Due to its size (over 25B params, 58GB on disk), offloading and on-the-fly quantization are practically a necessity
Running HiDream on a <16GB GPU is possible with BnB-NF4 or Quanto-Int4 quantization and the default Balanced offload settings
Note that you must pick quantization methods that are compatible with your GPU and platform
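To see why quantization matters, here is a rough back-of-envelope calculation; the byte counts are idealized approximations, ignoring activation memory and quantization overhead:

```python
# Approximate VRAM footprint of ~25B parameters at different precisions
params = 25e9

fp16_gb = params * 2 / 1e9    # 2 bytes/param -> ~50 GB (consistent with ~58 GB on disk incl. overhead)
nf4_gb = params * 0.5 / 1e9   # 4 bits/param -> ~12.5 GB

print(f"fp16: ~{fp16_gb:.0f} GB, nf4: ~{nf4_gb:.1f} GB")
```

Even at ~12.5 GB the weights alone nearly fill a 16GB card, which is why 4-bit quantization is paired with Balanced offload: only part of the model needs to reside in VRAM at any one time.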

Note

Set offloading: set an appropriate offloading option before loading the model to avoid out-of-memory errors
For more information see Offloading Wiki

Note

Choose quantization: check compatibility of different quantization methods with your platform and GPU!
For more information see Quantization Wiki

Tip

Use reference models: use of reference models is recommended over manually downloaded models!
Simply select the model from Networks -> Models -> Reference and it will be auto-downloaded on first use

Important

The Llama-3.1-8b-instruct LLM used by HiDream is a gated model!
You need to request access from the authors before you can use it
See Gated Wiki for more information

Text Encoders

HiDream utilizes 4 text encoders: clip-l, clip-g, t5-1.1-xxl, and llama-3.1-8b-instruct, for a total of 8.3B parameters

A custom llama model can be set in: Settings -> Model options -> HiDream

Note

The SD.Next implementation differs from the reference implementation in that it bumps the default max token length from 128 to 256
The max token length can be further overridden using the env variable HIDREAM_MAX_SEQUENCE_LENGTH
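A sketch of overriding the token length, assuming SD.Next reads HIDREAM_MAX_SEQUENCE_LENGTH from the environment at model load time (so it must be set before the model is loaded; the fallback of 256 mirrors the bumped default described above):

```python
import os

# Override the max token length before the model loads;
# the value 384 here is purely illustrative
os.environ["HIDREAM_MAX_SEQUENCE_LENGTH"] = "384"

# Fall back to SD.Next's bumped default of 256 when the variable is unset
max_seq_len = int(os.environ.get("HIDREAM_MAX_SEQUENCE_LENGTH", "256"))
print(max_seq_len)
```

In practice the variable would typically be set in the shell before launching SD.Next rather than from Python.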