Prompt Enhance
> [!NOTE]
> Different model types have different preferences for how to prompt them. For details, see Prompting model-specific tips.
SD.Next includes a built-in prompt enhancer that uses an LLM to enhance your prompts:

- Can be used for manual or automatic prompt enhancement.
  Automatic enhancement runs during normal generation without user intervention.
- Built-in presets for: Gemma-3, Qwen-2.5, Phi-4, Llama-3.2, SmolLM2, Dolphin-3
- Supports custom system prompts.
- Supports custom models.
  - Load any model hosted on Hugging Face
- Supports models in `huggingface` format.
- Supports models in `gguf` format.
- Models are auto-downloaded on first use.
- Supports quantization and offloading.
- Advanced options: *max tokens*, *sampling*, *temperature*, *repetition penalty*.
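For manual enhancement, the mechanics are the usual instruction-tuned chat pattern: a system prompt steers the LLM and the raw prompt is sent as the user message. A minimal sketch of that message structure, where `build_messages` and `DEFAULT_SYSTEM` are illustrative names, not SD.Next's actual API:

```python
# Hypothetical sketch of how a prompt enhancer frames its LLM request.
# The system prompt text below is an assumption, not SD.Next's built-in one.
DEFAULT_SYSTEM = (
    "You are a prompt engineer. Expand the user's image prompt with "
    "concrete visual detail. Reply with the enhanced prompt only."
)

def build_messages(prompt: str, system: str = DEFAULT_SYSTEM) -> list[dict]:
    """Build a chat-template message list as accepted by transformers tokenizers."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]

messages = build_messages("a cat on a windowsill")
```

A list in this shape can be passed to a tokenizer's `apply_chat_template` before generation; custom system prompts simply replace the `system` entry.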
> [!WARNING]
> If SD.Next detects censored output, it logs a warning and returns the original prompt.
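The fallback described above can be sketched as a simple refusal check: if the enhanced text looks like a model refusal, the original prompt is kept. The phrase list and function name here are illustrative assumptions; SD.Next's actual detection logic may differ.

```python
# Illustrative refusal heuristic (assumed, not SD.Next's implementation).
REFUSAL_PHRASES = ("i cannot", "i can't", "i'm sorry", "as an ai")

def pick_prompt(original: str, enhanced: str) -> str:
    """Return the enhanced prompt, or fall back to the original on refusal."""
    lowered = enhanced.lower()
    if any(phrase in lowered for phrase in REFUSAL_PHRASES):
        print("warning: censored output detected, using original prompt")
        return original
    return enhanced
```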
> [!NOTE]
> Any model hosted on Hugging Face in its original format should work,
> as long as it implements the standard `transformers.AutoModelForCausalLM` interface.
> [!NOTE]
> Not all model architectures are supported in `gguf` format.
> Typically, `gguf` support is added slightly later than `transformers` support.
> [!TIP]
> Debug logging can be enabled by setting the `SD_LLM_DEBUG=true` environment variable.
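How such an environment-variable toggle is typically read can be sketched in a few lines; the accepted values below are an assumption, not SD.Next's exact parsing.

```python
# Sketch of gating debug output on SD_LLM_DEBUG (accepted values assumed).
import os

def llm_debug_enabled() -> bool:
    """Return True when SD_LLM_DEBUG is set to a truthy value."""
    return os.environ.get("SD_LLM_DEBUG", "").lower() in ("1", "true", "yes")

os.environ["SD_LLM_DEBUG"] = "true"
print(llm_debug_enabled())
```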
Custom models
Custom models can be used to define any model that is not included in the predefined list.
Example: standard huggingface model
- Model repo: `nidum/Nidum-Gemma-3-4B-it-Uncensored`
Example: gguf model hosted on huggingface
- Model repo: `meta-llama/Llama-3.2-1B-Instruct`
  Link to the original model repo on Hugging Face, required so SD.Next can download components not present in the `gguf` file, such as the tokenizer.
- Model GGUF: `mradermacher/Llama-3.2-1B-Instruct-Uncensored-i1-GGUF`
  Link to the repo on Hugging Face that hosts the `gguf` file(s).
- Model type: `llama`
  Model type, required so SD.Next knows how to load the model.
- Model name: `Llama-3.2-1B-Instruct-Uncensored.i1-Q4_0.gguf`
  Name of the `gguf` file inside the GGUF repo.
Supported GGUF model types: `llama`, `mistral`, `qwen2`, `qwen2moe`, `falcon`, `tokenizer`, `phi3`, `bloom`, `t5`, `stablelm`, `gpt2`, `starcoder2`, `mamba`, `nemotron`, `gemma2`
Supported Transformer model types are a superset of GGUF model types and include newer types such as gemma3.
If a model type is unsupported, SD.Next prints the currently supported types in the log file.
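Putting the custom-model fields and the supported-type list together, a validation step can be sketched as below. The dict keys and messages are assumptions for illustration, not SD.Next's config schema; only the type list comes from above.

```python
# Illustrative pre-load check for a custom gguf model entry.
# Field names (repo, gguf_repo, ...) are hypothetical, not SD.Next's schema.
GGUF_TYPES = {
    "llama", "mistral", "qwen2", "qwen2moe", "falcon", "tokenizer", "phi3",
    "bloom", "t5", "stablelm", "gpt2", "starcoder2", "mamba", "nemotron",
    "gemma2",
}

custom_model = {
    "repo": "meta-llama/Llama-3.2-1B-Instruct",
    "gguf_repo": "mradermacher/Llama-3.2-1B-Instruct-Uncensored-i1-GGUF",
    "model_type": "llama",
    "gguf_file": "Llama-3.2-1B-Instruct-Uncensored.i1-Q4_0.gguf",
}

def check_model_type(model_type: str) -> bool:
    """Mirror the behavior described above: log supported types on a miss."""
    if model_type not in GGUF_TYPES:
        print(f"unsupported gguf model type: {model_type}; "
              f"supported: {sorted(GGUF_TYPES)}")
        return False
    return True
```

Note that `gemma2` is in the GGUF list while `gemma3` is transformers-only at the time of writing, so the same model family can pass in one format and fail in the other.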