---
base_model: black-forest-labs/FLUX.1-schnell
license: apache-2.0
model_creator: black-forest-labs
model_name: FLUX.1-schnell
quantized_by: Second State Inc.
language:
  - en
tags:
  - text-to-image
  - image-generation
  - flux
---

# FLUX.1-schnell-GGUF

## Original Model

[black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell)

## Run with `sd-api-server`

Go to the [sd-api-server](https://github.com/LlamaEdge/sd-api-server/blob/main/README.md) repository for more information.

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [ae-f16.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/ae-f16.gguf) | f16 | 16 | 168 MB | |
| [clip_l-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/clip_l-Q8_0.gguf) | Q8_0 | 8 | 131 MB | |
| [clip_l-f16.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/clip_l-f16.gguf) | f16 | 16 | 246 MB | |
| [flux1-schnell-Q4_0.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/flux1-schnell-Q4_0.gguf) | Q4_0 | 4 | 6.69 GB | |
| [flux1-schnell-Q4_1.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/flux1-schnell-Q4_1.gguf) | Q4_1 | 4 | 6.69 GB | |
| [flux1-schnell-Q5_0.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/flux1-schnell-Q5_0.gguf) | Q5_0 | 5 | 6.69 GB | |
| [flux1-schnell-Q5_1.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/flux1-schnell-Q5_1.gguf) | Q5_1 | 5 | 6.69 GB | |
| [flux1-schnell-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/flux1-schnell-Q8_0.gguf) | Q8_0 | 8 | 6.69 GB | |
| [flux1-schnell-f16.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/flux1-schnell-f16.gguf) | f16 | 16 | 6.69 GB | |
| [t5xxl-Q2_K.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-Q2_K.gguf) | Q2_K | 2 | 1.61 GB | |
| [t5xxl-Q3_K.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-Q3_K.gguf) | Q3_K | 3 | 2.10 GB | |
| [t5xxl-Q4_0.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-Q4_0.gguf) | Q4_0 | 4 | 2.75 GB | |
| [t5xxl-Q4_K.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-Q4_K.gguf) | Q4_K | 4 | 2.75 GB | |
| [t5xxl-Q5_0.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-Q5_0.gguf) | Q5_0 | 5 | 3.36 GB | |
| [t5xxl-Q5_1.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-Q5_1.gguf) | Q5_1 | 5 | 3.67 GB | |
| [t5xxl-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-Q8_0.gguf) | Q8_0 | 8 | 5.20 GB | |
| [t5xxl-f16.gguf](https://huggingface.co/second-state/FLUX.1-schnell-GGUF/blob/main/t5xxl-f16.gguf) | f16 | 16 | 9.79 GB | |

**Quantized with stable-diffusion.cpp `master-64d231f`.**
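
Running FLUX.1-schnell requires the diffusion model plus its three companion files: the VAE (`ae-*`), the CLIP-L text encoder (`clip_l-*`), and the T5-XXL text encoder (`t5xxl-*`). Below is a minimal download sketch using `huggingface_hub`; the specific quantization levels chosen here are just an example, so pick whichever rows from the table fit your hardware.

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# The quant levels below are examples; any row from the table above works.
from huggingface_hub import hf_hub_download

REPO_ID = "second-state/FLUX.1-schnell-GGUF"
FILES = [
    "flux1-schnell-Q4_0.gguf",  # diffusion model
    "ae-f16.gguf",              # VAE
    "clip_l-Q8_0.gguf",         # CLIP-L text encoder
    "t5xxl-Q4_0.gguf",          # T5-XXL text encoder
]

for filename in FILES:
    path = hf_hub_download(repo_id=REPO_ID, filename=filename)
    print(f"Downloaded {filename} -> {path}")
```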
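
Once `sd-api-server` is running with these files (see the linked README for the launch command and flags), it serves an OpenAI-style image generation endpoint. The sketch below sends a request with Python's standard library; the port, endpoint path, and field names are assumptions based on the `sd-api-server` documentation, so verify them against the README before use.

```python
# Request sketch; the port, endpoint path, JSON fields, and model name are
# assumptions based on the sd-api-server README -- verify them there.
import json
import urllib.request

payload = {
    "model": "flux1-schnell",  # whatever model name the server was started with (assumed)
    "prompt": "a photo of an astronaut riding a horse on mars",
}

req = urllib.request.Request(
    "http://localhost:8080/v1/images/generations",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```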