---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Midnight-Miqu-70B-v1.5 - EXL2 2.75bpw

This is a 2.75bpw EXL2 quant of [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5).

Details about the model and the merge can be found on the model page linked above. I have not tested this quant extensively beyond confirming that it loads and can hold a chat.

## Quant Details

This is the script used for quantization.

```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Define variables
MODEL_DIR="models/Midnight-Miqu-70B-v1.5"
OUTPUT_DIR="exl2_midnightv15-70b"
MEASUREMENT_FILE="measurements/midnight70b-v15.json"
BIT_PRECISIONS=(6.0 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
    CONVERTED_FOLDER="models/Midnight-Miqu-70B-v1.5_exl2_${BIT_PRECISION}bpw"

    # Skip precisions that have already been converted
    if [ -d "$CONVERTED_FOLDER" ]; then
        echo "Skipping $BIT_PRECISION as $CONVERTED_FOLDER already exists."
        continue
    fi

    # Start each conversion from a clean working directory
    rm -r "$OUTPUT_DIR"
    mkdir "$OUTPUT_DIR"
    mkdir "$CONVERTED_FOLDER"
    python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b "$BIT_PRECISION" -cf "$CONVERTED_FOLDER"
done
```
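As a rough sizing guide (my own back-of-the-envelope estimate, not a figure from the model page): quantized weights take about `params × bpw / 8` bytes, excluding KV cache and other runtime overhead. A quick sketch for each precision in the script above:

```bash
#!/bin/bash
# Approximate weight size for a 70B-parameter model at each bpw quantized above.
# Rule of thumb: bytes ≈ params * bpw / 8 (weights only; real files differ slightly
# because some tensors are kept at higher precision).
PARAMS=70000000000
for BPW in 6.0 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25; do
    SIZE_GIB=$(awk -v p="$PARAMS" -v b="$BPW" 'BEGIN { printf "%.1f", p * b / 8 / 2^30 }')
    echo "${BPW} bpw ~= ${SIZE_GIB} GiB"
done
```

By this estimate the 2.75bpw quant comes to roughly 22 GiB of weights, which is the usual motivation for this precision: it leaves headroom for context on a single 24 GB GPU.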