---
license: other
language:
- en
pipeline_tag: text2text-generation
tags:
- alpaca
- llama
- chat
- gpt4
inference: false
---
# GPT4 Alpaca LoRA 30B - 4bit GGML

This repo contains 4-bit and 5-bit GGML versions of the [Chansung GPT4 Alpaca 30B LoRA model](https://huggingface.co/chansung/gpt4-alpaca-lora-30b).

It was created by merging the LoRA provided in the above repo with the original Llama 30B model, producing the unquantised model [GPT4-Alpaca-LoRA-30B-HF](https://huggingface.co/TheBloke/gpt4-alpaca-lora-30b-HF).

The files in this repo were then quantised to 4-bit and 5-bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
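
If you need to build a compatible `llama.cpp` yourself, a minimal sketch (assuming a standard Linux/macOS toolchain with `git` and `make`) is:

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Pin to the minimum supported commit (May 19th 2023);
# a plain clone of the latest master also works.
git checkout 2d5db48
make
```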

## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin` | q4_0 | 4-bit | 20.3GB | 23GB | 4-bit; lowest accuracy and resource usage of the four. |
| `gpt4-alpaca-lora-30B.ggmlv3.q4_1.bin` | q4_1 | 4-bit | 22.4GB | 25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `gpt4-alpaca-lora-30B.ggmlv3.q5_0.bin` | q5_0 | 5-bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage, slower inference. |
| `gpt4-alpaca-lora-30B.ggmlv3.q5_1.bin` | q5_1 | 5-bit | 24.4GB | 27GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
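
To fetch just one of the files, a direct download is simplest. A sketch, assuming this repo's id is `TheBloke/gpt4-alpaca-lora-30B-GGML` (check this model page's URL to confirm):

```
# Repo id below is an assumption - verify it against the model page URL.
wget https://huggingface.co/TheBloke/gpt4-alpaca-lora-30B-GGML/resolve/main/gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin
```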

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```
Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 6 cores/12 threads, use `-t 6`.
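
If you're unsure of your physical core count, these commands report it (a sketch; the Linux fields come from `lscpu`'s standard output):

```
# Linux: physical cores = "Core(s) per socket" x "Socket(s)"
lscpu | grep -E '^(Core\(s\) per socket|Socket\(s\)):'
# macOS:
sysctl -n hw.physicalcpu
```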

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as in the example below.
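
For example, a chat-mode invocation on a 6-core machine might look like this (same settings as above, with the prompt replaced by the interactive flags):

```
./main -t 6 -m gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```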

## How to run in `text-generation-webui`

Create a model directory that has `ggml` (case sensitive) in its name. Then put the desired `.bin` file in that model directory.
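
For example (the directory name below is just an illustration; any name containing `ggml` works):

```
mkdir -p text-generation-webui/models/gpt4-alpaca-lora-30B-ggml
mv gpt4-alpaca-lora-30B.ggmlv3.q4_0.bin text-generation-webui/models/gpt4-alpaca-lora-30B-ggml/
```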

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.

# Original GPT4 Alpaca LoRA model card

This repository provides a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process with the following settings on an 8xA100 (40G) DGX system.
- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
- Training command:
```shell
python finetune.py \
    --base_model='decapoda-research/llama-30b-hf' \
    --data_path='alpaca_data_gpt4.json' \
    --num_epochs=10 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./gpt4-alpaca-lora-30b' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --batch_size=... \
    --micro_batch_size=...
```

You can see how the training went in the W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/w3syd157?workspace=user-chansung18).