TheBloke committed
Commit e593777
1 Parent(s): a1535c4

Initial GPTQ model commit.
README.md ADDED
@@ -0,0 +1,192 @@
+ ---
+ datasets:
+ - QingyiSi/Alpaca-CoT
+ - teknium/GPT4-LLM-Cleaned
+ - teknium/GPTeacher-General-Instruct
+ - metaeval/ScienceQA_text_only
+ - hellaswag
+ - openai/summarize_from_feedback
+ - riddle_sense
+ - gsm8k
+ - OpenAssistant/oasst1
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
+ inference: false
+ license: other
+ ---
+
+ <!-- header start -->
+ <div style="width: 100%;">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+ </div>
+ <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ </div>
+ </div>
+ <!-- header end -->
+
+ # OpenAccess AI Collective's Hippogriff 30B Chat GPTQ
+
+ This is a GPTQ format, quantised 4-bit model of [OpenAccess AI Collective's Hippogriff 30B Chat](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat).
+
+ It is the result of quantising to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
+
+ ## Repositories available
+
+ * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/hippogriff-30B-Chat-GPTQ).
+ * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/hippogriff-30B-Chat-GGML).
+ * [float16 HF format model for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat).
+
+ ## How to easily download and use this model in text-generation-webui
+
+ Open text-generation-webui as normal. (A scripted download alternative is sketched after these steps.)
+
+ 1. Click the **Model tab**.
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/hippogriff-30B-Chat-GPTQ`.
+ 3. Click **Download**.
+ 4. Wait until it says it's finished downloading.
+ 5. Click the **Refresh** icon next to **Model** in the top left.
+ 6. In the **Model drop-down**, choose the model you just downloaded: `hippogriff-30B-Chat-GPTQ`.
+ 7. If you see an error in the bottom right, ignore it - it's temporary.
+ 8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = None`, `model_type = Llama`.
+ 9. Click **Save settings for this model** in the top right.
+ 10. Click **Reload the Model** in the top right.
+ 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
+
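+ If you prefer to download outside the UI, here is a minimal sketch using the `huggingface_hub` Python library (an addition to the original card; the local directory path is an arbitrary example, not something the card specifies):
+
+ ```
+ # Sketch: fetch the whole repo into text-generation-webui's models folder.
+ # Assumes `pip install huggingface_hub`; the local_dir path is illustrative.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="TheBloke/hippogriff-30B-Chat-GPTQ",
+     local_dir="text-generation-webui/models/hippogriff-30B-Chat-GPTQ",
+ )
+ ```
+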
+ ## Provided files
+
+ **Compatible file - hippogriff-30b-GPTQ-4bit.act.order.safetensors**
+
+ This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.
+
+ It was created without group_size to minimise VRAM usage, and with `--act-order` to improve inference quality.
+
+ * `hippogriff-30b-GPTQ-4bit.act.order.safetensors`
+ * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
+ * Works with AutoGPTQ (a loading sketch follows this list)
+ * Works with text-generation-webui one-click-installers
+ * Parameters: Groupsize = None. Act-order.
+ * Command used to create the GPTQ:
+ ```
+ python llama.py ehartford_Hippogriff-30B-Chat c4 --wbits 4 --act-order --true-sequential --save_safetensors hippogriff-30b-GPTQ-4bit-128g.act-order.safetensors
+ ```
+
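+ As noted above, the file also loads with AutoGPTQ. A minimal loading sketch, assuming `auto-gptq` and `transformers` are installed (the prompt and generation settings are illustrative, not from the original card):
+
+ ```
+ # Sketch: load the quantised model with AutoGPTQ and run one generation.
+ # model_basename matches the .safetensors file in this repo, minus the extension.
+ from auto_gptq import AutoGPTQForCausalLM
+ from transformers import AutoTokenizer
+
+ repo = "TheBloke/hippogriff-30B-Chat-GPTQ"
+ tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)
+ model = AutoGPTQForCausalLM.from_quantized(
+     repo,
+     model_basename="hippogriff-30b-GPTQ-4bit.act.order",
+     use_safetensors=True,
+     device="cuda:0",
+ )
+
+ prompt = "USER: What is a hippogriff?\nASSISTANT:"
+ inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
+ output = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```
+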
+ <!-- footer start -->
+ ## Discord
+
+ For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)
+
+ ## Thanks, and how to contribute
+
+ Thanks to the [chirper.ai](https://chirper.ai) team!
+
+ I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
+
+ If you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and to work on new AI projects.
+
+ Donors will get priority support on any and all AI/LLM/model questions, plus other benefits.
+
+ * Patreon: https://patreon.com/TheBlokeAI
+ * Ko-Fi: https://ko-fi.com/TheBlokeAI
+
+ **Patreon special mentions**: Aemon Algiz; Talal Aujan; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; senxiiz. Thank you all, and to all my other generous patrons and donors.
+ <!-- footer end -->
+
+ # Original model card: OpenAccess AI Collective's Hippogriff 30B Chat
+
+
+ # Hippogriff 30B Chat
+
+ [<img src="https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat/resolve/main/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+
+ Hippogriff 30B Chat is an experiment that builds on Manticore with new datasets, while removing a few more instruction and chat datasets. It also includes a de-duped subset of the Pygmalion dataset, and it removes all Alpaca-style prompts using `###` in favor of chat-only prompts using `USER:`/`ASSISTANT:`, as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>`, `<|user|>` and `<|model|>` tokens. (A short sketch of the chat format follows below.)
+
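+ As a concrete illustration, here is a minimal sketch of assembling a prompt in the `USER:`/`ASSISTANT:` style described above (the turns and the exact whitespace conventions are illustrative assumptions; see the Examples section for real transcripts):
+
+ ```
+ # Sketch: build a chat-style prompt in the USER:/ASSISTANT: format.
+ # The newline conventions are an assumption, not specified in the card.
+ turns = [
+     ("USER", "What is a hippogriff?"),
+     ("ASSISTANT", "A legendary creature with the front half of an eagle and the rear of a horse."),
+     ("USER", "Where does the name come from?"),
+ ]
+ prompt = "\n".join(f"{role}: {text}" for role, text in turns) + "\nASSISTANT:"
+ print(prompt)
+ ```
+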
+ Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/EqrvvehG) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)
+
+ # Training Datasets
+
+ Hippogriff 30B Chat is a Llama 30B model fine-tuned on the following datasets:
+
+ - OpenAssistant/oasst1 - cleaned dataset, similar to Guanaco
+ - synthetic jokes generation and explanation derived from reddit jokes dataset
+ - synthetic prose generation and rewriting self-chat
+ - Q&A based on provided context
+ - self instruct augmented logic_inference_oa
+ - de-duped pygmalion dataset, filtered down to RP data, cleaned, English only, 25%
+ - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
+ - hellaswag, updated for detailed explanations with 30K+ rows
+ - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
+ - [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered) - synthetic self-chat dataset derived from about 1000 rows
+ - [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
+ - [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
+ - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
+ - [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split
+ - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
+ - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
+
+ Hippogriff differs from Manticore in that it does not use the WizardLM, WizardVicuna, Alpaca, or ShareGPT datasets.
+
+ # Initial thoughts
+
+ Hippogriff follows instructions pretty well. It still struggles with anything that has to do with math. Its prose is much better than Manticore's, and it handles in-context QA much better.
+
+
+ # Shoutouts
+
+ Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, 0x000011b for the RP dataset, and everyone in our AI Discord community for their support.
+
+ # Demo
+
+ A Spaces demo is not provided for this release because 30B models do not currently fit in the available VRAM.
+
+ ## Build
+
+ Hippogriff was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
+ - 1.5 epochs taking approximately 12 hours. No further epochs will be released.
+
+ ## Bias, Risks, and Limitations
+
+ Hippogriff has not been aligned to human preferences with techniques like RLHF, or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
+ Hippogriff was fine-tuned from the base model LLaMA 30B; please refer to its model card's Limitations section for relevant information.
+
+ The core functionality of Llama is taking a string of text and predicting the next token. While language models are widely used for tasks other than this,
+ there are a lot of unknowns with this work. When prompting Hippogriff it is important to remember that the statistically most likely next token is often
+ not the token that produces the most "accurate" text. Never depend upon Hippogriff to produce factually accurate output.
+
+ Llama was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case Hippogriff may produce
+ socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
+
+ As with all language models, it is hard to predict in advance how Hippogriff will respond to particular prompts and offensive content may occur without warning.
+ We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
+
+ ## Examples
+
+ TBD
+
+ ```
+ USER: write an original short story of at least 300 words in the style of Zora Neale Hurston. Incorporate the following literary devices: Euphemism and Foreshadowing.
+ ASSISTANT: Once upon a time, there was a black girl named Rose, whose skin shined like a pearl beneath the hot sun. She lived with her mama and daddy on their farm, where they grew crops to feed the family and neighbors. On this particular day, Rose felt a tinge of excitement as she watched her parents work diligently in their fields. 'It's almost harvest season!' she thought, imagining the delicious meals that awaited them. But, soon after, it became apparent that something sinister lurked amidst the rows of fresh produce.
+
+ As her father hoed around a vine-covered cucumber plant, he unearthed a peculiar rock. It was the size of an acorn but had a strange shape, unlike anything he or his wife had ever seen. "What have we here?" He said, turning to his wife with curiosity and excitement. She examined the rock for herself and declared, "Why, it looks like a piece of coal!" But as both mother and daughter looked closer, it seemed to be far more than a mere piece of coal. A sense of unease crept up Rose's spine when she noticed that the rock resembled the dark face of a baby in the womb. As Rose tried to suppress a scream, her mother gasped, "Sweet Jesus! Let's bury this thing before anyone sees!" And so, they quickly hid the stone back underground, hoping that nobody would ever discover their secret.
+ ```
+
+ ```
+ USER: tell me 5 words that start with "ex" and then the sum of 33 and 77
+ ASSISTANT: Excite, exotic, exhibit, excitable, excessive
+ ```
+
+
+ ```
+ USER: sally (a girl) has 3 brothers. Each brother has 2 sisters. how many sisters does sally have?
+ ASSISTANT: Sally has 3 brothers, and each brother has 2 sisters. To find out how many sisters Sally has in total, we need to multiply the number of brothers by the number of sisters per brother, then add 1 to account for Sally herself. So, 3 * 2 + 1 = 6 sisters in total
+ ```
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "_name_or_path": "/workspace/mnt/manticore-30b-chat/checkpoint-50/",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 6656,
+   "initializer_range": 0.02,
+   "intermediate_size": 17920,
+   "max_position_embeddings": 2048,
+   "max_sequence_length": 2048,
+   "model_type": "llama",
+   "num_attention_heads": 52,
+   "num_hidden_layers": 60,
+   "pad_token_id": 0,
+   "rms_norm_eps": 1e-06,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.30.0.dev0",
+   "use_cache": false,
+   "vocab_size": 32000
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 0,
+   "transformers_version": "4.30.0.dev0"
+ }
hippogriff-30b-GPTQ-4bit.act.order.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d45613c3b91f188a7612b7ab0f36332cb795f87b509924de5dbf4591eb21acae
+ size 16940128528
quantize_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "bits": 4,
+   "group_size": -1,
+   "damp_percent": 0.01,
+   "desc_act": true,
+   "sym": true,
+   "true_sequential": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": false,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 2048,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }