bhenrym14 committed
Commit f0f44f9 · Parent: 3d4ad3c
Create README.md
Files changed (1): README.md (+25, -0)
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
- ehartford/dolphin
---

# Airophin: An NTK-by-Parts RoPE Scaled QLoRA Fine-tune of Llama-2-13b (LoRA weights)

<!-- LoRA weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-LoRA -->
GPTQ weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-GPTQ
fp16 weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-fp16

## Overview

This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens. There were two training phases:
1. The model was first trained on a long-context (7000-8192 token) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an Orca-like dataset (GPT4 split only), amounting to roughly 110 million tokens. An Airoboros-style training prompt was used instead of the dolphin system prompt. Training was done with partial NTK (NTK-by-parts) RoPE scaling applied (scale factor of 4); a sketch of this scaling approach is shown after this list. This phase took ~20 hours.
2. The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) for 2 epochs, with the same scaling approach. This phase took ~15 hours.
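
For readers unfamiliar with NTK-by-parts (partial NTK) scaling, the sketch below illustrates the general idea: each RoPE frequency band is interpolated by an amount that depends on how many rotations it completes within the original context window, so high-frequency bands are left untouched while low-frequency bands are interpolated by the scale factor. This is only an illustration; the ramp bounds `alpha`/`beta` and the original context length of 4096 are assumed values, not this model's exact training configuration.

```python
import math
import torch

def ntk_by_parts_inv_freq(dim, base=10000.0, scale=4.0,
                          original_max_pos=4096, alpha=1.0, beta=32.0):
    """Illustrative NTK-by-parts RoPE frequencies (not this repo's exact code)."""
    # Standard RoPE inverse frequencies: theta_d = base^(-2d/dim)
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    # How many full rotations each frequency completes over the original context
    rotations = original_max_pos * inv_freq / (2 * math.pi)
    # Ramp from 0 (fully interpolate) to 1 (leave unscaled), per frequency band
    ramp = torch.clamp((rotations - alpha) / (beta - alpha), 0.0, 1.0)
    # Low-frequency bands are interpolated by `scale`; high-frequency bands kept as-is
    return inv_freq / scale * (1.0 - ramp) + inv_freq * ramp
```

The blended `inv_freq` would then replace the standard rotary-embedding frequencies during both training and inference.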

**This is a QLoRA fine-tune (rank 64)**.
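
For orientation, a typical QLoRA setup with a rank-64 adapter might look like the sketch below. Only the rank (64) is stated above; the quantization settings, `lora_alpha`, dropout, and target modules are assumed, commonly used values rather than this model's actual training configuration.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base model (standard QLoRA setup)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Rank-64 LoRA adapter; everything except r=64 is an assumed, typical value
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```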

All training was performed with 1x RTX 6000 Ada.

For the full model card, including how to use PNTK, see either of the two merged models linked above.
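
As a rough starting point, the adapter in this repo can be attached to a base checkpoint with `peft`, as sketched below. The base-model repo id is an assumption (it is not specified in this card), and, per the note above, stock `transformers` will not apply the partial NTK scaling on its own; see the merged model cards for that step.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-13b-hf"  # assumed base checkpoint, not specified in this card

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the rank-64 LoRA adapter from this repository
model = PeftModel.from_pretrained(base_model, "bhenrym14/airophin-13b-pntk-16k-LoRA")
# NOTE: the partial NTK (NTK-by-parts) RoPE scaling must still be applied to the
# rotary embeddings (see the merged model cards); otherwise long contexts will degrade.
```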