---
datasets:
  - jondurbin/airoboros-gpt4-1.4.1
  - ehartford/dolphin
---

# Airophin: An NTK-by-Parts RoPE Scaled QLoRA Fine-tune of Llama-2-13b (LoRA weights)

GPTQ weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-GPTQ

fp16 weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-fp16

## Overview

This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens. There are two training phases:

  1. It is first trained on a long-context (7000-8192 token) subset of dolphin, an orca-like dataset (GPT4 split only). This amounts to roughly 110 million tokens. An airoboros-style training prompt was used instead of the dolphin system prompt. Training was done with partial NTK scaling applied (scale factor of 4; see the sketch after this list). This phase took ~20 hours.
  2. The model was then finetuned on Jon Durbin's Airoboros GPT4 1.4.1, with the same scaling approach, for 2 epochs. This took ~15 hours.
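
Partial NTK ("NTK-by-parts") scaling interpolates only the low-frequency RoPE dimensions, leaves the high-frequency ones untouched, and blends in between based on each dimension's wavelength relative to the original context window. The sketch below illustrates the idea only; the function name and the blend thresholds `alpha`/`beta` are illustrative assumptions, not the exact code used for this training run.

```python
import torch

def ntk_by_parts_inv_freq(
    dim: int = 128,                # per-head rotary dim for Llama-2-13b (5120 / 40 heads)
    base: float = 10000.0,         # RoPE theta
    scale: float = 4.0,            # scale factor used here (16384 / 4096)
    original_max_pos: int = 4096,  # Llama-2 pretraining context length
    alpha: float = 1.0,            # blend thresholds (assumed values)
    beta: float = 32.0,
) -> torch.Tensor:
    """Inverse RoPE frequencies with NTK-by-parts ("partial NTK") scaling."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    wavelength = 2 * torch.pi / inv_freq
    # Number of full rotations each dimension completes over the original context.
    ratio = original_max_pos / wavelength
    # ramp = 0 -> long wavelength, fully interpolate; ramp = 1 -> short wavelength, keep as-is.
    ramp = torch.clamp((ratio - alpha) / (beta - alpha), 0.0, 1.0)
    return inv_freq / scale * (1.0 - ramp) + inv_freq * ramp
```

Frequencies produced this way would replace the `inv_freq` buffer of the rotary embedding, so positions up to 16384 are covered without compressing the short-wavelength dimensions that encode local token order.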

This is a QLoRA fine-tune (rank 64).
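
For reference, a minimal sketch of a rank-64 QLoRA setup with `peft` and `bitsandbytes` is shown below. Only the rank comes from this card; the LoRA alpha, dropout, target modules, and base repo id are assumptions, and the actual training configuration may differ.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization, as typically used for QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",   # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Rank 64 is stated in this card; the remaining hyperparameters are assumptions.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```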

All training was performed with 1x RTX 6000 Ada.

For the full model card, including how to use PNTK, see either of the two merged models linked above.