Adding Evaluation Results

#4
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -39,4 +39,17 @@ Remember that with lower parameter sizes, the structure of the prompt becomes mo
  - "Compare and contrast your answer with alternatives"

  ### Coming Soon
- - Tweet fix for 13B and 7B - lower model sizes seem to be extremely sensitive to hashtags at the end of training data responses, especially at longer cutoffs
+ - Tweet fix for 13B and 7B - lower model sizes seem to be extremely sensitive to hashtags at the end of training data responses, especially at longer cutoffs
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ausboss__llama-30b-supercot)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 53.06 |
+ | ARC (25-shot)        | 64.85 |
+ | HellaSwag (10-shot)  | 85.08 |
+ | MMLU (5-shot)        | 56.56 |
+ | TruthfulQA (0-shot)  | 53.96 |
+ | Winogrande (5-shot)  | 80.03 |
+ | GSM8K (5-shot)       | 11.9  |
+ | DROP (3-shot)        | 19.07 |
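
For reference, the leaderboard's "Avg." is the unweighted mean of the seven per-task scores in the table. Below is a minimal sketch that verifies the average and pulls the per-sample records from the linked details dataset; the config name (`harness_arc_challenge_25`) and split (`latest`) follow the leaderboard's usual naming scheme for details repos but are assumptions here, not confirmed by this PR.

```python
from datasets import load_dataset

# Per-task scores copied from the table in this PR.
scores = {
    "ARC (25-shot)": 64.85,
    "HellaSwag (10-shot)": 85.08,
    "MMLU (5-shot)": 56.56,
    "TruthfulQA (0-shot)": 53.96,
    "Winogrande (5-shot)": 80.03,
    "GSM8K (5-shot)": 11.9,
    "DROP (3-shot)": 19.07,
}

# "Avg." is the unweighted mean of the seven benchmark scores.
print(f"Avg.: {sum(scores.values()) / len(scores):.2f}")  # -> Avg.: 53.06

# Per-sample predictions behind these numbers live in the linked details
# dataset. The config ("harness_arc_challenge_25") and split ("latest")
# are assumptions based on how leaderboard details repos are typically laid out.
arc_details = load_dataset(
    "open-llm-leaderboard/details_ausboss__llama-30b-supercot",
    "harness_arc_challenge_25",
    split="latest",
)
print(arc_details)
```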