ChuckMcSneed committed
Commit 7dd26c0
1 Parent(s): 576b8fc

Update README.md

Files changed (1)
  1. README.md (+2 -2)
README.md CHANGED
@@ -173,7 +173,7 @@ Alpaca.
  | P | 6 | 4.75 |4.25 |5.25 |5.25 |5.5|5|
  | Total | 17 | 16.5 |14.5 |18.5 |16.5 |16|18.25|
 
- ## Open LLM leaderboard
+ ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
  [Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
  |Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
  |---------------------------------------|-------|-----|---------|-----|----------|----------|-----|
@@ -182,7 +182,7 @@ Alpaca.
  |Difference |0.18 |0.43 |-0.14 |-0.16|-0.93 |-0.23 |2.12 |
 
  Performance here is decent. It was #5 on the leaderboard among 70b models when I submitted it. This leaderboard is currently quite useless though, some 7b braindead meme merges have high scores there, claiming to be the next GPT4. At least I don't pretend that my models aren't a meme.
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+
  Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__SMaxxxer-v1-70b)
 
  | Metric |Value|
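
The "Detailed results" link in the diff above points at a Hugging Face dataset repo. Below is a minimal sketch (not part of the commit) of pulling those results with the `datasets` library; it assumes the repo exposes its per-benchmark results as named configs, so the config names are discovered at runtime rather than hard-coded.

```python
# Sketch: fetch the detailed Open LLM Leaderboard results referenced above.
# Assumption: the repo publishes one config per benchmark/run; we look them up
# instead of guessing their names.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_ChuckMcSneed__SMaxxxer-v1-70b"

# Discover the available configs for this results repo.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first config as an example; the returned DatasetDict holds the
# per-sample evaluation records for that benchmark.
details = load_dataset(repo, configs[0])
print(details)
```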