**10shot**

| | # tokens | copa | HellaSwag | boolq | sentiNeg | AVG |
| ------------------------------------------------------------ | :------: | :------: | :-------: | :---------: | :---------: | :----------: |
| [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) | 20B | 0.78 | 0.47 | <u>0.68</u> | 0.87 | 70.12 |
| [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) | 40B | **0.80** | 0.47 | **0.71** | 0.73 | 67.81 |
| [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 15B | 0.79 | **0.48** | 0.67 | <u>0.94</u> | **71.82** |
| llama-pro-ko-8b | 10B | **0.80** | **0.48** | 0.60 | **0.97** | <u>71.12</u> |

#### Open Ko LLM Benchmark

| | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | AVG |
| ------------------------------------------------------------ | :-------: | :----------: | :-------: | :-----------: | :-------------: | :-------: |
| [Llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) | 31.91 | 41.68 | 34.11 | 48.49 | 30.34 | 37.31 |
| [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 40.02 | 50.27 | 27.60 | 38.67 | 42.15 | 39.74 |
| llama-pro-ko-8b | **40.19** | **51.26** | **36.80** | **40.24** | **43.80** | **42.46** |

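The AVG column above appears to be the plain arithmetic mean of the five Ko benchmark scores, and it reproduces from the displayed values. A minimal sanity check (model names copied from the table):

```python
# Verify the AVG column of the Open Ko LLM Benchmark table:
# AVG = mean of (Ko-ARC, Ko-HellaSwag, Ko-MMLU, Ko-TruthfulQA, Ko-CommonGen V2).
scores = {
    "Llama-2-7b":               [31.91, 41.68, 34.11, 48.49, 30.34],
    "beomi/open-llama-2-ko-7b": [40.02, 50.27, 27.60, 38.67, 42.15],
    "llama-pro-ko-8b":          [40.19, 51.26, 36.80, 40.24, 43.80],
}

for model, vals in scores.items():
    avg = round(sum(vals) / len(vals), 2)
    print(f"{model}: {avg}")
# llama-pro-ko-8b comes out to 42.46, matching the table.
```
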
### English Evaluation

#### Open LLM Benchmark