metterian committed
Commit 9246209
1 Parent(s): 916cdc8

Update README.md

Files changed (1)
  1. README.md +13 -8
README.md CHANGED
@@ -39,6 +39,19 @@ LLaMA-Pro-Ko's performance is evaluated on two fronts: its proficiency in English

### Korean Evaluation

+
+ #### Open Ko LLM Benchmark
+
+ | | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | AVG |
+ | ------------------------------------------------------------ | --------- | ------------ | --------- | ------------- | --------------- | --------- |
+ | [Llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) | 31.91 | 41.68 | 34.11 | 48.49 | 30.34 | 37.31 |
+ | [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 40.02 | 50.27 | 27.60 | 38.67 | 42.15 | 39.74 |
+ | llama-pro-ko-8b | **40.19** | **51.26** | **36.80** | **40.24** | **43.8** | **42.46** |
+
+
+
+
+
#### KoBEST

**5shot**
@@ -61,14 +74,6 @@ LLaMA-Pro-Ko's performance is evaluated on two fronts: its proficiency in English



- #### Open Ko LLM Benchmark
-
- | | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | AVG |
- | ------------------------------------------------------------ | --------- | ------------ | --------- | ------------- | --------------- | --------- |
- | [Llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) | 31.91 | 41.68 | 34.11 | 48.49 | 30.34 | 37.31 |
- | [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 40.02 | 50.27 | 27.60 | 38.67 | 42.15 | 39.74 |
- | llama-pro-ko-8b | **40.19** | **51.26** | **36.80** | **40.24** | **43.8** | **42.46** |
-


### English Evaluation
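
The AVG column in the Open Ko LLM Benchmark table moved by this commit appears to be the unweighted mean of the five task scores (the README does not state this explicitly). A minimal sketch checking that, with the score values copied from the table above:

```python
# Check that AVG is the plain mean of the five Open Ko LLM Benchmark task scores.
# Values are copied from the table in this commit; rounding to two decimals
# reproduces the reported AVG column (37.31, 39.74, 42.46).
scores = {
    "Llama-2-7b":               [31.91, 41.68, 34.11, 48.49, 30.34],  # reported AVG 37.31
    "beomi/open-llama-2-ko-7b": [40.02, 50.27, 27.60, 38.67, 42.15],  # reported AVG 39.74
    "llama-pro-ko-8b":          [40.19, 51.26, 36.80, 40.24, 43.80],  # reported AVG 42.46
}

for model, vals in scores.items():
    avg = round(sum(vals) / len(vals), 2)
    print(f"{model}: {avg}")
```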