Update README.md

README.md
| Model                                                                                     | params   | 0-shot     | 5-shot     | 10-shot    | 50-shot    |
|-------------------------------------------------------------------------------------------|----------|------------|------------|------------|------------|
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B)                           | 7.5B     | 0.6723     | 0.6731     | 0.6769     | 0.7119     |
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) †     | 1.2B     | 0.6696     | 0.6477     | 0.6419     | 0.6514     |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) *                             | 6.0B     | 0.7345     | 0.7287     | 0.7277     | 0.7479     |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (ours)  | 1.3B     | 0.7196     | 0.7193     | 0.7204     | 0.7206     |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (ours)  | **3.8B** | **0.7595** | **0.7608** | **0.7638** | **0.7788** |

<img src="https://user-images.githubusercontent.com/19511788/192492576-cdd80c5c-7c90-43e3-8a4b-7a8486878f23.png" width="800px">

### HellaSwag (F1)

| Model                                                                                     | params   | 0-shot     | 5-shot     | 10-shot    | 50-shot    |
|-------------------------------------------------------------------------------------------|----------|------------|------------|------------|------------|
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B)                           | 7.5B     | 0.4261     | 0.4370     | 0.4409     | 0.4517     |
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) †     | 1.2B     | 0.4036     | 0.4000     | 0.4011     | 0.4214     |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) *                             | 6.0B     | 0.4599     | 0.4560     | 0.4616     | 0.4754     |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (ours)  | 1.3B     | 0.4013     | 0.3984     | 0.4170     | 0.4416     |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (ours)  | **3.8B** | **0.4438** | **0.4786** | **0.4737** | **0.4822** |

<img src="https://user-images.githubusercontent.com/19511788/192492585-a976ee38-2967-446a-b577-94f219228f4d.png" width="800px">
<p><strong>†</strong> The model card for this model reports evaluation results on the KOBEST dataset, but when we evaluated the model with the prompts described in the KOBEST paper, we could not reproduce those numbers. On checking the paper, we found that the reported results closely match its fine-tuning results. Because we evaluate by prompt-based generation without fine-tuning the model, our numbers may differ from those in the model card.</p>
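For reference, here is a minimal sketch of how n-shot scores like the ones above are typically produced. It assumes evaluation through EleutherAI's lm-evaluation-harness with KOBEST task support; the exact task names and branch are assumptions, not confirmed by this diff, and the invocation simply mirrors the `python main.py \` command that precedes this section in the README. The `--num_fewshot` value corresponds to a single table column, so each column requires its own run.

```bash
# Hypothetical reproduction sketch -- assumes a checkout of
# EleutherAI/lm-evaluation-harness in which the KOBEST tasks
# (assumed names: kobest_copa, kobest_hellaswag) are registered.
# Vary --num_fewshot over 0, 5, 10, 50 to fill the table columns.
python main.py \
   --model gpt2 \
   --model_args pretrained=EleutherAI/polyglot-ko-1.3b \
   --tasks kobest_copa,kobest_hellaswag \
   --num_fewshot 5 \
   --device cuda:0
```

Swapping `pretrained=` for any of the other checkpoints listed above (e.g. `skt/ko-gpt-trinity-1.2B-v0.5`) would evaluate that model under the same prompt-based, no-fine-tuning protocol described in the † note.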