yinnxinn committed
Commit abec3fa
1 Parent(s): c3f49db

1.3B translate error

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -31,7 +31,7 @@ The MacBERT with 325M parameters is pre-trained for Chinese NLI tasks, and finet
 ## 模型信息 Model Information
 
 
-为了提高模型在NLI上的效果,我们收集了大量NLI进行预训练,随后在FewCLUE的OCNLI任务进行微调,所有的训练均基于我们提出的UniMC框架。最终结果表明,3.25亿参数的模型通过我们的训练策略在NLI任务上可以达到1.3亿参数大模型相当的效果。
+为了提高模型在NLI上的效果,我们收集了大量NLI进行预训练,随后在FewCLUE的OCNLI任务进行微调,所有的训练均基于我们提出的UniMC框架。最终结果表明,3.25亿参数的模型通过我们的训练策略在NLI任务上可以达到13亿参数大模型相当的效果。
 
 To improve the model performance on the NLI task, we collected numerous NLI datasets for pre-training. Then the model was finetuned on a specific NLI task, OCNLI from FewCLUE. All the training is based on the UniMC framework we proposed. The results show that our model with 325M parameters could achieve comparable performance to the model with 1.3B parameters on the NLI task via our training strategies.
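
The change above only corrects the Chinese rendering of the 1.3B figure; the English description of the NLI setup is unchanged. As an illustration of how the kind of model this README describes is typically queried, here is a minimal sketch of premise/hypothesis inference with Hugging Face transformers. It assumes a sequence-classification NLI checkpoint; `your-org/macbert-325m-nli` is a hypothetical placeholder id (the real repository name does not appear in this commit), and the entailment/neutral/contradiction label order must be read from the checkpoint's config rather than assumed.

```python
# Minimal sketch: Chinese NLI inference with a MacBERT-style
# sequence-classification checkpoint via Hugging Face transformers.
# "your-org/macbert-325m-nli" is a placeholder, not the real repo name.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-org/macbert-325m-nli"  # hypothetical; substitute the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "今天天气很好,我们去公园散步。"
hypothesis = "外面正在下暴雨。"

# NLI models encode the premise/hypothesis pair as a single input sequence.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# The mapping from class index to entailment/neutral/contradiction is
# checkpoint-specific, so look it up in config.id2label instead of assuming it.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```

With a real checkpoint id substituted in, the script prints the predicted relation label for the premise/hypothesis pair.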