dongxiaoqun committed
Commit • e142f2c
1 Parent(s): daaf4f3
Update README.md

README.md CHANGED
@@ -5,19 +5,6 @@ tags:
inference: True
---

-
- Randeng_Pegasus_523M_Summary model (Chinese); its code has been merged into [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
-
- The 523-million-parameter randeng_pegasus_large model was pretrained on 180 GB of Chinese data with sampled gap-sentence ratios, stochastically sampling important sentences. The pretraining task is the same as described in the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).
-
- Different from the English version of PEGASUS, and because SentencePiece is unstable for Chinese, we use jieba and BertTokenizer as the tokenizer in the Chinese PEGASUS model.
-
- The model we provide on the Hugging Face hub is only the pretrained model; it has not been fine-tuned on downstream data yet.
-
- We also pretrained a base model, available as [Randeng_Pegasus_238M_Summary](https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_238M_Summary)
-
-
Task: Summarization

## Usage
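The removed paragraphs describe the model's tokenization choice: segment Chinese text with jieba, then apply a BERT-style WordPiece tokenizer instead of SentencePiece. Below is a minimal sketch of that pipeline, not taken from the commit; it assumes the `jieba` and `transformers` packages, and uses `bert-base-chinese` purely as a stand-in vocabulary (the actual checkpoint ships its own vocabulary via Fengshenbang-LM).

```python
# Illustrative sketch only: jieba word segmentation followed by a BERT-style
# WordPiece tokenizer, as the removed README text describes.
# "bert-base-chinese" is a stand-in vocabulary, not the model's own.
import jieba
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

text = "新能源汽车销量持续增长，带动上游电池材料需求大幅上升。"

# jieba splits the sentence into words; the WordPiece tokenizer then maps each
# word (or its sub-pieces) to vocabulary ids.
words = list(jieba.cut(text))
encoded = tokenizer(" ".join(words), return_tensors="pt")
print(words)
print(encoded.input_ids)
```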
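The `## Usage` section itself is unchanged by this commit and its body is not shown in the diff. As a rough orientation only, here is a hedged sketch of how a PEGASUS summarization checkpoint is commonly driven through the stock `transformers` classes; it assumes the hub id `IDEA-CCNL/Randeng_Pegasus_523M_Summary` and that the checkpoint loads with `PegasusForConditionalGeneration` plus a BERT-style tokenizer, whereas the model card's own Usage section (which may rely on the custom tokenizer shipped with Fengshenbang-LM) is authoritative.

```python
# Hypothetical usage sketch, not taken from the README. Assumes the checkpoint
# loads with the stock transformers PEGASUS model class and a BERT-style
# tokenizer, per the description above; the official Usage section may differ.
from transformers import BertTokenizer, PegasusForConditionalGeneration

model_name = "IDEA-CCNL/Randeng_Pegasus_523M_Summary"  # assumed hub id
model = PegasusForConditionalGeneration.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)

text = "据悉，该公司前三季度营收同比增长两成，净利润创下历史新高，主要得益于海外市场的快速扩张。"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# Beam search with illustrative generation parameters.
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```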