hiroshi-matsuda-rit committed on
Commit
c9d0124
1 Parent(s): 289209d

initial commit

Files changed (6)
  1. README.md +67 -0
  2. config.json +26 -0
  3. pytorch_model.bin +3 -0
  4. special_tokens_map.json +1 -0
  5. tokenizer_config.json +11 -0
  6. vocab.txt +0 -0
README.md CHANGED
@@ -1,3 +1,70 @@
  ---
+ language: ja
  license: mit
+ datasets:
+ - mC4-ja
  ---
+
+ # electra-base-japanese-discriminator (sudachitra-wordpiece, mC4 Japanese) - [SHINOBU](https://dl.ndl.go.jp/info:ndljp/pid/1302683/3)
+
+ This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences.
+
+ The input text is tokenized by [SudachiTra](https://github.com/WorksApplications/SudachiTra) with the WordPiece subword tokenizer.
+ See `tokenizer_config.json` for the setting details.
+
+ ## How to use
+
+ Please install `SudachiTra` in advance.
+
+ ```console
+ $ pip install -U torch transformers sudachitra
+ ```
+
+ You can load the model and the tokenizer via `AutoModel` and `AutoTokenizer`, respectively.
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+ model = AutoModel.from_pretrained("megagonlabs/electra-base-japanese-discriminator")
+ tokenizer = AutoTokenizer.from_pretrained("megagonlabs/electra-base-japanese-discriminator", trust_remote_code=True)
+ model(**tokenizer("まさにオールマイティーな商品だ。", return_tensors="pt")).last_hidden_state
+ tensor([[[-0.0498, -0.0285,  0.1042,  ...,  0.0062, -0.1253,  0.0338],
+          [-0.0686,  0.0071,  0.0087,  ..., -0.0210, -0.1042, -0.0320],
+          [-0.0636,  0.1465,  0.0263,  ...,  0.0309, -0.1841,  0.0182],
+          ...,
+          [-0.1500, -0.0368, -0.0816,  ..., -0.0303, -0.1653,  0.0650],
+          [-0.0457,  0.0770, -0.0183,  ..., -0.0108, -0.1903,  0.0694],
+          [-0.0981, -0.0387,  0.1009,  ..., -0.0150, -0.0702,  0.0455]]],
+        grad_fn=<NativeLayerNormBackward>)
+ ```
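Since `config.json` registers the `ElectraForPreTraining` architecture, the discriminator head can also be queried directly for replaced-token-detection scores. Below is a minimal sketch, assuming only the standard `transformers.ElectraForPreTraining` API:

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

# Sketch: score each token with the ELECTRA discriminator head.
# A positive logit means the token is predicted to be "replaced".
tokenizer = AutoTokenizer.from_pretrained(
    "megagonlabs/electra-base-japanese-discriminator", trust_remote_code=True
)
model = ElectraForPreTraining.from_pretrained(
    "megagonlabs/electra-base-japanese-discriminator"
)

inputs = tokenizer("まさにオールマイティーな商品だ。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length)

for token, score in zip(
    tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), logits[0]
):
    print(f"{token}\t{score.item():+.3f}")
```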
+
+ ## Model architecture
+
+ The model architecture is the same as the original ELECTRA base model: 12 layers, 768-dimensional hidden states, and 12 attention heads.
+
+ ## Training data and libraries
+
+ This model is trained on Japanese texts extracted from [mC4](https://huggingface.co/datasets/mc4), Common Crawl's multilingual web-crawl corpus.
+ We used [Sudachi](https://github.com/WorksApplications/Sudachi) to split the texts into sentences, and also applied a simple rule-based filter to remove nonlinguistic segments of the mC4 multilingual corpus.
+ The extracted texts contain over 600M sentences in total, and we used approximately 200M of them for pretraining.
+
+ We used [NVIDIA's TensorFlow2-based ELECTRA implementation](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/ELECTRA) for pretraining. Pretraining took about 110 hours on a GCP DGX A100 8-GPU instance with Automatic Mixed Precision enabled.
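The extraction and filtering scripts themselves are not distributed with this model. Purely as an illustration, the Japanese portion of mC4 can be streamed from the Hugging Face Hub as sketched below, with a naive 「。」 split standing in for the Sudachi sentence splitter and rule-based filter that were actually used:

```python
from datasets import load_dataset

# Illustration only: stream the Japanese split of mC4 via the Hugging Face
# "mc4" dataset. The real pipeline used Sudachi for sentence splitting plus
# a rule-based filter; a naive "。" split is used here as a stand-in.
mc4_ja = load_dataset("mc4", "ja", split="train", streaming=True)

for i, record in enumerate(mc4_ja):
    sentences = [s + "。" for s in record["text"].split("。") if s.strip()]
    print(sentences[:3])
    if i >= 2:  # inspect a few documents only
        break
```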
+
+ ## Licenses
+
+ The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
+
+ ## Citations
+
+ - mC4
+
+ Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
+ ```
+ @article{2019t5,
+     author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
+     title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
+     journal = {arXiv e-prints},
+     year = {2019},
+     archivePrefix = {arXiv},
+     eprint = {1910.10683},
+ }
+ ```
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "architectures": [
+     "ElectraForPreTraining"
+   ],
+   "model_type": "electra",
+   "model_name": "base",
+   "vocab_size": 30112,
+   "embedding_size": 768,
+   "hidden_size": 768,
+   "num_hidden_layers": 12,
+   "num_attention_heads": 12,
+   "intermediate_size": 3072,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "attention_probs_dropout_prob": 0.1,
+   "max_position_embeddings": 512,
+   "type_vocab_size": 2,
+   "initializer_range": 0.02,
+   "layer_norm_eps": 1e-12,
+   "summary_type": "first",
+   "summary_use_proj": true,
+   "summary_activation": "gelu",
+   "summary_last_dropout": 0.1,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute"
+ }
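The base-size geometry stated in the model card (12 layers, 768-dimensional hidden states, 12 attention heads) can be read back from this file; a minimal sketch using the standard `transformers.AutoConfig` loader:

```python
from transformers import AutoConfig

# Sketch: confirm the base-size geometry declared in config.json.
config = AutoConfig.from_pretrained("megagonlabs/electra-base-japanese-discriminator")
print(config.model_type)               # "electra"
print(config.num_hidden_layers)        # 12
print(config.hidden_size)              # 768
print(config.num_attention_heads)      # 12
print(config.vocab_size)               # 30112
print(config.max_position_embeddings)  # 512
```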
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4412201f4146092b562f92ec0ea5be3ad697cd0c83c4817a5ea411375900cd9
+ size 436755117
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "tokenizer_class": "BertJapaneseTokenizer",
+   "do_lower_case": false,
+   "do_word_tokenize": true,
+   "do_subword_tokenize": true,
+   "word_tokenizer_type": "sudachi",
+   "subword_tokenizer_type": "wordpiece",
+   "model_max_length": 512,
+   "sudachi_kwargs": {"sudachi_split_mode": "A", "sudachi_dict_type": "core"}
+ }
+
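These settings route word segmentation through Sudachi (split mode A, core dictionary) before WordPiece subword splitting. A minimal sketch for inspecting the resulting segmentation, assuming only the `AutoTokenizer` loading already shown in the README:

```python
from transformers import AutoTokenizer

# Sketch: inspect the Sudachi (split mode A, core dict) + WordPiece segmentation.
tokenizer = AutoTokenizer.from_pretrained(
    "megagonlabs/electra-base-japanese-discriminator", trust_remote_code=True
)

tokens = tokenizer.tokenize("まさにオールマイティーな商品だ。")
print(tokens)  # WordPiece continuation pieces carry the "##" prefix

# The special tokens from special_tokens_map.json resolve to vocabulary ids.
print(tokenizer.convert_tokens_to_ids(["[CLS]", "[SEP]", "[MASK]", "[PAD]", "[UNK]"]))
```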
vocab.txt ADDED
The diff for this file is too large to render. See raw diff