aseker00 committed on
Commit
8b36f5b
1 Parent(s): 34e548d

First version of tokenizer and basic pytorch model.

README.md ADDED
@@ -0,0 +1,54 @@
+ ---
+ language:
+ - he
+ tags:
+ - language model
+ license: apache-2.0
+ datasets:
+ - oscar
+ - wikipedia
+ - twitter
+ ---
+
+ # AlephBERT
+
+ ## Hebrew Language Model
+
+ A state-of-the-art language model for Hebrew, based on the BERT architecture.
+
+ #### How to use
+
+ ```python
+ from transformers import BertModel, BertTokenizerFast
+
+ alephbert_tokenizer = BertTokenizerFast.from_pretrained('onlplab/alephbert-base')
+ alephbert = BertModel.from_pretrained('onlplab/alephbert-base')
+
+ # if not fine-tuning, disable dropout by putting the model in eval mode
+ alephbert.eval()
+ ```
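+
+ A minimal follow-up sketch of encoding a sentence and extracting contextual embeddings with the objects loaded above (the sample sentence and variable names are illustrative):
+
+ ```python
+ import torch
+
+ # tokenize an example Hebrew sentence and run it through the encoder
+ inputs = alephbert_tokenizer('שלום עולם', return_tensors='pt')
+ with torch.no_grad():
+     outputs = alephbert(**inputs)
+
+ # per-token contextual embeddings, shape (1, sequence_length, 768)
+ token_embeddings = outputs.last_hidden_state
+ ```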
+
+ ## Training data
+
+ - OSCAR (10G of text, 20M sentences)
+ - Wikipedia dump (0.6G of text, 3M sentences)
+ - Tweets (7G of text, 70M sentences)
+
+ ## Training procedure
+
+ Trained on a DGX machine (8 V100 GPUs) using the standard Hugging Face training procedure.
+
+ To optimize training time, we split the data into 4 sections based on the maximum number of tokens per sentence (a sketch of this bucketing follows the section):
+
+ 1. num tokens < 32 (70M sentences)
+ 2. 32 <= num tokens < 64 (12M sentences)
+ 3. 64 <= num tokens < 128 (10M sentences)
+ 4. 128 <= num tokens < 512 (70M sentences)
+
+ Each section was trained for 5 epochs with an initial learning rate of 1e-4.
+
+ Total training time was 5 days.
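+
+ A minimal sketch of the length-based bucketing described above, assuming the published tokenizer (the boundaries follow the list; the corpus iterator is a placeholder):
+
+ ```python
+ from transformers import BertTokenizerFast
+
+ tokenizer = BertTokenizerFast.from_pretrained('onlplab/alephbert-base')
+
+ # section boundaries from the list above: [lo, hi) in number of tokens
+ BUCKETS = [(0, 32), (32, 64), (64, 128), (128, 512)]
+
+ def section_index(sentence):
+     # the token count here includes the [CLS]/[SEP] tokens the tokenizer adds;
+     # the counting convention used for the original split is not stated in the card
+     n = len(tokenizer(sentence)['input_ids'])
+     for i, (lo, hi) in enumerate(BUCKETS):
+         if lo <= n < hi:
+             return i
+     return len(BUCKETS) - 1  # longer sentences fall into the last section
+
+ sections = {i: [] for i in range(len(BUCKETS))}
+ for sentence in ['שלום עולם']:  # placeholder for the real corpus iterator
+     sections[section_index(sentence)].append(sentence)
+ ```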
+
+ ## Eval
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.2.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 52000
+ }
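The configuration above describes a standard BERT-base encoder (12 layers, 12 heads, hidden size 768) with a 52,000-token vocabulary. A quick illustrative check, assuming the published model id:

```python
from transformers import BertConfig

# load the published configuration and confirm the BERT-base geometry
config = BertConfig.from_pretrained('onlplab/alephbert-base')
print(config.num_hidden_layers, config.num_attention_heads, config.hidden_size, config.vocab_size)
# expected from the config shown above: 12 12 768 52000
```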
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1aa3553477b7a7d8adf3b903763689c9e88790a57a874462ab8c6302a2d85882
+ size 504210578
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "max_len": 512, "special_tokens_map_file": null, "do_basic_tokenize": true, "never_split": null}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d8a35bf76922964d15f5c793398da780500cd65ef652c7e9b38bf4c2abaca23
+ size 2095
vocab.txt ADDED
The diff for this file is too large to render. See raw diff