buruzaemon committed
Commit 1e07ac4
1 Parent(s): adef6b3

Update README.md

Files changed (1): README.md +7 -3
README.md CHANGED
@@ -22,7 +22,7 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This is an initial example of knowledge distillation where the student loss is entirely the cross-entropy loss \\(L_{CE}\\) against the ground-truth labels, with none of the knowledge-distillation loss \\(L_{KD}\\).
 
 ## Intended uses & limitations
 
@@ -30,20 +30,24 @@ More information needed
 
 ## Training and evaluation data
 
-More information needed
+The training and evaluation data come straight from the `train` and `validation` splits of the clinc_oos dataset, respectively, and are tokenized with the `distilbert-base-uncased` tokenizer.
 
 ## Training procedure
 
+Please see page 224 in Chapter 8, Making Transformers Efficient in Production, of *Natural Language Processing with Transformers* (May 2022).
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+- num_epochs: 5
+- alpha: 1.0
+- temperature: 2.0
 - learning_rate: 2e-05
 - train_batch_size: 48
 - eval_batch_size: 48
 - seed: 8675309
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 5
 
 ### Training results
 
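The model description added above says the student loss is entirely \\(L_{CE}\\) with none of \\(L_{KD}\\), which corresponds to \\(\alpha = 1.0\\) in the usual combined distillation loss \\(L = \alpha L_{CE} + (1 - \alpha) L_{KD}\\). Below is a minimal PyTorch sketch of that combined loss in the spirit of the referenced book chapter; the function name and reduction choices are illustrative assumptions, not the author's training code.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha: float = 1.0, temperature: float = 2.0):
    """Illustrative combined loss L = alpha * L_CE + (1 - alpha) * L_KD.

    With alpha=1.0, as reported for this model, the loss reduces to plain
    cross-entropy against the ground-truth labels and the teacher logits
    are effectively ignored.
    """
    # Hard-label cross-entropy term L_CE
    loss_ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL-divergence term L_KD, with the customary T^2 scaling
    loss_kd = temperature ** 2 * F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    )
    return alpha * loss_ce + (1.0 - alpha) * loss_kd
```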
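The training-data note maps to a standard `datasets`/`transformers` preprocessing step. A hedged sketch follows; the clinc_oos configuration name (`plus`) and the `text` column name are assumptions based on the public dataset, not taken from this repository.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# The "plus" configuration is an assumption; clinc_oos also ships
# "small" and "imbalanced" configurations.
clinc = load_dataset("clinc_oos", "plus")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # clinc_oos stores each utterance in the "text" column
    return tokenizer(batch["text"], truncation=True)

# Tokenize the train and validation splits used for training and evaluation
clinc_enc = clinc.map(tokenize, batched=True, remove_columns=["text"])
train_ds = clinc_enc["train"]
eval_ds = clinc_enc["validation"]
```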
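For completeness, the hyperparameters listed in the diff correspond to fields of `transformers.TrainingArguments`; a sketch of that mapping is below. The `output_dir` is a placeholder, and `alpha`/`temperature` are not standard `TrainingArguments` fields; they belong to the custom distillation loss sketched earlier (for example via a small `TrainingArguments` subclass, as in the book).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilled-clinc-student",  # placeholder name, not from this repo
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=8675309,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,     # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # and epsilon=1e-08
)
# alpha=1.0 and temperature=2.0 are passed to the custom distillation loss,
# e.g. through a TrainingArguments subclass, not to TrainingArguments itself.
```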