buruzaemon committed on
Commit
4445a57
1 Parent(s): 4eabcaf

update model card README.md

Files changed (1)
  1. README.md +3 -8
README.md CHANGED
@@ -34,7 +34,7 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-This is an initial example of knowledge distillation in which the student loss is entirely the cross-entropy loss \\(L_{CE}\\) on the ground-truth labels, with none of the knowledge-distillation loss \\(L_{KD}\\).
+More information needed
 
 ## Intended uses & limitations
 
@@ -42,25 +42,20 @@ More information needed
 
 ## Training and evaluation data
 
-The training and evaluation data come straight from the `train` and `validation` splits of the clinc_oos dataset, respectively, tokenized with the `distilbert-base-uncased` tokenizer.
+More information needed
 
 ## Training procedure
 
-Please see page 224 in Chapter 8, "Making Transformers Efficient in Production", of Natural Language Processing with Transformers (May 2022).
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- num_epochs: 5
-- alpha: 1.0
-- temperature: 2.0
 - learning_rate: 2e-05
 - train_batch_size: 48
 - eval_batch_size: 48
 - seed: 8675309
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-
+- num_epochs: 5
 
 ### Training results
 
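The description removed in this commit refers to the standard distillation objective \\(L = \alpha\,L_{CE} + (1 - \alpha)\,L_{KD}\\). Below is a minimal sketch of that loss, assuming the usual Hinton-style formulation with temperature-scaled KL divergence; the function name and signature are illustrative, not taken from the training code:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha=1.0, temperature=2.0):
    # Hard-label term L_CE: cross-entropy against the ground-truth labels.
    loss_ce = F.cross_entropy(student_logits, labels)
    # Soft-label term L_KD: KL divergence between temperature-softened
    # student and teacher distributions, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    loss_kd = temperature ** 2 * F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    )
    # With alpha=1.0 and temperature=2.0 (the values removed from the
    # card), the KD term is weighted to zero: pure cross-entropy.
    return alpha * loss_ce + (1.0 - alpha) * loss_kd
```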
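The removed data note points at the `train` and `validation` splits of clinc_oos tokenized with `distilbert-base-uncased`. A sketch of that preprocessing follows; the `plus` dataset configuration is an assumption, as the card does not name one:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# "plus" is an assumed config name; clinc_oos also ships "small" and
# "imbalanced" variants.
clinc = load_dataset("clinc_oos", "plus")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Inputs live in the "text" column; intent labels in "intent".
    return tokenizer(batch["text"], truncation=True)

clinc_enc = clinc.map(tokenize, batched=True, remove_columns=["text"])
train_ds, eval_ds = clinc_enc["train"], clinc_enc["validation"]
```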
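The hyperparameter list maps onto `transformers.TrainingArguments` roughly as sketched below; `output_dir` is a placeholder, and the Adam settings shown in the card are the `Trainer` defaults rather than explicit arguments:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-clinc",  # placeholder name
    num_train_epochs=5,
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=8675309,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default
    # optimizer, so no explicit optimizer arguments are needed.
)
```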