pascalrai committed
Commit edbb3fc
1 Parent(s): 408b77f

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -101,7 +101,7 @@ The model was pre-trained continuously on a single A10G GPU in an AWS instance f
 <br> There could be two reasons for this:

 - There is still room for improving the quality of the data.
-- It's seen that we still do not have enough data for generalization as Transformer models only perform well with large amounts of pre-trained data compared with Classical Sequential Models.
+- We still do not have enough data for generalization as Transformer models only perform well with large amounts of pre-trained data compared with Classical Sequential Models.

 #### Authors:
