Update README.md
README.md
CHANGED
@@ -101,7 +101,7 @@ The model was pre-trained continuously on a single A10G GPU in an AWS instance f
 <br> There could be two reasons for this:
 
 - There is still room for improving the quality of the data.
--
+- We still do not have enough data for generalization, as Transformer models only perform well with large amounts of pre-training data compared with classical sequential models.
 
 #### Authors:
 