ajitrajasekharan committed on
Commit a23c1dc
1 Parent(s): c22c86d

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -25,7 +25,7 @@ This model was pretrained from scratch using a custom vocabulary on the followin
 - Clinical trials corpus
 - and a small subset of Bookcorpus
 
-This pretrained model was used to do NER as is, **with no fine-tuning** as described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
+The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
 
 [Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased.
 
@@ -33,8 +33,12 @@ The ensemble detects 69 entity subtypes (17 broad entity groups)
 
 <img src="https://ajitrajasekharan.github.io/images/1.png" width="600">
 
-**Ensemble model performance**
+### Ensemble model performance
 
 <img src="https://ajitrajasekharan.github.io/images/6.png" width="600">
 
+### License
+
+MIT license
+
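For context, below is a minimal sketch of querying the pretrained masked-language-model head with the Hugging Face `transformers` library; the unsupervised NER approach referenced in the README builds on such mask predictions. The model id shown is an assumption for illustration only and should be replaced with this repository's actual id.

```python
from transformers import pipeline

# NOTE: the model id below is an assumption for illustration only;
# replace it with the actual Hugging Face id of this pretrained model.
fill_mask = pipeline("fill-mask", model="ajitrajasekharan/biomedical")

# The unsupervised NER ensemble inspects the tokens the MLM predicts for a
# masked mention; here we simply print the raw fill-mask predictions.
sentence = "Patients were treated with [MASK] for two weeks."
for prediction in fill_mask(sentence):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")
```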