saribasmetehan committed on
Commit
eaa6e6d
1 Parent(s): 20a2369

Update README.md

Files changed (1):
  1. README.md +55 -11
README.md CHANGED
@@ -23,6 +23,8 @@ model-index:
   - name: F1
     type: f1
     value: 0.7821495486288537
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -37,16 +39,58 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
 ## Training procedure
 
 ### Training hyperparameters
@@ -75,4 +119,4 @@ The following hyperparameters were used during training:
 - Transformers 4.41.2
 - Pytorch 2.3.0+cu121
 - Datasets 2.19.2
-- Tokenizers 0.19.1
 
   - name: F1
     type: f1
     value: 0.7821495486288537
+language:
+- tr
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
 
 ## Model description
 
+This model is a fine-tuned version of dbmdz/bert-base-turkish-uncased on the turkish-wiki_ner dataset. The training split contains 18,967 samples and the validation split 1,000 samples, both derived from Wikipedia data.
+
+For more detailed information, see the dataset page: https://huggingface.co/datasets/turkish-nlp-suite/turkish-wikiNER
+
+Labels:
+
+<ul>
+  <li>CARDINAL</li>
+  <li>DATE</li>
+  <li>EVENT</li>
+  <li>FAC</li>
+  <li>GPE</li>
+  <li>LANGUAGE</li>
+  <li>LAW</li>
+  <li>LOC</li>
+  <li>MONEY</li>
+  <li>NORP</li>
+  <li>ORDINAL</li>
+  <li>ORG</li>
+  <li>PERCENT</li>
+  <li>PERSON</li>
+  <li>PRODUCT</li>
+  <li>QUANTITY</li>
+  <li>TIME</li>
+  <li>TITLE</li>
+  <li>WORK_OF_ART</li>
+</ul>
+
+Fine-tuning process: https://github.com/saribasmetehan/bert-base-turkish-uncased-ner
+
+## Example
+```python
+import pandas as pd
+from transformers import pipeline
+
+text = "Bu toplam sıfır ise, Newton'ın birinci yasası cismin hareket durumunun değişmeyeceğini söyler."
+model_id = "saribasmetehan/bert-base-turkish-uncased-ner"
+ner = pipeline("ner", model=model_id)
+preds = ner(text, aggregation_strategy="simple")
+
+pd.DataFrame(preds)
+```
+
+## Load model directly
+```python
+from transformers import AutoModelForTokenClassification, AutoTokenizer
+
+model_name = "saribasmetehan/bert-base-turkish-uncased-ner"
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModelForTokenClassification.from_pretrained(model_name)
+```
 ## Training procedure
 
 ### Training hyperparameters
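For readers mapping pipeline output back to the label list in this card: token-classification models typically use BIO tags, so the 19 entity types expand to 39 class labels. A minimal sketch under that assumption — the authoritative `id2label` mapping lives in this model's `config.json`:

```python
# Sketch: expand the entity types listed in this card into BIO-style labels.
# Assumption: the model uses BIO tagging; consult the model's config.json
# (id2label) for the real label inventory and ordering.
entity_types = [
    "CARDINAL", "DATE", "EVENT", "FAC", "GPE", "LANGUAGE", "LAW",
    "LOC", "MONEY", "NORP", "ORDINAL", "ORG", "PERCENT", "PERSON",
    "PRODUCT", "QUANTITY", "TIME", "TITLE", "WORK_OF_ART",
]

# "O" for non-entity tokens, plus B-/I- prefixed tags for each type.
labels = ["O"] + [f"{prefix}-{t}" for t in entity_types for prefix in ("B", "I")]
id2label = dict(enumerate(labels))

print(len(labels))  # 19 types * 2 prefixes + "O" = 39
```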
 
119
  - Transformers 4.41.2
120
  - Pytorch 2.3.0+cu121
121
  - Datasets 2.19.2
122
+ - Tokenizers 0.19.1
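The pipeline example above hands `preds` straight to pandas; with `aggregation_strategy="simple"`, each prediction is a dict with `entity_group`, `score`, `word`, `start`, and `end` keys. A small post-processing sketch using hypothetical output (not actual predictions from this model):

```python
# Filter grouped NER predictions by confidence. The sample below is
# hypothetical output for illustration, not actual predictions from this model.
def extract_entities(preds, min_score=0.5):
    """Return (entity_group, word) pairs whose score clears min_score."""
    return [(p["entity_group"], p["word"]) for p in preds if p["score"] >= min_score]

sample_preds = [
    {"entity_group": "PERSON", "score": 0.99, "word": "Newton", "start": 21, "end": 27},
    {"entity_group": "LAW", "score": 0.42, "word": "birinci yasası", "start": 31, "end": 45},
]

print(extract_entities(sample_preds))  # [('PERSON', 'Newton')]
```

Raising or lowering `min_score` trades recall for precision when the downstream task is sensitive to false positives.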