---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-ontonotesv5-en
  results: []
---

# xlm-roberta-base-ontonotesv5-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5/viewer/english_v4/train) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1381
- Precision: 0.8637
- Recall: 0.8785
- F1: 0.8710
- Accuracy: 0.9804

A minimal inference example and sketches of the training configuration and metric computation are provided at the end of this card.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0787        | 1.0   | 2350  | 0.0831          | 0.8119    | 0.8611 | 0.8358 | 0.9765   |
| 0.0565        | 2.0   | 4700  | 0.0756          | 0.8513    | 0.8708 | 0.8609 | 0.9794   |
| 0.0415        | 3.0   | 7050  | 0.0763          | 0.8530    | 0.8739 | 0.8633 | 0.9801   |
| 0.0347        | 4.0   | 9400  | 0.0820          | 0.8558    | 0.8810 | 0.8682 | 0.9804   |
| 0.0252        | 5.0   | 11750 | 0.0913          | 0.8683    | 0.8607 | 0.8645 | 0.9791   |
| 0.0201        | 6.0   | 14100 | 0.0923          | 0.8600    | 0.8763 | 0.8681 | 0.9804   |
| 0.0172        | 7.0   | 16450 | 0.1023          | 0.8617    | 0.8788 | 0.8702 | 0.9800   |
| 0.0118        | 8.0   | 18800 | 0.1083          | 0.8579    | 0.8756 | 0.8667 | 0.9799   |
| 0.0101        | 9.0   | 21150 | 0.1162          | 0.8583    | 0.8766 | 0.8674 | 0.9803   |
| 0.0090        | 10.0  | 23500 | 0.1189          | 0.8623    | 0.8772 | 0.8697 | 0.9804   |
| 0.0074        | 11.0  | 25850 | 0.1259          | 0.8642    | 0.8757 | 0.8699 | 0.9804   |
| 0.0053        | 12.0  | 28200 | 0.1303          | 0.8601    | 0.8765 | 0.8682 | 0.9800   |
| 0.0046        | 13.0  | 30550 | 0.1345          | 0.8619    | 0.8755 | 0.8686 | 0.9799   |
| 0.0040        | 14.0  | 32900 | 0.1381          | 0.8637    | 0.8785 | 0.8710 | 0.9804   |
| 0.0029        | 15.0  | 35250 | 0.1405          | 0.8616    | 0.8788 | 0.8701 | 0.9803   |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2

## Citation

If you use the datasets or models in this repository, please cite the following paper:

```bibtex
@misc{sartipi2023exploring,
  doi       = {10.48550/arXiv.2302.09611},
  url       = {https://arxiv.org/abs/2302.09611},
  author    = {Sartipi, Amir and Fatemi, Afsaneh},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
  publisher = {arXiv},
  year      = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
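## How to use

The card does not yet spell out inference steps, so the snippet below is a minimal sketch of loading a token-classification checkpoint like this one through the `transformers` pipeline API. The repository path in `model=` is a placeholder; substitute this model's actual Hub id.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    # Placeholder id; replace with the actual Hub path of this repository.
    model="xlm-roberta-base-ontonotesv5-en",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)

# OntoNotes v5 English labels include PERSON, GPE, ORG, DATE, etc.
print(ner("Barack Obama visited Tehran University in 2015."))
```

The pipeline returns a list of dicts with `entity_group`, `score`, `word`, `start`, and `end` for each detected span.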
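## Training configuration sketch

For reference, the hyperparameters listed under "Training procedure" map onto Hugging Face `TrainingArguments` roughly as follows. This is a reconstruction for readability, not the original training script; data preprocessing and the `Trainer` call are omitted, and `output_dir` and the per-epoch evaluation are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-ontonotesv5-en",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the results table reports one eval per epoch
)
```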
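## Metric computation sketch

The reported precision, recall, and F1 are span-level scores of the kind conventionally computed with `seqeval` in token-classification recipes; whether this repository used `seqeval` specifically is an assumption. A toy example with OntoNotes-style IOB2 tags:

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [["B-PERSON", "I-PERSON", "O", "B-GPE", "O"]]  # gold tags
y_pred = [["B-PERSON", "I-PERSON", "O", "O", "O"]]      # predictions missing the GPE span

print(precision_score(y_true, y_pred))  # correct predicted spans / all predicted spans
print(recall_score(y_true, y_pred))     # correct predicted spans / all gold spans
print(f1_score(y_true, y_pred))         # harmonic mean of the two
print(accuracy_score(y_true, y_pred))   # token-level accuracy
```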