---
license: apache-2.0
base_model: docketanalyzer/docket-lm-xs
tags:
  - generated_from_trainer
metrics:
  - f1
model-index:
  - name: label-complaint
    results: []
---

# label-complaint

This model is a fine-tuned version of [docketanalyzer/docket-lm-xs](https://huggingface.co/docketanalyzer/docket-lm-xs) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0230
- F1: 0.9915
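
For a quick check of the checkpoint, it can be loaded with the Transformers `text-classification` pipeline. The sketch below is illustrative only: the hub id, the label names, and the example docket text are assumptions, not stated in this card.

```python
# Minimal inference sketch (assumptions noted above).
from transformers import pipeline

# Assumed hub id for this fine-tuned checkpoint; replace with the actual repo.
classifier = pipeline("text-classification", model="nadahlberg/label-complaint")

# Illustrative docket-entry text; the returned label names depend on the model config.
print(classifier("COMPLAINT against All Defendants, filed by Plaintiff."))
# -> [{'label': ..., 'score': ...}]
```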

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
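
The `generated_from_trainer` tag and the framework versions below indicate the Hugging Face `Trainer` was used. As a reference, here is a minimal sketch of these hyperparameters expressed as `TrainingArguments`; the output directory is an assumption, and dataset/model setup is omitted.

```python
from transformers import TrainingArguments

# Sketch only: values mirror the list above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="label-complaint",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=1,
    # The optimizer above (Adam, betas=(0.9, 0.999), eps=1e-8) matches the
    # Trainer's default AdamW settings, so no explicit optimizer arg is set.
)
```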

### Training results

| Training Loss | Epoch  | Step | Validation Loss | F1     |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0112        | 0.0418 | 300  | 0.0576          | 0.9771 |
| 0.0551        | 0.0836 | 600  | 0.0362          | 0.9857 |
| 0.2331        | 0.1254 | 900  | 0.0354          | 0.9839 |
| 0.0009        | 0.1672 | 1200 | 0.0396          | 0.9868 |
| 0.005         | 0.2090 | 1500 | 0.0526          | 0.9867 |
| 0.0948        | 0.2508 | 1800 | 0.0434          | 0.9865 |
| 0.016         | 0.2926 | 2100 | 0.0297          | 0.9876 |
| 0.0047        | 0.3344 | 2400 | 0.0394          | 0.9882 |
| 0.0007        | 0.3763 | 2700 | 0.0422          | 0.9864 |
| 0.0037        | 0.4181 | 3000 | 0.0248          | 0.9910 |
| 0.002         | 0.4599 | 3300 | 0.0271          | 0.9909 |
| 0.0005        | 0.5017 | 3600 | 0.0283          | 0.9902 |
| 0.0155        | 0.5435 | 3900 | 0.0227          | 0.9910 |
| 0.0017        | 0.5853 | 4200 | 0.0290          | 0.9907 |
| 0.0002        | 0.6271 | 4500 | 0.0264          | 0.9899 |
| 0.0051        | 0.6689 | 4800 | 0.0294          | 0.9907 |
| 0.0152        | 0.7107 | 5100 | 0.0253          | 0.9903 |
| 0.0096        | 0.7525 | 5400 | 0.0232          | 0.9909 |
| 0.1812        | 0.7943 | 5700 | 0.0295          | 0.9915 |
| 0.0007        | 0.8361 | 6000 | 0.0235          | 0.9912 |
| 0.0081        | 0.8779 | 6300 | 0.0247          | 0.9910 |
| 0.0684        | 0.9197 | 6600 | 0.0236          | 0.9905 |
| 0.0003        | 0.9615 | 6900 | 0.0230          | 0.9914 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.14.4
- Tokenizers 0.19.1