pszemraj committed
Commit 209dd79 (parent: 6fe1cc3)

Update README.md

Files changed (1): README.md (+4 -2)
README.md CHANGED
@@ -48,7 +48,8 @@ model-index:
 
 # flan-t5-large-stacked-samsum1024-WIP3
 
-This model is a fine-tuned version of [pszemraj/flan-t5-large-stacked-samsum1024-WIP2](https://huggingface.co/pszemraj/flan-t5-large-stacked-samsum1024-WIP2) on the `stacked-summaries/stacked-samsum-1024` dataset.
+This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the `stacked-summaries/stacked-samsum-1024` dataset.
+
 It achieves the following results on the evaluation set:
 - Loss: 2.1311
 - Rouge1: 58.1114
@@ -63,7 +64,8 @@ More information needed
 
 ## Intended uses & limitations
 
-More information needed
+- max input/output is 1024 tokens
+- this is mostly a test because `samsum` is not exactly the best dataset for general purpose summarization
 
 ## Training and evaluation data
 
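For reference, a minimal usage sketch based on the updated card: it assumes the checkpoint is published as `pszemraj/flan-t5-large-stacked-samsum1024-WIP3` (inferred from the heading and the WIP2 parent; the repo id is not stated in the diff) and uses the standard `transformers` summarization pipeline while respecting the 1024-token limit noted under "Intended uses & limitations".

```python
# Minimal sketch, not part of the commit; the repo id below is an assumption.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/flan-t5-large-stacked-samsum1024-WIP3",  # assumed repo id
)

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you then!"
)

# The card caps input/output at 1024 tokens, so truncate long inputs
# and keep the generated summary well under that limit.
result = summarizer(dialogue, max_length=128, min_length=8, truncation=True)
print(result[0]["summary_text"])
```

Dialogues longer than the 1024-token window are simply truncated by the tokenizer here; anything longer would need to be chunked (or stacked, per the dataset's format) before summarization.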