---
license: apache-2.0
language:
- en
tags:
- retrieval
- instructions
- reranking
datasets:
- jhu-clsp/FollowIR-train
model-index:
  - name: FollowIR-7B
    results:
      - task:
          type: Reranking
        dataset:
          type: jhu-clsp/news21-instructions
          name: FollowIR News21
          config: en
          split: test
        metrics:
          - type: map
            value: 25.7
          - type: p-MRR
            value: 10.8
      - task:
          type: Reranking
        dataset:
          type: jhu-clsp/robust04-instructions
          name: FollowIR Robust04
          config: en
          split: test
        metrics:
          - type: map
            value: 25.9
          - type: p-MRR
            value: 13.6
      - task:
          type: Reranking
        dataset:
          type: jhu-clsp/core17-instructions
          name: FollowIR Core17
          config: en
          split: test
        metrics:
          - type: map
            value: 20.0
          - type: p-MRR
            value: 16.3
---

# Model Summary

FollowIR-7B is an instruction-tuned language model for reranking in retrieval. It is [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) fine-tuned on retrieval data paired with instructions from the FollowIR dataset. These instructions are human-written and were taken from TREC tracks. FollowIR-7B outperforms all other retrieval models at following instructions; see the paper for more details.

- **Repository:** [orionw/FollowIR](https://github.com/orionw/FollowIR)
- **Paper:** https://arxiv.org/abs/2403.15246 
- **Instruction-Training Dataset:** [jhu-clsp/followir-training-set](https://huggingface.co/datasets/jhu-clsp/FollowIR-training-set)


# Use

Below is an example of computing the relevance score of a query-document pair:
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
)
import torch

# model loading and setup
model_name = "jhu-clsp/FollowIR-7B"
model = AutoModelForCausalLM.from_pretrained(
    model_name
).cuda()
tokenizer = AutoTokenizer.from_pretrained(
    model_name, padding_side="left"
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
token_false_id = tokenizer.get_vocab()["false"]
token_true_id = tokenizer.get_vocab()["true"]
template = """<s> [INST] You are an expert Google searcher, whose job is to determine if the following document is relevant to the query (true/false). Answer using only one word, one of those two choices.

Query: {query}
Document: {text}
Relevant (only output one word, either "true" or "false"): [/INST] """


# Let's define some example queries (with instructions embedded in the query) and a passage
query1 = "What movies were written by James Cameron? A relevant document would describe a movie that was written by James Cameron only and not with anyone else"
query2 = "What movies were directed by James Cameron? A relevant document would describe any movie that was directed by James Cameron"
passages = ["Avatar: The Way of Water is a 2022 American epic science fiction film co-produced and directed by James Cameron, who co-wrote the screenplay with Rick Jaffa and Amanda Silver from a story the trio wrote with Josh Friedman and Shane Salerno. Distributed by 20th Century Studios, it is the sequel to Avatar (2009) and the second installment in the Avatar film series."] * 2

prompts = [
    template.format(query=query, text=text) for (query, text) in zip([query1, query2], passages)
]
tokens = tokenizer(
    prompts,
    padding=True,
    truncation=True,
    return_tensors="pt",
    pad_to_multiple_of=None,
)

# move inputs to the same device as the model (loaded on CUDA above)
for key in tokens:
    tokens[key] = tokens[key].cuda()

# calculate the scores by comparing true and false tokens
batch_scores = model(**tokens).logits[:, -1, :]
true_vector = batch_scores[:, token_true_id]
false_vector = batch_scores[:, token_false_id]
batch_scores = torch.stack([false_vector, true_vector], dim=1)
batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
scores = batch_scores[:, 1].exp().tolist()
print(scores) # [0.0020704232156276703, 0.9999990463256836] first document is not relevant, as expected
```
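
To rerank a list of candidate documents, the per-document probability of "true" from the snippet above can be used directly as the score. Below is a minimal sketch of such a helper; it reuses `model`, `tokenizer`, and `template` from the example above, and the function name `rerank` and its batching are our own additions, not part of the released code.

```python
import torch

def rerank(model, tokenizer, template, query, docs, batch_size=8):
    """Score each document against the query and return (doc, score) pairs
    sorted by the model's probability of answering "true"."""
    token_false_id = tokenizer.get_vocab()["false"]
    token_true_id = tokenizer.get_vocab()["true"]
    scores = []
    for start in range(0, len(docs), batch_size):
        batch_docs = docs[start:start + batch_size]
        prompts = [template.format(query=query, text=d) for d in batch_docs]
        tokens = tokenizer(
            prompts, padding=True, truncation=True, return_tensors="pt"
        ).to(model.device)
        with torch.no_grad():
            logits = model(**tokens).logits[:, -1, :]
        # compare the logits of the "false" and "true" tokens, as above
        pair = torch.stack(
            [logits[:, token_false_id], logits[:, token_true_id]], dim=1
        )
        probs = torch.nn.functional.log_softmax(pair, dim=1)[:, 1].exp()
        scores.extend(probs.tolist())
    return sorted(zip(docs, scores), key=lambda item: item[1], reverse=True)

# Example: rerank the two passages for query2 from the snippet above
# ranked = rerank(model, tokenizer, template, query2, passages)
```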

# Training

We used [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to fine-tune Mistral into FollowIR-7B. We first transformed the training data to fit the LLaMA-Factory format (the "query" + "instruction" go inside the template as the input, the relevance label is the output, and the fixed relevance instruction forms the beginning of the template; a sketch of this conversion follows the script below), then trained with the following script:
```bash
#!/bin/bash
accelerate launch src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path "mistralai/Mistral-7B-Instruct-v0.2" \
    --dataset followIR-train \
    --template mistral \
    --output_dir OUTPUT \
    --finetuning_type lora \
    --lora_target q_proj,v_proj,o_proj,k_proj \
    --overwrite_cache \
    --per_device_train_batch_size 32 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 2 \
    --save_steps 29 \
    --learning_rate 3e-5 \
    --num_train_epochs 8.0 \
    --plot_loss \
    --max_length 2048 \
    --lora_rank 8 \
    --lora_alpha 16 \
    --bf16 
```
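
As a rough illustration of the data transformation mentioned above, the sketch below maps one training example into LLaMA-Factory's alpaca-style `instruction`/`input`/`output` record. The source field names `query`, `instruction`, `document`, and `label` are assumptions for illustration; check the FollowIR-train dataset for its actual schema.

```python
import json

# Fixed prefix, matching the inference template shown earlier
PREFIX = (
    "You are an expert Google searcher, whose job is to determine if the following "
    "document is relevant to the query (true/false). Answer using only one word, "
    "one of those two choices."
)

def to_llama_factory(example):
    """Convert one FollowIR-train example into an alpaca-style record.

    Assumes 'query', 'instruction', 'document', and 'label' fields;
    adjust to the dataset's real column names."""
    return {
        "instruction": PREFIX,
        "input": (
            f"Query: {example['query']} {example['instruction']}\n"
            f"Document: {example['document']}\n"
            'Relevant (only output one word, either "true" or "false"):'
        ),
        "output": "true" if example["label"] else "false",
    }

# records = [to_llama_factory(ex) for ex in followir_train_examples]
# with open("followIR-train.json", "w") as f:
#     json.dump(records, f, indent=2)
```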

# Citation

```bibtex
@misc{weller2024followir,
      title={FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions}, 
      author={Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn Lawrie and Luca Soldaini},
      year={2024},
      eprint={2403.15246},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```