---
license: cc-by-nc-4.0
datasets:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
language:
- en
pipeline_tag: text2text-generation
tags:
- medical
---
# Model Card for Asclepius-7B

<!-- Provide a quick summary of what the model is/does. -->

This is the official model checkpoint for Asclepius-7B [(arXiv)](https://arxiv.org/abs/2309.00237).
This model is the first publicly shareable clinical LLM, trained with synthetic data.

## UPDATE
### 2024.01.10
- Asclepius-R, the variant of Asclepius trained on MIMIC-III discharge summaries, is now available on [PhysioNet](https://physionet.org/content/asclepius-r/1.0.0/)!
## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->



- **Model type:** Clinical LLM (Large Language Model)
- **Language(s) (NLP):** English
- **License:** CC-BY-NC-SA 4.0
- **Finetuned from model [optional]:** LLaMA-7B

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/starmpcc/Asclepius
- **Paper:** https://arxiv.org/abs/2309.00237
- **Data:** https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model can perform the following 8 clinical NLP tasks on clinical notes (illustrative example instructions follow this list):
- Named Entity Recognition
- Abbreviation Expansion
- Relation Extraction
- Temporal Information Extraction
- Coreference Resolution
- Paraphrasing
- Summarization
- Question Answering
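
The instructions below are illustrative only; they are not taken from the paper or the official repository, and you can phrase instructions in your own words.

```python
# Illustrative instructions for a few of the supported tasks (hypothetical
# examples, not from the paper). Each string goes into the {question} slot of
# the prompt template shown in the usage section below.
example_instructions = {
    "Named Entity Recognition": "List all medications mentioned in the discharge summary.",
    "Abbreviation Expansion": "Expand the abbreviation 'CABG' as it is used in this note.",
    "Summarization": "Summarize the patient's hospital course in two sentences.",
    "Question Answering": "What was the patient's discharge diagnosis?",
}
```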

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

ONLY USE THIS MODEL FOR RESEARCH PURPOSES!

## How to Get Started with the Model

```python
prompt = """You are an intelligent clinical languge model.
Below is a snippet of patient's discharge summary and a following instruction from healthcare professional.
Write a response that appropriately completes the instruction.
The response should provide the accurate answer to the instruction, while being concise.

[Discharge Summary Begin]
{note}
[Discharge Summary End]

[Instruction Begin]
{question}
[Instruction End] 
"""

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-7B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("starmpcc/Asclepius-7B")

note = "This is a sample note"
question = "What is the diagnosis?"

model_input = prompt.format(note=note, question=question)
input_ids = tokenizer(model_input, return_tensors="pt").input_ids
output = model.generate(input_ids)
print(tokenizer.decode(output[0]))
```
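
Note that `model.generate` uses a short default generation length, so answers may be cut off. The snippet below is a minimal follow-up sketch, continuing from the code above with assumed settings (not from the official repository), that allows longer answers and prints only the newly generated text:

```python
# Continuation of the snippet above; max_new_tokens=256 and greedy decoding are
# assumed settings, not values prescribed by the authors.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

input_ids = tokenizer(model_input, return_tensors="pt").input_ids.to(device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the tokens generated after the prompt.
answer = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(answer)
```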

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes

### Training Procedure 

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- Initial training was conducted using causal language modeling on synthetic clinical notes.
- The model was then fine-tuned on clinical instruction-response pairs.
- For a comprehensive overview of our methods, please refer to our [paper](https://arxiv.org/abs/2309.00237); a minimal data-preparation sketch follows this list.
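
The sketch below shows one way the two stages could be assembled from the released dataset. The column names (`note`, `question`, `answer`) and the `train` split are assumptions made here for illustration; check the dataset card for the exact schema.

```python
# A minimal sketch of the two-stage data preparation (assumed column names).
from datasets import load_dataset

ds = load_dataset("starmpcc/Asclepius-Synthetic-Clinical-Notes", split="train")

# Stage 1: causal language modeling directly on the synthetic notes.
pretraining_corpus = [row["note"] for row in ds]

# Stage 2: instruction fine-tuning on (note, instruction) -> response pairs.
# In practice, render each pair with the full prompt template from the
# "How to Get Started with the Model" section before tokenization.
def build_sft_example(row):
    return {
        "note": row["note"],
        "instruction": row["question"],
        "response": row["answer"],
    }

sft_examples = [build_sft_example(row) for row in ds]
```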

#### Training Hyperparameters

- We followed the configuration used in [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca); reference values are listed below.
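
For reference, the key values from the Alpaca 7B fine-tuning recipe are as follows. They are taken from the Alpaca repository and are not confirmed here as the exact Asclepius settings.

```python
# Reference values from the Stanford Alpaca 7B recipe (pointer only; the exact
# Asclepius settings may differ).
alpaca_reference_config = {
    "learning_rate": 2e-5,
    "num_train_epochs": 3,
    "global_batch_size": 128,
    "max_seq_length": 512,
    "weight_decay": 0.0,
    "warmup_ratio": 0.03,
    "lr_scheduler_type": "cosine",
}
```
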
#### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- Pre-Training (1 epoch): 1h 33m on 8x A100 80GB GPUs
- Instruction Fine-Tuning (3 epochs): 7h 26m on 8x A100 80GB GPUs



## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@misc{kweon2023publicly,
    title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes},
    author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi},
    year={2023},
    eprint={2309.00237},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```