---
license: llama2
---
<div align="center">
<h1>
AIMI FMs: A Collection of Foundation Models in Radiology
</h1>
</div>
<p align="center">
📝 <a href="https://arxiv.org/" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/StanfordAIMI/RadLLaMA-7b" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Github</a> • 🪄 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Project</a>
</p>
<div align="center">
</div>
## ✨ Latest News
- [01/20/2023]: Model released in [Hugging Face](https://huggingface.co/StanfordAIMI/RadLLaMA-7b).
## 🎬 Get Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; trust_remote_code=True lets the repo's
# custom tokenizer code (including its chat template) be used.
tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/RadLLaMA-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("StanfordAIMI/RadLLaMA-7b")

# Format a single-turn conversation with the model's chat template.
prompt = "Hi"
conv = [{"from": "human", "value": prompt}]
input_ids = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")

# Generate a reply and decode it back to text.
outputs = model.generate(input_ids)
response = tokenizer.decode(outputs[0])
print(response)
```
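Note that calling `generate` without arguments may stop after only a short continuation unless the model config overrides the default length. A minimal variation is sketched below using standard `transformers` generation arguments; the specific values are illustrative assumptions, not settings recommended for RadLLaMA.

```python
# Illustrative settings (assumptions, not RadLLaMA-recommended values):
# allow a longer reply and sample lightly for more natural text.
outputs = model.generate(
    input_ids,
    max_new_tokens=256,   # cap on newly generated tokens
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```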
## ✏️ Citation
```bibtex
@article{aimifms-2024,
  title={},
  author={},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  url={https://arxiv.org/abs/xxxx.xxxxx},
  year={2024}
}
```