---
license: apache-2.0
datasets:
- Flmc/DISC-Med-SFT
language:
- zh
tags:
- medical
---

This repository contains DISC-MedLLM in the version that uses Baichuan-13B-Base as the base model.

**Please note that due to the ongoing development of the project, the model weights in this repository may differ from those in our currently deployed demo.**

Check [DISC-MedLLM](https://github.com/FudanDISC/DISC-MedLLM) for more information.

# DISC-MedLLM

[**Demo**](http://med.fudan-disc.com) | [**Tech Report**](https://arxiv.org/abs/2308.14346)

This is the repo of DISC-MedLLM, a medical domain-specific LLM designed for conversational healthcare scenarios, built by the [Fudan-DISC](http://fudan-disc.com) lab.

The following resources have been released:
* The DISC-Med-SFT dataset (excluding the behavioral preference dataset)
* Model [weights](https://huggingface.co/Flmc/DISC-MedLLM) of DISC-MedLLM

You can check this [link](http://medllm.fudan-disc.com) to try our online demo.

## Overview
DISC-MedLLM is a large-scale domain-specific model designed for conversational healthcare scenarios. It can address a variety of needs, including medical consultations and treatment inquiries, offering high-quality health support services. Experimental results show that DISC-MedLLM effectively bridges the gap between general language models and real-world medical consultations. Owing to our goal-oriented strategy and a framework that integrates both the LLM and humans in the loop, built on real-world doctor-patient dialogues and knowledge graphs, DISC-MedLLM has several notable features:

* **Knowledge-intensive and reliable responses**
* **Multi-turn inquiry capability**
* **Alignment with human preferences**

## Dataset

To train DISC-MedLLM, we constructed a high-quality dataset called DISC-Med-SFT, consisting of over 470k distinct examples derived from existing medical datasets. We adopted a goal-oriented strategy, selectively reconstructing the dataset from a few deliberately chosen sources. These sources help the LLM acquire medical domain knowledge, align its behavioral patterns with human preferences, and capture the distribution of real-world online medical dialogues.

| Dataset | Original Source | Size |
|:---|:---|:---|
| Re-constructed AI Doctor-Patient Dialogue | MedDialog | 400k |
| | cMedQA2 | 20k |
| Knowledge Graph QA Pairs | CMeKG | 50k |
| Behavior Preference Dataset | Manual selection | 2k |
| Others | MedMCQA | 8k |
| | MOSS-SFT | 33k |
| | Alpaca-GPT4-zh | 1k |

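If you'd like to browse the released data directly, the sketch below loads it with the Hugging Face `datasets` library. This is a minimal sketch, assuming the `Flmc/DISC-Med-SFT` repository loads with `load_dataset` and has a `train` split; check the dataset card for the actual splits and field names.

```python
# Minimal sketch: inspect the released DISC-Med-SFT data.
# The repo id comes from this card's metadata; the split name and
# field layout are assumptions to verify on the dataset card.
from datasets import load_dataset

ds = load_dataset("Flmc/DISC-Med-SFT", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # one SFT example
```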
## Deploy
The current version of DISC-MedLLM is derived from [Baichuan-13B-Base](https://github.com/baichuan-inc/Baichuan-13B). You can download our model weights directly from the Hugging Face [repository](https://huggingface.co/Flmc/DISC-MedLLM), or obtain them automatically through the demo code.

### Using through Hugging Face Transformers
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation.utils import GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("Flmc/DISC-MedLLM", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
>>> model.generation_config = GenerationConfig.from_pretrained("Flmc/DISC-MedLLM")
>>> messages = []
>>> # "My neck feels very uncomfortable, and I wake up with a headache every day."
>>> messages.append({"role": "user", "content": "我感觉自己颈椎非常不舒服,每天睡醒都会头痛"})
>>> response = model.chat(tokenizer, messages)
>>> print(response)
```

Additionally, since the current version uses Baichuan as the base model, you can refer to its [repo](https://github.com/baichuan-inc/Baichuan-13B) for deploying with int8 or int4 quantized inference. Note, however, that quantized deployment degrades performance.
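For reference, the Baichuan-13B repository documents an in-place `quantize()` method on the loaded model. The following is a minimal sketch of int8 loading under that assumption; `quantize()` comes from Baichuan's custom modeling code (pulled in via `trust_remote_code`), not from core `transformers`, so verify it against the Baichuan-13B repo before relying on it.

```python
# Sketch of int8 quantized loading, assuming the quantize() interface
# documented in the Baichuan-13B repo is available on this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Flmc/DISC-MedLLM", torch_dtype=torch.float16, trust_remote_code=True)
model = model.quantize(8).cuda()  # quantize(4) for int4, with further quality loss
model.generation_config = GenerationConfig.from_pretrained("Flmc/DISC-MedLLM")
```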
## Training
You can fine-tune our model on data that follows the same schema as ours. Our training code is derived from [Firefly](https://github.com/yangjianxin1/Firefly), with a different data schema and dialogue format. We only provide the code for full-parameter fine-tuning:
```shell
deepspeed --num_gpus={num_gpus} ./train/train.py --train_args_file ./train/train_args/sft.json
```
> Please check the setup of `sft.json` before you attempt to start training.
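For orientation, a `--train_args_file` of this kind usually maps onto Hugging Face `TrainingArguments` plus a few data paths. The block below is a purely illustrative sketch; every field name in it is an assumption, so defer to the actual `./train/train_args/sft.json` shipped with the repo.

```json
{
  "_comment": "Illustrative sketch only; all field names are assumptions.",
  "model_name_or_path": "baichuan-inc/Baichuan-13B-Base",
  "train_file": "./data/train.jsonl",
  "output_dir": "./output/disc-medllm-sft",
  "num_train_epochs": 1,
  "per_device_train_batch_size": 8,
  "gradient_accumulation_steps": 2,
  "learning_rate": 1e-5,
  "max_seq_length": 1024,
  "logging_steps": 50,
  "save_steps": 500,
  "fp16": true
}
```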
If you want to fine-tune our model with other training code, please use the following dialogue format.
```shell
<\b><$user_token>content<$assistant_token>content<\s><$user_token>content ...
```
The `user_token` and `assistant_token` we use are `195` and `196`, respectively, the same as in Baichuan-13B-Chat. A minimal encoding sketch is included at the end of this README.

## Declaration
Due to the inherent limitations of language models, we cannot assure the accuracy or reliability of information generated by this model. This model is designed exclusively for research and testing by individuals and academic groups. We urge users to critically assess any information or medical advice obtained through the model's output. Blindly trusting or following such information is strongly discouraged. We disclaim responsibility for any issues, risks, or adverse consequences resulting from use of the model.

## Licenses
The use of the source code in this repository complies with the Apache 2.0 License.

## Citation
```bibtex
@misc{bao2023discmedllm,
      title={DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation},
      author={Zhijie Bao and Wei Chen and Shengze Xiao and Kuang Ren and Jiaao Wu and Cheng Zhong and Jiajie Peng and Xuanjing Huang and Zhongyu Wei},
      year={2023},
      eprint={2308.14346},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
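As referenced in the Training section, here is a minimal sketch (not the authors' training code) of how a conversation could be packed into token IDs using the role tokens `195` and `196`. The helper name `encode_dialogue` and the EOS handling are our assumptions; only the role token IDs come from this README.

```python
# Minimal sketch of the dialogue encoding described in the Training
# section. Role token IDs 195/196 come from this README; the helper
# name and the EOS handling are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True
)

USER_TOKEN_ID = 195       # <$user_token>
ASSISTANT_TOKEN_ID = 196  # <$assistant_token>

def encode_dialogue(turns):
    """Encode [(role, text), ...] into a flat list of token IDs."""
    input_ids = []
    for role, text in turns:
        input_ids.append(USER_TOKEN_ID if role == "user" else ASSISTANT_TOKEN_ID)
        input_ids += tokenizer.encode(text, add_special_tokens=False)
        if role == "assistant":
            input_ids.append(tokenizer.eos_token_id)  # close the assistant turn
    return input_ids

ids = encode_dialogue([
    # "My neck feels very uncomfortable, and I wake up with a headache every day."
    ("user", "我感觉自己颈椎非常不舒服,每天睡醒都会头痛"),
    # "I suggest you have your cervical spine examined at a hospital soon."
    ("assistant", "建议尽快到医院检查颈椎情况。"),
])
print(ids[:10])
```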