
We trained a Chinese version of Shepherd based on Chinese-LLaMA-2-7B, using two 32 GB V100 GPUs for LoRA-based supervised fine-tuning.
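
For reference, a minimal LoRA SFT setup with the Hugging Face `transformers` and `peft` libraries might look like the sketch below. The rank, alpha, dropout, and target modules shown are illustrative assumptions, not the exact values we used, and the base model id is the public `hfl/chinese-llama-2-7b` checkpoint:

```python
# Illustrative LoRA configuration sketch; hyperparameters are assumptions,
# not the exact values used to train this model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("hfl/chinese-llama-2-7b")
lora_config = LoraConfig(
    r=8,                                   # assumed LoRA rank
    lora_alpha=32,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```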

We designed an appropriate prompt template, and the dataset we used is published on the Hugging Face Hub at frankminors123/chinese-shepherd-critic-dataset; please see the dataset page for details.
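
A minimal sketch for loading the dataset with the `datasets` library; the split and column names are assumptions, so check the dataset page for the actual schema:

```python
from datasets import load_dataset

ds = load_dataset("frankminors123/chinese-shepherd-critic-dataset")
print(ds)                  # inspect the available splits and columns
example = ds["train"][0]   # assumed "train" split; see the dataset page
```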

The prompt template used is as follows:

```python
PROMPT_TEMPLATE = (
    "请试着评论下面问题的答案.\n"  # "Please try to critique the answer to the following question."
    "### 问题:\n{question}\n### 答案:\n{answer}\n### 评论:\n"  # 问题 = Question, 答案 = Answer, 评论 = Critique
)
```
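
A minimal inference sketch using this template with `transformers`, assuming the model loads directly from this repository (adjust for your own LoRA or merged checkpoint); the question/answer pair is a made-up example:

```python
# Sketch: fill the template and generate a critique.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "frankminors123/Chinese-Shepherd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = PROMPT_TEMPLATE.format(
    question="1 + 1 等于几?",   # "What is 1 + 1?"
    answer="1 + 1 等于 3.",     # "1 + 1 equals 3." (deliberately wrong)
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens (the critique).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```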