
---
license: apache-2.0
---

1. Model Details

Introducing xinchen9/Llama3.1_8B_Instruct_CoT, an 8-billion-parameter language model fine-tuned from meta-llama/Meta-Llama-3.1-8B-Instruct.

The model was fine-tuned on the CoT_Collection dataset.
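As a sketch, the dataset can be loaded with the datasets library; the exact Hub id for CoT_Collection is an assumption here and should be checked against the Hub:

from datasets import load_dataset

# Hub id is an assumption; substitute the actual CoT_Collection repository id if it differs
dataset = load_dataset("kaist-ai/CoT-Collection")
print(dataset)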

Training ran for 30,000 steps with a per-device batch size of 8 across 5 GPUs, at a learning rate of 0.0003.
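For illustration only, a hypothetical Hugging Face TrainingArguments configuration matching these hyperparameters could look like the sketch below; the output directory and bf16 setting are assumptions (not from the original card), and the 5 GPUs would be supplied by the launcher (e.g., torchrun --nproc_per_node=5):

from transformers import TrainingArguments

# Hypothetical sketch of the reported hyperparameters, not the actual training script
training_args = TrainingArguments(
    output_dir="./llama3.1-8b-instruct-cot",  # assumed output path
    max_steps=30_000,                         # 30,000 training steps
    per_device_train_batch_size=8,            # batch size 8 on each device
    learning_rate=3e-4,                       # learning rate 0.0003
    bf16=True,                                # assumed, matching the bfloat16 inference setup
)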

2. How to Use

Here are some examples of how to use the model.

Text Completion

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "xinchen9/Llama3.1_8B_Instruct_CoT"

# Load the tokenizer and model; bfloat16 weights and device_map="auto" keep memory use manageable
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Reuse the generation settings shipped with the model and pad with the EOS token
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
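The snippet above only loads the model. A minimal completion call might look like the following; the prompt and generation settings are illustrative, not prescribed by the model card:

prompt = "Question: If a train travels 60 km in 45 minutes, what is its average speed? Let's think step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a continuation; max_new_tokens is an example value
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))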

3. Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.