xinchen9 committed on
Commit
5762dfe
1 Parent(s): 8b5f4ae

Upload README.md

README.md CHANGED
@@ -1,3 +1,28 @@
- ---
- license: apache-2.0
- ---
+
+ ### 1. Model Details
+ Introducing xinchen9/Llama3.1_8B_Instruct_CoT, an advanced language model with 8 billion parameters. It was fine-tuned from
+ [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
+
+ The model was fine-tuned on the [CoT_Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection) dataset.
+
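+ For reference, a minimal sketch of loading that dataset with the `datasets` library (the configuration and split here are assumptions; the card does not say which were used, so check the dataset page for the exact loading arguments):
+ ```python
+ from datasets import load_dataset
+
+ # Illustrative only: split="train" is an assumption, not taken from the card.
+ cot = load_dataset("kaist-ai/CoT-Collection", split="train")
+ print(cot[0])
+ ```
+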
+ Training ran for 30,000 steps with a per-device batch size of 8 across 5 GPUs in total.
+ Learning rate: 0.0003
+
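+ Assuming a standard `transformers` Trainer setup (the card does not include the training script, so `output_dir` and `bf16` below are illustrative assumptions), those numbers would map to something like:
+ ```python
+ from transformers import TrainingArguments
+
+ # Hypothetical mapping of the reported hyperparameters onto TrainingArguments.
+ training_args = TrainingArguments(
+     output_dir="llama3.1-8b-instruct-cot",  # assumed output path
+     max_steps=30_000,                       # 30,000 training steps
+     per_device_train_batch_size=8,          # batch size 8 per device
+     learning_rate=3e-4,                     # 0.0003
+     bf16=True,                              # assumed, matching the bfloat16 usage below
+ )
+ # Effective batch size: 8 per device * 5 GPUs = 40 sequences per optimizer step.
+ ```
+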
+ ### 2. How to Use
+ Here are some examples of how to use our model.
+ #### Text Completion
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
+
+ model_name = "xinchen9/Llama3.1_8B_Instruct_CoT"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ # Load the weights in bfloat16 and let device_map="auto" place them on available GPUs.
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
+ model.generation_config = GenerationConfig.from_pretrained(model_name)
+ # Llama models ship without a pad token; reuse the EOS token so batched generation works.
+ model.generation_config.pad_token_id = model.generation_config.eos_token_id
+ ```
+ ### 3. Disclaimer
+ The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.