bofenghuang committed
Commit e6d2a8e
1 parent: d864050
Files changed (2):
  1. README.md +8 -11
  2. adapter_config.json +1 -1
README.md CHANGED
@@ -14,12 +14,12 @@ inference: false
 ---
 
 <p align="center" width="100%">
-<img src="https://huggingface.co/bofenghuang/vigogne-lora-7b/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
+<img src="https://huggingface.co/bofenghuang/vigogne-instruct-7b/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
 </p>
 
-# Vigogne-LoRA-7b: A French Instruct LLaMA Model
+# Vigogne-instruct-7b: A French Instruction-following LLaMA Model
 
-Vigogne-LoRA-7b is a LLaMA-7B model fine-tuned on the translated [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset to follow the 🇫🇷 French instructions.
+Vigogne-instruct-7b is a LLaMA-7B model fine-tuned to follow the 🇫🇷 French instructions.
 
 For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
 
@@ -33,13 +33,14 @@ This repo only contains the low-rank adapter. In order to access the complete mo
 from peft import PeftModel
 from transformers import LlamaForCausalLM, LlamaTokenizer
 
-base_model_name_or_path = "<name/or/path/to/hf/llama/7b/model>"
-lora_model_name_or_path = "bofenghuang/vigogne-lora-7b"
+base_model_name_or_path = "name/or/path/to/hf/llama/7b/model"
+lora_model_name_or_path = "bofenghuang/vigogne-instruct-7b"
 
-tokenizer = LlamaTokenizer.from_pretrained(base_model_name_or_path)
+tokenizer = LlamaTokenizer.from_pretrained(base_model_name_or_path, padding_side="right", use_fast=False)
 model = LlamaForCausalLM.from_pretrained(
     base_model_name_or_path,
     load_in_8bit=True,
+    torch_dtype=torch.float16,
     device_map="auto",
 )
 model = PeftModel.from_pretrained(model, lora_model_name_or_path)
@@ -47,12 +48,8 @@ model = PeftModel.from_pretrained(model, lora_model_name_or_path)
 
 You can infer this model by using the following Google Colab Notebook.
 
-<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/infer.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
+<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
 
 ## Limitations
 
 Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
-
-## Next Steps
-
-- Add output examples
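
For reference, here is the README's post-commit loading snippet made self-contained. The hunk begins at the `from peft import PeftModel` line, so the `import torch` required by the newly added `torch_dtype=torch.float16` argument presumably appears earlier in the file; it is spelled out below. A minimal sketch under that assumption, with the placeholder base-model path kept as in the README:

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder path, as in the README: point this at an HF-format LLaMA-7B checkpoint.
base_model_name_or_path = "name/or/path/to/hf/llama/7b/model"
lora_model_name_or_path = "bofenghuang/vigogne-instruct-7b"

# The commit pins right-side padding and the slow (SentencePiece) tokenizer.
tokenizer = LlamaTokenizer.from_pretrained(
    base_model_name_or_path, padding_side="right", use_fast=False
)

# 8-bit quantized base weights with fp16 compute, sharded across available devices.
model = LlamaForCausalLM.from_pretrained(
    base_model_name_or_path,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the low-rank adapter from this repo on top of the base model.
model = PeftModel.from_pretrained(model, lora_model_name_or_path)
```

With `load_in_8bit=True` the base weights stay quantized via bitsandbytes while the adapter and activations run in fp16, which is what lets the 7B model fit on a single consumer GPU.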
adapter_config.json CHANGED
@@ -1,5 +1,5 @@
 {
-    "base_model_name_or_path": "decapoda-research/llama-7b-hf",
+    "base_model_name_or_path": "hf_models/llama-7b-hf",
     "bias": "none",
     "enable_lora": null,
     "fan_in_fan_out": false,