---
datasets:
- wikipedia
language:
- id
- en
pipeline_tag: text-generation
---

# THIS IS THE 5th PROTOTYPE OF MERAK-7B-v2!

Merak-7B is a Large Language Model for the Indonesian language.

This model is based on Meta's Llama-2-7B-Chat-HF and fine-tuned on a selection of Indonesian Wikipedia articles that I cleaned beforehand.

Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
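
For readers curious what that QLoRA setup looks like in code, below is a minimal sketch of 4-bit NF4 quantization plus LoRA adapters on the Llama-2 base model. The LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are illustrative assumptions, not the values used to train Merak-7B:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Requires access to the gated Llama-2 weights on Hugging Face
base_id = "meta-llama/Llama-2-7b-chat-hf"

# 4-bit NF4 quantization with double quantization, as in the QLoRA paper
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_quant_type="nf4",
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained(base_id,
                                             quantization_config=bnb_config,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA hyperparameters; NOT the values used for Merak-7B
lora_config = LoraConfig(r=16,
                         lora_alpha=32,
                         lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trained
```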

Licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0), Merak-7B empowers AI enthusiasts and researchers alike.

Big thanks to all my friends and the communities that helped build our first model. Feel free to ask me about the model, and please share the news on your social media.

## HOW TO USE
### Installation
Please make sure you have the CUDA driver, Python 3.10, and PyTorch 2 installed on your system. Then install these libraries in a terminal:
```
pip install bitsandbytes==0.39.1
pip install transformers==4.31.0
pip install git+https://github.com/huggingface/peft.git
pip install accelerate==0.20.3
pip install einops==0.6.1 scipy sentencepiece datasets
```
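Before loading the model, it may help to confirm that PyTorch can see your GPU. This quick sanity check is not part of the original instructions, just a common precaution:

```
import torch

# The examples below assume a CUDA-capable GPU is available
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # should print True
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```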
### Using BitsAndBytes, it runs on a GPU with >= 10 GB of VRAM
[![Open in Google Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Cl1tO1QIYNWHR8K-nQe6xIaUvaLwxXCq?usp=sharing)
```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer

model_id = "Ichsan2895/Merak-7B-v2"

# 4-bit NF4 quantization so the model fits in about 10 GB of VRAM
BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.bfloat16,
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_quant_type="nf4",
                                )

model = AutoModelForCausalLM.from_pretrained(model_id,
                                             quantization_config=BNB_CONFIG,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
    # Wrap the question in the prompt template used for fine-tuning
    prompt = f"<|prompt|>{question}<|answer|>".strip()

    encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
    with torch.inference_mode():
        outputs = model.generate(input_ids=encoding.input_ids,
                                 attention_mask=encoding.attention_mask,
                                 eos_token_id=tokenizer.pad_token_id,
                                 do_sample=False,  # deterministic beam search; temperature is ignored here
                                 num_beams=2,
                                 temperature=0.3,
                                 repetition_penalty=1.2,
                                 max_length=200)

    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    # Return only the text after the <|answer|> marker
    assistant_start = "<|answer|>"
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
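
Note that both examples wrap the question as `<|prompt|>{question}<|answer|>`. This appears to be the prompt style introduced in v2 (see the CHANGELOG below), so keeping those markers intact is likely important for answer quality.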

### From my experience, for better answers please don't use BitsAndBytes 4-bit quantization, but it uses more VRAM
[![Open in Google Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1uUaeI4-Zzuk0m9Xjg1Dw45YZs402EgWz?usp=sharing)
```
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

model_id = "Ichsan2895/Merak-7B-v2"

# Full-precision load: better answers, but needs considerably more VRAM
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
    # Wrap the question in the prompt template used for fine-tuning
    prompt = f"<|prompt|>{question}<|answer|>".strip()

    encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
    with torch.inference_mode():
        outputs = model.generate(input_ids=encoding.input_ids,
                                 attention_mask=encoding.attention_mask,
                                 eos_token_id=tokenizer.pad_token_id,
                                 do_sample=False,  # deterministic beam search; temperature is ignored here
                                 num_beams=2,
                                 temperature=0.3,
                                 repetition_penalty=1.2,
                                 max_length=200)

    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    # Return only the text after the <|answer|> marker
    assistant_start = "<|answer|>"
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
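
If the full-precision model doesn't fit but you want to avoid 4-bit quantization, half precision is a possible middle ground (roughly 14 GB of weights for a 7B model). This variant of the load call is my suggestion, not one of the author's tested configurations:

```
import torch
from transformers import AutoModelForCausalLM

# Assumption: fp16 as a quality/VRAM compromise between the two examples above
model = AutoModelForCausalLM.from_pretrained("Ichsan2895/Merak-7B-v2",
                                             torch_dtype=torch.float16,
                                             device_map="auto",
                                             trust_remote_code=True)
```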

## CHANGELOG
**v1** = The first Merak-7B model. We selected and cleaned about 200k Indonesian Wikipedia articles.
**v2** = A fine-tuned version of the first Merak-7B model. We fine-tuned it again on the same Indonesian Wikipedia articles, changing only the prompt style of the questions.

## CITATION
```
@article{touvron2023llama,
  title   = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author  = {Touvron, Hugo and others},
  journal = {arXiv preprint arXiv:2307.09288},
  year    = {2023}
}

@online{wikidump,
  author = "Wikimedia Foundation",
  title  = "Wikimedia Downloads",
  url    = "https://dumps.wikimedia.org"
}

@inproceedings{wolf-etal-2020-transformers,
  title     = "Transformers: State-of-the-Art Natural Language Processing",
  author    = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
  month     = oct,
  year      = "2020",
  address   = "Online",
  publisher = "Association for Computational Linguistics",
  url       = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
  pages     = "38--45"
}

@article{dettmers2023qlora,
  title   = {QLoRA: Efficient Finetuning of Quantized LLMs},
  author  = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal = {arXiv preprint arXiv:2305.14314},
  year    = {2023}
}
```