mrm8488 committed
Commit
0f4ab11
1 Parent(s): 5ce6317

Create README.md

Files changed (1)
  1. README.md +98 -0
README.md ADDED

---
tags:
- generated_from_trainer
model-index:
- name: PomeranIAn
  results: []
license: apache-2.0
language:
- code
thumbnail: >-
  https://huggingface.co/mrm8488/pomeranian/resolve/main/pomeranian-removebg-preview.png
---

<div style="text-align:center;width:250px;height:250px;">
  <img src="https://huggingface.co/mrm8488/pomeranian/resolve/main/pomeranian-removebg-preview.png" alt="pomeranian logo">
</div>

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# PomeranIAn

**Falcon-7B** fine-tuned on the **CodeAlpaca 20k** instructions dataset with **QLoRA**, using the [PEFT](https://github.com/huggingface/peft) library.

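For reference, here is a minimal sketch of what such a QLoRA setup typically looks like with PEFT and bitsandbytes. The actual training configuration for this model is not published (see the TBA sections below), so the quantization settings, LoRA rank/alpha/dropout, and target modules shown are illustrative assumptions, not the values used to train this model.

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "tiiuae/falcon-7b"

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters (the "LoRA" in QLoRA); "query_key_value" is
# Falcon's fused attention projection, rank/alpha here are common defaults
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```
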
## Model description

The base model is [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b), a causal decoder-only model from TII.

## Dataset

The model was fine-tuned on [CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K), around 20k instruction-following examples for code generation.

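If you want to inspect the training data, it can be loaded with the `datasets` library. The split name below is an assumption; check the column names before relying on specific fields.

```py
from datasets import load_dataset

# Split name assumed to be "train"; see the dataset's Hub page if this fails
ds = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")

print(len(ds))          # number of examples (~20k)
print(ds.column_names)  # inspect the schema before using specific fields
print(ds[0])            # one instruction-following example
```
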
## Intended uses & limitations

TBA

## Training and evaluation data

TBA

### Training hyperparameters

TBA

### Training results

TBA

### Example of usage

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "mrm8488/pomeranian"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")


def generate(
    instruction,
    max_new_tokens=128,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    **kwargs
):
    # The model expects the instruction followed by a "### Solution:" delimiter
    prompt = instruction + "\n### Solution:\n"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    # Return only the text generated after the delimiter
    return output.split("### Solution:")[1].lstrip("\n")


instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```

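If the 7B model does not fit on your GPU in full precision, a common alternative (not part of the original card) is to load it quantized with bitsandbytes. This sketch assumes `bitsandbytes` and `accelerate` are installed.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mrm8488/pomeranian"

# 4-bit NF4 quantization cuts memory to roughly a quarter of fp16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place the layers
)
```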