MaziyarPanahi committed on
Commit
2fb8a7e
1 Parent(s): 5c76360

Create README.md

README.md ADDED
@@ -0,0 +1,89 @@
---
language:
- en
library_name: transformers
tags:
- chat
- qwen
- qwen2
- finetune
- chatml
base_model: dnhkng/RYS-XLarge
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
model_name: calme-2.4-rys-78b
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
license: mit
---

<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:auto; margin-right:auto; display:block;"/>

# MaziyarPanahi/calme-2.4-rys-78b

This model is a fine-tuned version of `dnhkng/RYS-XLarge`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.

## Use Cases

This model is suitable for a wide range of applications, including but not limited to:

- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support

# ⚡ Quantized GGUF

Coming soon!

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Coming soon!

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```

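For reference, the template above can be rendered programmatically. The `to_chatml` helper below is an illustrative sketch only (it is not part of the model or the `transformers` API); in practice, `tokenizer.apply_chat_template` produces this format for you from the model's bundled chat template.

```python
# Illustrative sketch: render role/content messages in the ChatML layout
# shown above. Real code should prefer tokenizer.apply_chat_template.

def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    if add_generation_prompt:
        # Leave the assistant turn open so the model completes it.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
])
print(prompt)
```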
# How to use

```python
# Option 1: use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.4-rys-78b")
pipe(messages)

# Option 2: load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.4-rys-78b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.4-rys-78b")
```

# Ethical Considerations

As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.