---
license: mit
language:
- en
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
15
+
16
+ # Dolphin 2.9.1 Phi-3 Kensho 4.5b 🐬
17
+
18
+ Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and with help from the community of Cognitive Computations
19
+
20
+ Discord: https://discord.gg/8fbBeC7ZGx
21
+
22
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
23
+
24
+ Our appreciation for the sponsors of Dolphin 2.9:
25
+ - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xL40Snode
26
+

This model utilizes PEFT layer replication at inference time to duplicate layers and increase the parameter count. This works both with the merged model that ships with this repository and with the attached adapter. Performance is similar with either method, but VRAM use is considerably lower when using the adapter.
This model was initialized from [Unsloth's Mistralfied Phi-3-Instruct-4k](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct). If you choose the adapter method, please attach it to their model.
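
For reference, here is a minimal, untested sketch of loading either variant with Hugging Face Transformers and PEFT. The `MERGED_REPO` and `ADAPTER_REPO` identifiers are placeholders for this repository's actual ID, and the dtype/device settings are illustrative assumptions:

```python
# Minimal sketch (assumptions: repo-ID placeholders, bfloat16 + device_map chosen for illustration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

MERGED_REPO = "<this-repo-id>"                 # placeholder for the merged model in this repo
BASE_REPO = "unsloth/Phi-3-mini-4k-instruct"   # Mistralfied base the adapter was trained against
ADAPTER_REPO = "<this-repo-id>"                # placeholder for the adapter attached to this repo

# Option 1: load the merged model that ships with this repository.
model = AutoModelForCausalLM.from_pretrained(
    MERGED_REPO, torch_dtype=torch.bfloat16, device_map="auto"
)

# Option 2: attach the adapter to the Unsloth base (considerably lower VRAM).
# base = AutoModelForCausalLM.from_pretrained(
#     BASE_REPO, torch_dtype=torch.bfloat16, device_map="auto"
# )
# model = PeftModel.from_pretrained(base, ADAPTER_REPO)

tokenizer = AutoTokenizer.from_pretrained(BASE_REPO)
```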

This model is based on Phi-3-Mini-Instruct-4k and is governed by the MIT license under which Microsoft released Phi-3.

The base model has 4k context, and the qLoRA fine-tuning was done with a 4k sequence length.

Training took 2.5 days on an 8x L40S node provided by Crusoe Cloud.

This model uses the ChatML prompt template format.

Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```
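
As a convenience, here is a small hypothetical helper that assembles the ChatML prompt shown above (the system and user strings are just examples); if this repository's tokenizer ships a ChatML chat template, `tokenizer.apply_chat_template(..., add_generation_prompt=True)` should produce an equivalent string:

```python
# Hypothetical helper (not part of this repo): build the ChatML prompt shown above.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Write a haiku about dolphins.",
)
```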

Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin-Phi-Kensho is mostly uncensored, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed under the MIT license. I grant permission for any use, including commercial. Dolphin was trained on data generated from GPT-4, among other models.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)