Steelskull committed
Commit 9fb44f1
1 Parent(s): a1c9352

Update README.md

Files changed (1):
  1. README.md +138 -198
README.md CHANGED
@@ -1,200 +1,140 @@
  ---
- library_name: transformers
- tags:
- - llama-factory
  ---
-
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
 
  ---
+ license: llama3
+ datasets:
+ - TheSkullery/Aether-Lite-V1.2
  ---
+ <!DOCTYPE html>
+ <style>
+ body {
+ font-family: 'Quicksand', sans-serif;
+ background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%);
+ color: #D8DEE9;
+ margin: 0;
+ padding: 0;
+ font-size: 16px;
+ }
+
+ .container {
+ width: 80%;
+ max-width: 1080px;
+ margin: 20px auto;
+ background-color: rgba(255, 255, 255, 0.02);
+ padding: 20px;
+ border-radius: 12px;
+ box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2);
+ backdrop-filter: blur(10px);
+ border: 1px solid rgba(255, 255, 255, 0.1);
+ }
+
+ .header h1 {
+ font-size: 28px;
+ color: #ECEFF4;
+ margin: 0 0 20px 0;
+ text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
+ }
+
+ .update-section {
+ margin-top: 30px;
+ }
+
+ .update-section h2 {
+ font-size: 24px;
+ color: #88C0D0;
+ }
+
+ .update-section p {
+ font-size: 16px;
+ line-height: 1.6;
+ color: #ECEFF4;
+ }
+
+ .info img {
+ width: 100%;
+ border-radius: 10px;
+ margin-bottom: 15px;
+ }
+
+ a {
+ color: #88C0D0;
+ text-decoration: none;
+ }
+
+ a:hover {
+ color: #A3BE8C;
+ }
+
+ .button {
+ display: inline-block;
+ background-color: #5E81AC;
+ color: #E5E9F0;
+ padding: 10px 20px;
+ border-radius: 5px;
+ cursor: pointer;
+ text-decoration: none;
+ }
+
+ .button:hover {
+ background-color: #81A1C1;
+ }
+
+ pre {
+ background-color: #2E3440;
+ padding: 10px;
+ border-radius: 5px;
+ overflow-x: auto;
+ }
+
+ code {
+ font-family: 'Courier New', monospace;
+ color: #D8DEE9;
+ }
+
+ </style>
+ <html lang="en">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>L3-NA-Aethora-15B Data Card</title>
+ <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
+ </head>
+ <body>
+ <div class="container">
+ <div class="header">
+ <h1>L3-NA-Aethora-15B</h1>
+ </div>
+ <div class="info">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/W0qzZK_V1Zt1GdgCIsnrP.png" alt="L3-NA-Aethora-15B">
+ <p>The Skullery Presents L3-NA-Aethora-15B.</p>
+ <p><strong>Creator:</strong> <a href="https://huggingface.co/steelskull" target="_blank">Steelskull</a></p>
+ <p><strong>Dataset:</strong> <a href="https://huggingface.co/datasets/TheSkullery/Aether-Lite-V1.2" target="_blank">Aether-Lite-V1.2</a></p>
+ <p><strong>Trained:</strong> 4 × A100 for 15 hours using rsLoRA and DoRA</p>
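The rsLoRA and DoRA methods named above correspond to the `use_rslora` and `use_dora` flags in 🤗 PEFT's `LoraConfig`. A minimal sketch of how such an adapter config might look, and of what rsLoRA actually changes — the rank, alpha, and target modules below are illustrative assumptions, not the card's actual hyperparameters:

```python
import math

# Sketch of a PEFT-style adapter config enabling rsLoRA + DoRA.
# r, lora_alpha, and target_modules are assumed values for illustration.
adapter_config = {
    "peft_type": "LORA",
    "r": 64,                  # adapter rank (assumed)
    "lora_alpha": 32,         # scaling numerator (assumed)
    "use_rslora": True,       # rank-stabilized LoRA
    "use_dora": True,         # weight-decomposed LoRA (magnitude + direction)
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
}

# rsLoRA replaces the standard adapter scaling alpha / r with
# alpha / sqrt(r), which keeps updates well-scaled at higher ranks.
plain_scale = adapter_config["lora_alpha"] / adapter_config["r"]
rs_scale = adapter_config["lora_alpha"] / math.sqrt(adapter_config["r"])
print(plain_scale, rs_scale)  # 0.5 4.0
```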
+ <h1>About L3-NA-Aethora-15B:</h1>
+ <pre><code>L3 = Llama3
+ NA = NON-ABLITERATED
+ </code></pre>
+ <p>L3-NA-Aethora-15B was created with a modified DUS (Depth Up Scaling) merge (a technique originally used by @Elinas): a passthrough merge that builds a 15B model, with specific adjustments (zeroing 'o_proj' and 'down_proj') to improve efficiency and reduce perplexity. The result of this merge is Meta-Llama-3-15b-Instruct.</p>
+ <p>Meta-Llama-3-15b-Instruct was then trained for 4 epochs with the rsLoRA and DoRA training methods on the Aether-Lite-V1.2 dataset, which contains ~82,000 high-quality samples and is designed to strike a fine balance between creativity and intelligence (roughly a 60/40 split) while keeping slop to a minimum.</p>
+ <p>This model is trained on the Llama 3 (L3) prompt format.</p>
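For reference, the Llama 3 Instruct prompt format wraps each turn in special header and end-of-turn tokens. A minimal sketch of assembling it by hand (in practice the tokenizer's chat template does this for you):

```python
# Build a single-turn Llama 3 Instruct prompt from its special tokens.
def build_l3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_l3_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The prompt ends with an open assistant header, so generation continues from there until the model emits `<|eot_id|>`.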
+ <p></p>
+ <p><strong>This is the NON-abliterated version and a TEST model.</strong></p>
+ <h2>Quants:</h2>
+ <p></p>
+ <h2>Dataset Summary (Filtered):</h2>
+ <p>Filtered phrases: GPT-slop, Claudisms</p>
+ <ul>
+ <li><strong>mrfakename/Pure-Dove-ShareGPT:</strong> Processed 3707, Removed 150</li>
+ <li><strong>mrfakename/Capybara-ShareGPT:</strong> Processed 13412, Removed 2594</li>
+ <li><strong>jondurbin/airoboros-3.2:</strong> Processed 54517, Removed 4192</li>
+ <li><strong>PJMixers/grimulkan_theory-of-mind-ShareGPT:</strong> Processed 533, Removed 6</li>
+ <li><strong>grimulkan/PIPPA-augmented-dedup:</strong> Processed 869, Removed 46</li>
+ <li><strong>grimulkan/LimaRP-augmented:</strong> Processed 790, Removed 14</li>
+ <li><strong>PJMixers/grimulkan_physical-reasoning-ShareGPT:</strong> Processed 895, Removed 4</li>
+ <li><strong>MinervaAI/Aesir-Preview:</strong> Processed 994, Removed 6</li>
+ <li><strong>Doctor-Shotgun/no-robots-sharegpt:</strong> Processed 9911, Removed 89</li>
+ </ul>
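The phrase filtering described above amounts to a substring scan over ShareGPT-style samples. A minimal sketch under stated assumptions — the banned-phrase list and helper below are illustrative guesses, not the actual filtering code used for Aether-Lite-V1.2:

```python
# Illustrative sketch of dropping ShareGPT-style samples that contain
# banned phrases. The phrase list is a guess at typical "GPT-slop"
# markers, not the real filter list.
BANNED_PHRASES = [
    "as an ai language model",
    "i cannot and will not",
    "it is important to note",
]

def keep_sample(sample: dict) -> bool:
    """Return False if any conversation turn contains a banned phrase."""
    for turn in sample.get("conversations", []):
        text = turn.get("value", "").lower()
        if any(phrase in text for phrase in BANNED_PHRASES):
            return False
    return True

samples = [
    {"conversations": [{"value": "Sure, here is the answer."}]},
    {"conversations": [{"value": "As an AI language model, I cannot..."}]},
]
kept = [s for s in samples if keep_sample(s)]
print(len(kept))  # 1
```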
+ <h2>Deduplication Stats:</h2>
+ <p>Starting row count: 85628, Final row count: 81960, Rows removed: 3668</p>
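As a quick sanity check, the reported deduplication counts are internally consistent:

```python
# Row counts as reported in the card.
starting, final = 85628, 81960
removed = starting - final
print(removed)  # 3668
```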
137
+ </div>
138
+ </div>
139
+ </body>
140
+ </html>