Steelskull committed
Commit 167f120
1 Parent(s): 1db424f

Update README.md

Files changed (1):
  1. README.md +1 -0
README.md CHANGED
@@ -110,6 +110,7 @@ code {
  <p>The Skullery Presents L3-Aethora-15B.</p>
  <p><strong>Creator:</strong> <a href="https://huggingface.co/steelskull" target="_blank">Steelskull</a></p>
  <p><strong>Dataset:</strong> <a href="https://huggingface.co/datasets/TheSkullery/Aether-Lite-V1.2" target="_blank">Aether-Lite-V1.2</a></p>
+ <p><strong>Trained:</strong> 4 x A100 for 15 hours</p>
  <h1>About L3-Aethora-15B:</h1>
  <p>L3-Aethora-15B was crafted using the abliteration method to adjust model responses: the model's refusal behavior is inhibited, yielding more compliant and facilitative dialogue interactions. It then underwent a specialized merging process (originally used by @Elias), a passthrough merge producing a 15b model, with specific adjustments to the 'o_proj' and 'down_proj' settings to improve efficiency and reduce perplexity. This created AbL3In-15b.</p>
  <p>AbL3In-15b was then fine-tuned on the Aether-Lite-V1 dataset, containing ~82,000 high-quality samples designed to strike a fine balance between creativity and intelligence at about a 60/40 split while minimizing slop.</p>