dev-slx committed
Commit ef7dbf3 • 1 Parent(s): 206ad85

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED

@@ -22,9 +22,9 @@ _Fast Inference with Customization:_ As with our previous version, once trained,
 
 - **Github:** https://github.com/slicex-ai/elm-turbo
 
-- **HuggingFace** (access ELM Turbo Models in HF): 👉 [here](https://huggingface.co/collections/slicexai/elm-turbo-66945032f3626024aa066fde)
+- **HuggingFace** (access ELM Turbo Models in HF): 👉 [here](https://huggingface.co/collections/slicexai/llama31-elm-turbo-66a81aa5f6bcb0b775ba5dd7)
 
-## ELM Turbo Model Release
+## ELM Turbo Model Release (Llama 3.1 slices)
 In this version, we employed our new, improved decomposable ELM techniques on a widely used open-source LLM, `meta-llama/Meta-Llama-3.1-8B-Instruct` (8B params) (check the [Llama license](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE) for usage). After training, we generated three smaller slices with parameter counts ranging from 3B to 6B.
 
 - [Section 1.](https://huggingface.co/slicexai/Llama3.1-elm-turbo-4B-instruct#1-run-elm-turbo-models-with-huggingface-transformers-library) 👉 instructions to run ELM Turbo with the Huggingface Transformers library.
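
The Section 1 link above covers the official instructions; as a rough orientation, loading one of these slices looks like any other causal LM on the Hub. The sketch below is an assumption-laden illustration, not the repo's documented recipe: the model id `slicexai/Llama3.1-elm-turbo-4B-instruct` is taken from the link above, and the hand-written prompt helper assumes the slices use the standard Llama 3.1 chat format (in practice you would let `tokenizer.apply_chat_template` produce this for you).

```python
def build_llama31_chat_prompt(user_message: str) -> str:
    # Single-turn prompt in the Llama 3.1 chat format (assumed here; the
    # tokenizer's chat template is the authoritative source of this layout).
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def generate_reply(
    user_message: str,
    model_id: str = "slicexai/Llama3.1-elm-turbo-4B-instruct",  # 4B slice from this release
    max_new_tokens: int = 128,
) -> str:
    # Heavy third-party imports are kept inside the function so the prompt
    # helper above stays importable without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(
        build_llama31_chat_prompt(user_message), return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )


# Example (downloads several GB of weights; requires accepting the Llama license):
# print(generate_reply("Summarize what an ELM Turbo slice is."))
```

The lazy imports and the commented-out call keep the sketch cheap to import; the actual download and generation only happen when `generate_reply` is invoked.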