tejasvaidhya committed
Commit 243a6a5
1 Parent(s): c504387

Fixing some spacing issue

Files changed (1):
  1. README.md +8 -7
README.md CHANGED
@@ -10,13 +10,14 @@ pinned: false
 
 We release the Spectra Suite consisting of 54 models ranging from 99M to 3.9B parameters across different bitwidths:
 
- FloatLM: LLMs pretrained in FP16 (Half-Precision).
- TriLM: LLMs pretrained with effective ternary bitwidth.
- QuantLM 8-bit: FloatLM LLMs Quantized to 8-bits.
- QuantLM 6-bit: FloatLM LLMs Quantized to 6-bits.
- QuantLM 4-bit: FloatLM LLMs Quantized to 4-bits.
- QuantLM 3-bit: FloatLM LLMs Quantized to 3-bits.
- All models are released in unpacked (FP16 format) - compatible with FP16 GEMMs across any library supporting the LLaMa architecture.
+ * FloatLM: LLMs pretrained in FP16 (Half-Precision).
+ * TriLM: LLMs pretrained with effective ternary bitwidth.
+ * QuantLM 8-bit: FloatLM LLMs Quantized to 8-bits.
+ * QuantLM 6-bit: FloatLM LLMs Quantized to 6-bits.
+ * QuantLM 4-bit: FloatLM LLMs Quantized to 4-bits.
+ * QuantLM 3-bit: FloatLM LLMs Quantized to 3-bits.
+
+ All models are released in unpacked (FP16 format) - compatible with FP16 GEMMs across any library supporting the LLaMa architecture.
 
 ## Citation
 If you find these models or the associated paper useful, please cite the paper:
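
The updated README states that the models ship unpacked in FP16 and follow the LLaMa architecture, so any library that can load LLaMa-style checkpoints should work. Below is a minimal sketch of loading one such checkpoint with Hugging Face transformers; the repository id used here is an assumption for illustration, not taken from this commit.

```python
# Minimal sketch: loading a Spectra Suite checkpoint in FP16 with transformers.
# The repository id below is hypothetical; substitute the actual model id
# from the Spectra Suite collection.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SpectraSuite/TriLM_3.9B_Unpacked"  # hypothetical id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Unpacked FP16 weights load like any other LLaMa-style causal LM.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("The Spectra Suite consists of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are stored in plain FP16 rather than a packed ternary or low-bit format, no custom dequantization kernels are needed; standard FP16 GEMM paths apply.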