maddes8cht committed
Commit 9c9f851
1 Parent(s): de23514

"Update README.md"

Files changed (1): README.md (+55, -0)
---
license: apache-2.0
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

## I am still building the structure of these descriptions.

These will receive more and more content over time to help you find the best models for a given purpose.

# openbuddy-falcon-7b-v6-bf16 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [openbuddy-falcon-7b-v6-bf16](https://huggingface.co/OpenBuddy/openbuddy-falcon-7b-v6-bf16)

## Note:

This is v6 of OpenBuddy's Falcon-7B variant. Somehow they forgot to provide a real `Model Card` for v6, so refer to the v5 `Model Card` instead:

https://huggingface.co/OpenBuddy/openbuddy-falcon-7b-v5-fp16

OpenBuddy provides strong multilingual model variants. On their Hugging Face organization card they say:

> Our mission with OpenBuddy is to provide a free, open, and offline-capable AI model that operates on users' devices, irrespective of their language or cultural background. We strive to empower individuals worldwide to access and benefit from AI technology.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
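If you want to try a GGUF file directly from Python, the `llama-cpp-python` bindings wrap llama.cpp. The sketch below is a minimal example, assuming you have already downloaded one of the quantized files from this repository; the file name is a placeholder, not a guaranteed artifact name.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a downloaded GGUF file.
from llama_cpp import Llama

# Placeholder file name - substitute the quantized file you actually downloaded.
llm = Llama(model_path="openbuddy-falcon-7b-v6.Q4_0.gguf")

# Run a short completion to verify the model loads and responds.
output = llm("User: Hello, who are you?\nAssistant:", max_tokens=64)
print(output["choices"][0]["text"])
```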
# Quantization variants

A number of quantized files are available. To see exactly which variants this repository offers, you can list its files (see the sketch below); the sections that follow explain how to choose among them.
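A minimal sketch using the `huggingface_hub` library; the repository id below is an assumption derived from this model's name and may differ from the actual id shown at the top of this page.

```python
# Minimal sketch, assuming `pip install huggingface_hub`.
from huggingface_hub import list_repo_files

# Hypothetical repository id - replace with the id of this repo.
repo_id = "maddes8cht/OpenBuddy-openbuddy-falcon-7b-v6-bf16-gguf"

# Print only the GGUF files, i.e. the available quantization variants.
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)
```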
# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8_0 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that prevent certain models from being compatible with the modern K-quants.
Falcon 7B models cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantizing different parts of the model affects quality in different ways. Quantizing some parts more aggressively and others less yields either a more capable model at the same file size, or a smaller file size and lower memory load at comparable quality.
So, if possible, use K-quants.
With a Q6_K you should find it very hard to detect a quality difference from the original model - ask your model the same question twice and you may well see bigger differences between the two answers.
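Once you have picked a variant, a hedged sketch for downloading it with `huggingface_hub`; again, the repository id and file name are placeholders to be taken from the listing above.

```python
# Minimal sketch, assuming `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

# Hypothetical repo id and file name - take both from the file listing above.
path = hf_hub_download(
    repo_id="maddes8cht/OpenBuddy-openbuddy-falcon-7b-v6-bf16-gguf",
    filename="openbuddy-falcon-7b-v6.Q4_0.gguf",
)
print(path)  # local path to the cached GGUF file
```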
# Original Model Card:

(OpenBuddy did not publish a model card for v6; see the v5 model card linked in the note above.)

<center>
<a href="https://maddes8cht.github.io"><img src="/assets/buttons/maddes8cht-github-io.jpg" alt="Website" /></a>
<a href="https://stackexchange.com/users/26485911"><img src="https://stackexchange.com/users/flair/26485911.png" width="208" height="58" alt="profile for maddes8cht on Stack Exchange, a network of free, community-driven Q&amp;A sites" title="profile for maddes8cht on Stack Exchange, a network of free, community-driven Q&amp;A sites"></a>
<a href="https://github.com/maddes8cht"><img src="/assets/buttons/github-button.jpg" alt="GitHub" /></a>
<a href="https://huggingface.co/maddes8cht"><img src="/assets/buttons/huggingface-button.jpg" alt="HuggingFace" /></a>
<a href="https://twitter.com/maddes1966"><img src="/assets/buttons/twitter-button.jpg" alt="Twitter" /></a>
</center>