---
license: apache-2.0
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

## I am still building the structure of these descriptions.

Over time, these will contain more and more content to help you find the best models for a given purpose.

# openbuddy-falcon-7b-v6-bf16 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [openbuddy-falcon-7b-v6-bf16](https://huggingface.co/OpenBuddy/openbuddy-falcon-7b-v6-bf16)
## Note:

This is v6 of OpenBuddy's Falcon-7B variant. OpenBuddy did not provide a real `Model Card` for v6, so refer to the v5 `Model Card` instead:

https://huggingface.co/OpenBuddy/openbuddy-falcon-7b-v5-fp16

OpenBuddy provides strong multilingual model variants. On their Huggingface organization card they state:

> Our mission with OpenBuddy is to provide a free, open, and offline-capable AI model that operates on users' devices, irrespective of their language or cultural background. We strive to empower individuals worldwide to access and benefit from AI technology.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software uses it and can therefore run this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
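
For example, the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings can load GGUF files directly. The following is a minimal sketch, assuming you have downloaded one of the quantized files from this repository; the local file name and prompt are illustrative, not fixed:

```python
# Minimal sketch: run one of the GGUF quantizations of this model with llama-cpp-python.
# The file name below is an assumption - use whichever .gguf file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="openbuddy-falcon-7b-v6-bf16.Q4_0.gguf",  # assumed local file name
    n_ctx=2048,                                          # context window size
)

out = llm(
    "User: What is the capital of France?\nAssistant:",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```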

# Quantization variants

There are a number of quantized files available. Here is how to choose the best one for your needs:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that prevent certain models from being compatible with the modern K-quants.
Falcon 7B models, for example, cannot be quantized to K-quants.
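
If you want to fetch a specific quantized file programmatically, the `huggingface_hub` client can do so. A short sketch follows; both the `repo_id` and the `filename` are assumptions and should be replaced with the actual names from this repository's file list:

```python
# Sketch: download a specific legacy quant (Q4_0) of this model from the Hugging Face Hub.
# repo_id and filename are assumptions - check the repository's file list for the exact names.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="maddes8cht/openbuddy-falcon-7b-v6-bf16-gguf",  # assumed repository id
    filename="openbuddy-falcon-7b-v6-bf16.Q4_0.gguf",       # assumed file name
)
print("Downloaded to:", path)
```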

# K-quants

K-quants are based on the idea that quantizing different parts of the model affects quality in different ways. If you quantize certain parts more aggressively and others less, you get a more capable model at the same file size, or a smaller file size and lower memory load with comparable quality.
So, if possible, use K-quants.
With a Q6_K quantization you should find it very hard to detect any quality difference from the original model: ask your model the same question twice and you may well see a bigger difference between the two answers than between the quantized and the original model.
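
A small sketch of that sanity check: ask the same question twice at a non-zero temperature and compare the answers. Since this Falcon 7B model is only provided as legacy quants, the assumed file name below uses Q8_0; for models that do have a Q6_K file, the same comparison applies:

```python
# Sketch: sampling variance between two answers to the same prompt is often larger
# than the quality loss introduced by quantization. The file name is an assumption.
from llama_cpp import Llama

llm = Llama(model_path="openbuddy-falcon-7b-v6-bf16.Q8_0.gguf")  # assumed local file name

prompt = "User: Explain in one paragraph what quantization does to a language model.\nAssistant:"
for i in range(2):
    out = llm(prompt, max_tokens=80, temperature=0.8)
    print(f"--- answer {i + 1} ---")
    print(out["choices"][0]["text"].strip())
```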


# Original Model Card:
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)