eachadea
AI & ML interests: None yet
Organizations: None yet
eachadea's activity
Dataset Access (18) · #1 opened about 1 year ago by puffy310
newest llama.cpp seems to crash now (7) · #21 opened over 1 year ago by WizardDave
Upload ggml-model-f16.bin with huggingface_hub · #2 opened about 1 year ago by eachadea
Upload ggml-model-a-q4_K_M.bin with huggingface_hub · #1 opened about 1 year ago by eachadea
Upload ggml-model-f16.bin with huggingface_hub · #2 opened about 1 year ago by eachadea
Upload ggml-model-q4_K_M.bin with huggingface_hub · #1 opened about 1 year ago by eachadea
Update README.md · #22 opened over 1 year ago by VGS52
uncensored version (8) · #16 opened over 1 year ago by Feng7815
Q5_X ggml models are not as accurate as the oldest version (4) · #6 opened over 1 year ago by KinzyLong
Bug? (1) · #9 opened over 1 year ago by ClaudioItaly
Are you going to redo ggml-vic13b-uncensored-q8_0.bin as well? (2) · #19 opened over 1 year ago by WizardDave
Purpose of This Model & Binary Format (2) · #1 opened over 1 year ago by danforbes
Commercial use (3) · #18 opened over 1 year ago by Codgas
How to run these quantised model. (4) · #17 opened over 1 year ago by Tarun1986
Unable to load with transformers library as config files are missing. (3) · #4 opened over 1 year ago by mlwithaarvy
ggml-vicuna-13b-1.1-q4_3 unrecognized tensor type 5 (6) · #10 opened over 1 year ago by NicRaf
"Backwards compatibility" of a model? (5) · #13 opened over 1 year ago by endolith
How do you use this? (3) · #1 opened over 1 year ago by endolith
ggml-vic13b-uncensored-q5_1.bin and ggml-vic13b-uncensored-q8_0.bin throw errors in newest oobabooga-webui (2) · #14 opened over 1 year ago by RandomLegend
AMD GPU Support? (1) · #12 opened over 1 year ago by Wats0n
just saying it breaks whenever i get individual models to load for oobabooga (3) · #5 opened over 1 year ago by hellothereeeee
Will a 1.1-uncensored follow? (3) · #11 opened over 1 year ago by Wubbbi
running the model in Python (2) · #3 opened over 1 year ago by Asaf-Yehudai
AutoModelForCausalLM.from_pretrained() error (1) · #2 opened over 1 year ago by liyang31163150
vicuna 1.1 13b q4_1 failed to load (bad float16) (2) · #9 opened over 1 year ago by couchpotato888
How to get running using fastchat on a m1 mac? (3) · #7 opened over 1 year ago by kkostecky
Outstanding Model (4) · #3 opened over 1 year ago by Phew
Memory requirement for 13B 4Bit (1) · #4 opened over 1 year ago by afoam
SOLVED Running this v1.1 on llama.cpp (2) · #3 opened over 1 year ago by JeroenAdam
My observation over the previous model (1) · #2 opened over 1 year ago by samjack
how to get 30B vicuna (3) · #6 opened over 1 year ago by baby1
where could get 30B (1) · #1 opened over 1 year ago by baby1
The problem in GPTQ Vicuna and GGML Vicuna (1) · #8 opened over 1 year ago by Feng7815
What's new? (2) · #1 opened over 1 year ago by BBLL3456
what's the origin train data (2) · #5 opened over 1 year ago by baby1
Could you try converting AlekseyKorshuk's "ethics filtering free" Vicuna model? (1) · #6 opened over 1 year ago by wojhoiw
Missing config.json? (1) · #1 opened over 1 year ago by eachadea
How to configure llama.cpp (1) · #5 opened over 1 year ago by killerx7
What is "-rev1"? (3) · #5 opened over 1 year ago by pymike00
What is the source, not anon8231489123_vicuna-13b-GPTQ-4bit-128g? (1) · #4 opened over 1 year ago by ai2p
ggml conversion error (6) · #1 opened over 1 year ago by Reggie
Safetensors for Oobaboonga? (2) · #4 opened over 1 year ago by manolofloyd
Tokenizer class LlamaTokenizer does not exist (8) · #2 opened over 1 year ago by xerxes01
Where did you get model weights from (3) · #2 opened over 1 year ago by tarunchand
Could someone put this file in a torrent, please? (4) · #2 opened over 1 year ago by edmundronald
quantized model to 4 but or 8 bit (2) · #1 opened over 1 year ago by thefcraft
Unable to run llama.cpp (9) · #1 opened over 1 year ago by cestoliv