Concedo (concedo)
AI & ML interests
Hi, I'm Concedo, also known as LostRuins. I'm the dev of KoboldCpp.
concedo's activity
Eval request - mini magnum unboxed
#23 opened about 2 months ago by concedo

Is there a plan for a magnum V2 version of this model?
4 replies · #1 opened about 2 months ago by SaisExperiments

Upload folder using huggingface_hub
2 replies · #1 opened about 2 months ago by concedo

Upload folder using huggingface_hub
2 replies · #3 opened about 2 months ago by concedo

[WIP] Upload folder using huggingface_hub (multi-commit ac840cbd2fa2506a8672ed415d6d4f8913465bff5e1a3567c66ab06c6974e87f)
#2 opened about 2 months ago by concedo
Kobo
3 replies · #1 opened 2 months ago by TheDrummer
[bug] KoboldCpp 1.71.1 incorrectly detects quant types for non-llama models, causing quality loss
2 replies · #1 opened 2 months ago by softfluffyboy
The tiger models tend to go crazy on LM Studio?
8 replies · #4 opened 3 months ago by Dihelson
K quants should not contain IQ4_NL types inside
4 replies · #4 opened 3 months ago by concedo

Add more models
1 reply · #1 opened 5 months ago by xzuyn

🏆.Best NSFW 11B model ever made.💦💦💦💦
3 replies · #4 opened 5 months ago by Ransss

Re-quant?
5 replies · #5 opened 5 months ago by BlueNipples

🚩 Report: Legal issue(s)
40 replies · #2 opened 5 months ago by chrisjcundy

Reconverted and requantized with latest GGUF to fix llama3 tokenizer
10 replies · #5 opened 5 months ago by concedo
May need reconversion
1 reply · #1 opened 5 months ago by concedo

Upload 7 files
1 reply · #1 opened 5 months ago by concedo

May require reconversion due to llama.cpp enhancements
8 replies · #1 opened 5 months ago by concedo

Training Mistake, model is ruined.
4 replies · #1 opened 6 months ago by concedo
LLaMa-3-8B mmproj?
4 replies · #2 opened 6 months ago by Joseph717171

Regarding Apr 3 conversion
2 replies · #1 opened 6 months ago by concedo

Requesting Q2_K and Q3_K_S
1 reply · #1 opened 6 months ago by concedo

This is an excellent model
#3 opened 12 months ago by concedo

Context Size
6 replies · #1 opened over 1 year ago by Lumpen1

Interesting stats
11 replies · #25 opened over 1 year ago by BBLL3456

cool idea! did it work?
1 reply · #2 opened over 1 year ago by ehartford

How to configure koboldcpp?
3 replies · #3 opened over 1 year ago by kexul
I can't get GGML GPU acceleration to work with Wizard-Vicuna-30B 5_1?
10 replies · #3 opened over 1 year ago by Goldenblood56
Upload folder using huggingface_hub
2 replies · #1 opened over 1 year ago by concedo

llama.cpp breaks quantized ggml file format
4 replies · #11 opened over 1 year ago by Waldschrat

Tutorial
13 replies · #4 opened over 1 year ago by tahaw863

Could we get this model in ggml format?
4 replies · #2 opened over 1 year ago by concedo

Adding `safetensors` variant of this model
1 reply · #1 opened over 1 year ago by SFconvertbot