Jack Boot (jackboot)
AI & ML interests: None yet
Organizations: None yet

jackboot's activity
There's a HUGE drop in popular knowledge from v2 to v2.5. · 21 · #1 opened 18 days ago by phil111
Meta-Llama-3-70B-Instruct or Meta-Llama-3.1-70B-Instruct? · 9 · #20 opened 30 days ago by mbanik
The `safety_mode` parameter? · 3 · #2 opened about 1 month ago by jukofyork
Feedback · 33 · #3 opened about 1 month ago by ChuckMcSneed
nothing between 4.0 and 6.0? · 1 · #2 opened about 1 month ago by jackboot
The tokenizer has changed just fyi · 12 · #2 opened 2 months ago by bullerwins
please add flux1-schnell GGUF model · 3 · #1 opened about 2 months ago by rafiislam
combined safetensors, but comfyui issue a error. · 9 · #3 opened about 2 months ago by demo001s
Plain unet can load? · 5 · #2 opened about 2 months ago by jackboot
Did something go wrong during training? · 6 · #6 opened 2 months ago by CamiloMM
EXL2 quants · 16 · #3 opened 2 months ago by Cytho
max_position_embeddings update in config.json · 3 · #2 opened 2 months ago by Inktomi93
Error when loading about bias · 1 · #2 opened 2 months ago by jackboot
"Experimental" = ZeroWw method. · 37 · #7 opened 3 months ago by ZeroWw
Hallucinations, misspellings etc. Something seems broken? · 21 · #10 opened 3 months ago by sam-paech
Please open mouth kiss the homies. · 6 · #1 opened 4 months ago by snombler
🚩 Report: Ethical issue(s) · 4 · #3 opened 4 months ago by Zedax
Some issues · 3 · #3 opened 4 months ago by Sierra369
Why doesn't it work with bitsnbytes 8 or 4bit? · 3 · #25 opened 4 months ago by jackboot
Concerns regarding design decisions based on purely academic benchmarks. · 15 · #1 opened 4 months ago by deleted
Anyone have the tokenizer? · 5 · #1 opened 4 months ago by jackboot
Still get refusals. · 2 · #1 opened 4 months ago by jackboot
Dark 103B 120B · 9 · #5 opened 5 months ago by BigHuggyD
Very nice model. · 116 · #3 opened 5 months ago by Iommed
Would love to try a quantized version! · 27 · #2 opened 5 months ago by dillfrescott
Concerns regarding Prompt Format · 6 · #1 opened 5 months ago by wolfram
🚩 Report: Legal issue(s) · 40 · #2 opened 5 months ago by chrisjcundy
Adapter is 48 bytes? · #1 opened 5 months ago by jackboot
Any chance of a 3.75bpw? · 5 · #3 opened 6 months ago by gghfez
3.75 please? · 2 · #2 opened 6 months ago by jackboot
Hi, I made gptq quant. · 12 · #3 opened 7 months ago by Kotokin
Feedback · 6 · #6 opened 7 months ago by Szarka
Why not post the lora? · 1 · #2 opened 7 months ago by jackboot
Exllamav2 Quants · 4 · #2 opened 8 months ago by llmixer
Maybe you can put some sillytavern settings? contects? and preset? · 3 · #7 opened 8 months ago by vurkan
How to use this? · 3 · #1 opened 8 months ago by steampunk333
Proper prompt? · 6 · #3 opened 8 months ago by damarges
Sticking a restrictive license on a model that's not even yours to begin with? · 2 · #14 opened 8 months ago by candre23
Chat template · 9 · #11 opened 8 months ago by MaziyarPanahi
Might consider attribution · 106 · #10 opened 8 months ago by arthurmensch
miqu truck · 1 · #1 opened 8 months ago by distantquant
Model · 25 · #5 opened 8 months ago by mrfakename
Is a higher quant possible? · 2 · #1 opened 8 months ago by jackboot
Please upload the full model first · 88 · #1 opened 8 months ago by ChuckMcSneed
I'm thinking miqu · 11 · #3 opened 8 months ago by modster
Base? Chat? What format is this? · 1 · #1 opened 9 months ago by jackboot
Finally decent results. · #1 opened 9 months ago by jackboot
The instruct preset and talking for you. · 2 · #2 opened 9 months ago by jackboot
Will it squeeze into 48? · 4 · #1 opened 9 months ago by jackboot
How is it working? · 1 · #1 opened 10 months ago by jackboot
How is it? · 2 · #1 opened 10 months ago by jackboot
Not for AI characters. · 2 · #1 opened 10 months ago by jackboot
Any chance for an EXL2 version? · 15 · #2 opened 11 months ago by wolfram
Can you upload qlora adapter files? · 6 · #1 opened 11 months ago by adamo1139
llama-compatibility · 34 · #11 opened 11 months ago by ehartford
Can't download · 7 · #2 opened 12 months ago by jackboot
Model is broken in latest llama.cpp · 1 · #1 opened 12 months ago by jackboot