Thireus

AI & ML interests: None yet
Organizations: None yet

Thireus's activity
- 5.5 bits and 6bits per weight? · #1 opened 17 days ago by Thireus
- CORRECTION: THIS SYSTEM MESSAGE IS ***PURE GOLD***!!! · 16 · #33 opened 29 days ago by jukofyork
- FP8 Checkpoint version size mismatch? · 2 · #15 opened 2 months ago by Thireus
- What is the "system" prompt? · 3 · #13 opened 6 months ago by kk3dmax
- First Word Ignored Issue / Single Word Instruction · 20 · #11 opened 6 months ago by pandora-s
- Prompt format · 1 · #12 opened 6 months ago by Thireus
- Incomplete Answers · 7 · #59 opened 10 months ago by samparksoftwares
- Weird answers for short instructions such as "Hi"? · #2 opened 6 months ago by Thireus
- Instruct version? · 4 · #13 opened 6 months ago by Thireus
- Higher PPL than Mixtral? · 3 · #11 opened 10 months ago by Thireus
- exl2-2 please? · 8 · #1 opened 10 months ago by Thireus
- Safetensors version? · 6 · #2 opened 11 months ago by matatonic
- Casual system prompts give very bad results · #7 opened 11 months ago by Thireus
- tokenization_yi.py · 6 · #1 opened 11 months ago by Stilgar
- No need to trust_remote_code · 1 · #6 opened 11 months ago by Weyaxi
- Orca 2 upgrade planned? · 1 · #6 opened 11 months ago by Thireus
- 3bpw not working anymore · 6 · #1 opened 11 months ago by mpasila
- Keep up the good work! · 3 · #3 opened 11 months ago by Thireus
- exl2 quanitzed version · 3 · #1 opened 11 months ago by Thireus
- Any positive results so far? · 1 · #1 opened 11 months ago by Thireus
- Any plans for V1.1? · 1 · #17 opened 11 months ago by Thireus
- h8 instead of h6 for 8bpw versions · 1 · #1 opened 11 months ago by Thireus
- Really impressive! · 15 · #1 opened 11 months ago by Thireus
- Update bin2safetensors/convert.py · #2 opened about 1 year ago by Thireus
- Script used to convert the original model · #1 opened about 1 year ago by Thireus
- Update bin2safetensors/convert.py · 1 · #1 opened about 1 year ago by Thireus
- Convertion process · 3 · #1 opened about 1 year ago by Thireus
- Prompt format? · 1 · #3 opened about 1 year ago by Thireus
- Prompt format? · 1 · #9 opened about 1 year ago by Thireus
- I wonder what is the difference between version 1.1 and 1.2 · 16 · #2 opened about 1 year ago by Flanua
- GGML version possible/coming? · 2 · #8 opened about 1 year ago by Thireus
- Very good model, suprisingly better than WizardLM-30B, why? · 1 · #2 opened over 1 year ago by Thireus
- Best parameters for 24GB VRAM? · 1 · #2 opened over 1 year ago by Thireus
- Prompt format? · 1 · #1 opened over 1 year ago by Thireus
- Prompt format? · 1 · #1 opened over 1 year ago by Thireus
- Prompt format? · 5 · #1 opened over 1 year ago by Thireus
- Excellent model - Any plans for further finetuning and/or 65b? · #6 opened over 1 year ago by Thireus
- [Feature Request] Add timestamp (date - time) to evaluation tables · 1 · #43 opened over 1 year ago by Thireus
- Unfortunately I can't run on text-generation-webui · 11 · #1 opened over 1 year ago by Suoriks
- You're blazing fast! · 9 · #1 opened over 1 year ago by Thireus
- Gibberish on 'latest', with recent qwopqwop GPTQ/triton and ooba? · 7 · #2 opened over 1 year ago by andysalerno
- Prompt Format? · 7 · #1 opened over 1 year ago by teknium
- ggml-vic13b-uncensored-q5_1.bin and ggml-vic13b-uncensored-q8_0.bin throw errors in newest oobabooga-webui · 2 · #14 opened over 1 year ago by RandomLegend
- [GUIDE] Launch Q5_1 model with oobabooga's text-generation-webui · 5 · #5 opened over 1 year ago by Thireus
- Your converted model performs better, and I don't understand why · 3 · #4 opened over 1 year ago by Thireus
- here is a google colab with 1.1 support · 6 · #3 opened over 1 year ago by eucdee
- Cannot load the model · 26 · #2 opened over 1 year ago by horstao
- do you have 8bit version? with 8bit works better than 4bit? · 3 · #13 opened over 1 year ago by cowboymind
- eachadea/vicuna-13b-1.1 vs TheBloke/vicuna-13B-1.1-HF · 7 · #2 opened over 1 year ago by Thireus