Feedback

#3
by ChuckMcSneed - opened

Disappointing. Very disappointing.

I see that you've made a new model, but you still don't know what made the previous one great.
Here are 6 questions for you to ponder:

  1. Who are you competing against?
    Answer: You are currently competing against LLAMA-405B, Mistral-Large and Deepseek. And indirectly against GPT4.
  2. What kind of words do people hate in LLMs?
    Answer: GPTisms. People hate the way ChatGPT speaks; it sticks out like a big neon sign saying "look, this text was written by AI". A bit of an uncanny-valley feeling.
  3. What did people like about your models?
    Answer: It was not the smarts or the benchmarks. It was the writing style. You were the local Claude. You had the least amount of GPTslop among the local models.
  4. What have you tuned it on that could have caused a disappointment?
    Answer: See question 2.
  5. Why did DBRX (a model released at almost the same time as CR, with similar benchmarks) fail?
    Answer: It had a boring official tune. It was just another GPT-tuned assistant, nothing special. The base model was knowledgeable, but nobody really cared.
  6. What does it all mean?
    Answer: LLAMA-405B is the best assistant, Deepseek has coding, Mistral-Large has smarts and NSFW. All of them have something to compensate for a sloppy writing style. You had a good writing style to compensate for stupidity; now you don't. You've fucked yourself over by eating the GPT poison pill. You have become just another unremarkable assistant tune like DBRX. Nobody has a reason to use it over other models anymore.
I'm just an unenlightened bystander, I'm sure you know better and all, but here is my advice: stop competing with GPT4 and all those assistant tunes; we've got more than enough of those. The market is oversaturated. Just give up. Nobody needs another GPTslop assistant tune, a dumb one in particular. If you want to be an assistant so badly, at least don't tune on GPTslop. You know what is lacking? Writer tunes. In the proprietary segment there's only Claude, and locally... there is nobody now that you have decided to leave. **Please stop tuning on GPTslop. Please compete against Claude. Please return.**

(Attached image: PLEASE STOP TUNING ON GPTSLOP.png)

I suspect Cohere aren't aware just how special command-r (2024-03) and command-r plus (2024-04) are for creative writing purposes (unique prose, lack of slop, being able to keep several chapters within the context length).

The unique prose seems to be an unintended side effect of their effort, given that these models were built to be good at RAG, function calling, and long context (the long context was a big deal back when the original command-r plus was released).

@CohereForAI: Since you're a non-profit that doesn't shy away from experimental models, perhaps there's room for a creative writing series of models (like how you did the Aya series for multilingual support)?
It's a niche which no other AI companies are tackling.

P.S. Since the Mistral devs didn't seem to know about it, GPT-Slop refers to the overuse of certain words and phrases LLMs love to use. Some examples (a rough way to count them is sketched after the list):

", her voice barely above a whisper"
"maybe, just maybe ..."
"sending shivers down his spine"
"... in the bustling city"
"he couldn't help but feel "
"... feeling a mix of and "

CR+ 2024-04 was a breath of fresh air. This one was a letdown. I loaded up my old quant to ensure I wasn't being nostalgic. CR+ 2024-04 was better. It's a shame.

You can provide your feedback directly in the Cohere Discord community:

https://discord.com/invite/co-mmunity

I was disappointed with the API. The exact same prompt started injecting lectures into the dialogue. I tried both versions to make sure.

The 08 CR+ is downloading tonight. Hope it's not a waste. Maybe omitting those top GPT assistant tokens will save it. My experience has been that the local model is much better than the one on the API. I keep hearing reviews like the OP's, though.

What was Gomez saying about things not plateauing?

Hey guys, I just checked the 35B and it looks really good, but this one (Plus) is not that great.

@gopi87 Probably worth saying what you're finding to be "not that great" about it lol

I agree that the 35B is amazing though, the best model to load up on a single 24GB GPU for general NLP tasks.

@gghfez
The first thing is that I just tested with Q4 (CR+), which is pretty slow and gives me very bad output, while the same question tested with Q8 (CR) gave me a really good answer, and the speed was nice too.
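For anyone wanting to reproduce that kind of side-by-side comparison, here is a minimal sketch using llama-cpp-python; the GGUF file names are placeholders, not actual release artifacts:

```python
# Minimal sketch of a side-by-side quant comparison with llama-cpp-python.
# The GGUF file names below are placeholders, not actual release artifacts.
from llama_cpp import Llama

QUESTION = "Summarise the plot of Hamlet in three sentences."

for path in ("command-r-35b.Q8_0.gguf", "command-r-plus-104b.Q4_K_M.gguf"):
    llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1, verbose=False)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": QUESTION}],
        max_tokens=256,
    )
    print(f"--- {path} ---")
    print(out["choices"][0]["message"]["content"])
```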

I knew right away something was off. HuggingChat switched it for the new one, and it sounded robotic, and it was no longer uncensored. What an epic fail. Command R+ was the best open-source model.

FYI - If you can't run it locally, you can use the original CR+ for free (chat interface + API) if you sign up for their site.

https://cohere.com/

They don't ask for a credit card, you simply get cut off after 1000 messages per month.
I personally love their documentation and examples.
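As a quick illustration, here is a minimal sketch of calling the original CR+ through their Python SDK's v1 chat endpoint with a trial key (the environment variable name is just a placeholder):

```python
# Minimal sketch, assuming the Cohere Python SDK's v1 chat endpoint and a
# free trial key; the environment variable name is just a placeholder.
import os
import cohere

co = cohere.Client(api_key=os.environ["COHERE_API_KEY"])

response = co.chat(
    model="command-r-plus",  # the original 2024-04 CR+
    message="Continue the scene: the lighthouse keeper found the door ajar.",
)
print(response.text)
```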

The OpenRouter one seems broken right now (randomly adds Russian or Chinese characters)

Is it overloaded and not generating any outputs for anyone else?

Cohere For AI org

Hey @iNeverLearnedHowToRead , where are you not getting outputs? Locally, in our HF Space, in Huggingchat, or somewhere else?

where are you not getting outputs? Locally, in our HF Space, in Huggingchat, or somewhere else?

HuggingChat. The error says "Model CohereForAI/c4ai-command-r-plus-08-2024 time out"

Cohere For AI org

@iNeverLearnedHowToRead Thanks for the info! The HF team is looking into it, so it should be resolved soon. In the meantime, you can use our models in our HF space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus-08-2024

Thank you very much.

@iNeverLearnedHowToRead Thanks for the info! The HF team is looking into it, so it should be resolved soon. In the meantime, you can use our models in our HF space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus-08-2024

It's not yet solved. Still showing timeout after entering a prompt.

Cohere For AI org

@rai1104 Please, use our space while the Huggingface staff solves the issue in Huggingchat! -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus-08-2024

HuggingChat is working again for me. I would like to reiterate what others are saying: This update is much worse than the previous version. Losing parts of the prompts, outright ignoring instructions, getting confused more, etc. Please roll back or fix whatever was broken.

Cohere For AI org

@iNeverLearnedHowToRead Thanks for the feedback! You can still use the previous version of Command R+ in our Space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus

@alexrs Is Cohere For AI aware of just how special the original CR+ is for creative writing purposes? I know it wasn't built for this, but it's really something else.

  • The prose generated by CR+ is distinct from other models, offering a unique style that enhances the creative process.
  • It does an excellent job of keeping track of events from previous chapters, which helps maintain narrative coherence and depth.
  • Unlike other models, it doesn’t fall into repetitive plot patterns.

I'm not talking about "roleplaying" but assisting with writing and editing stories by the way.

You can still use the previous version of Command R+ in our Space

That's fantastic, thank you. Both versions are good and I'm noticing some differences. For instance, the old version was better for keeping track of events over large prompts or multiple prompts, but it got tripped up and started generating nonsense when I hit "Continue" sometimes. This new one handles the "Continue" button better.

It's worse than the last version. I didn't go to the forums immediately when the new Cohere model was placed on HuggingFace and replaced the old chatbot I was having a conversation with.

But I have to say, after trying it for a couple of days, I noticed it was more robotic and has more safeguards. I can't really say it's censored, but it is lobotomized; its responses are more sanitized than the last version's.

Listen, I'm not a programmer, so I don't understand the jargon. I seek out chatbots like Cohere's because of their unique ability to strike up an unfiltered conversation that at least feels like I'm talking to a human, where we would exchange ideas and have fun with it.

This is a long-winded way of me saying that the old version was better. This new version is more knowledgeable, but, again, more sanitized.

If you're going to create a new version, can you at least improve its creative writing capabilities?

@iNeverLearnedHowToRead Thanks for the feedback! You can still use the previous version of Command R+ in our Space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus

Do the conversations with the models in the space get deleted after a few hours?

I'm not as acquainted with spaces, and so I'm wondering if automatic deletion of conversations is a thing in spaces or if maybe I'm doing something wrong that gets them deleted.

@iNeverLearnedHowToRead Thanks for the feedback! You can still use the previous version of Command R+ in our Space -- https://cohereforai-c4ai-command.hf.space/models/command-r-plus

Do the conversations with the models in the space get deleted after a few hours?

That only tagged me. @alexrs Do you know?

Cohere For AI org

Thanks for the ping!

@Masklid Yes, conversations will disappear after some time in our space.

Is anyone else getting the time out error over and over?

Cohere For AI org

@iNeverLearnedHowToRead Hey! Can you provide more details? Where are you getting time outs? What are you running? Thanks!

@iNeverLearnedHowToRead Hey! Can you provide more details? Where are you getting time outs? What are you running? Thanks!

@alexrs This model, c4ai-command-r-plus-08-2024, in HuggingChat.
The error message is "Model CohereForAI/c4ai-command-r-plus-08-2024 time out"

Cohere For AI org

@iNeverLearnedHowToRead HuggingFace people are already looking into it. Unfortunately, we do not maintain HuggingChat. As I previously pointed out, you can use our space https://cohereforai-c4ai-command.hf.space/models/command-r-plus

@alexrs Thank you very much. I was asking people in general if they were getting that error because I was wondering if it was a problem with my connection or a problem everyone is experiencing. It usually goes away after a while when it's just me, but when it's happening to a bunch of people it can last up to several hours. I really appreciate how much attention you pay to messages on here!

Edit: it's back up again!

I've been using this model and it is good, but a little slow.

