Exclusively 70B models. Current favorite is:
- Role-playing: lzlv 70B GPTQ, the gptq-4bit-32g-actorder_True branch
Although ask me again a week from now and my answer will probably change. That's how quickly things are improving.
Okay, but when are they going to turn Amazon Alexa into an actual AI?
Toner held firm in her belief that Altman shouldn't be at the helm of OpenAI after Sutskever reversed course. During those initial reinstatement discussions, she said that because the company charter charges its board with creating AI that "benefits all of humanity," it was more consistent with that mission for the company to be destroyed in Altman's absence than to see him as its chief executive again.
https://www.yahoo.com/news/openai-board-apparently-seething-rage-195257117.html
Wow, so this bitch wanted to bring the whole company down rather than allow Sam Altman back on. Unbelievable. EA advocates can walk the plank.
It's like when Tony Stark was in a cave and made a prototype Ironman suit.
Damn, no 13B?
Then they should be furious with Sutskever for wanting to slow things down. Slowing things down is not in the best interest of their shareholders. Sutskever needs to go now, and Sam Altman should be reinstated. Bring on the singularity.
We discover it was Jimmy Apples sending us inferences all this time.
According to TheBloke, the sequence length is 8192 ctx, so I'm assuming 8192 ctx is its default and it can be extended up to 200k ctx via alpha_scale?
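For anyone curious what alpha scaling actually does: it raises the RoPE frequency base so the positional encoding stretches over a longer context. Here's a minimal sketch, assuming the common NTK-aware heuristic base' = base * alpha^(d/(d-2)) (the exact formula and defaults vary by loader; the function names and numbers here are illustrative, not from any specific model's config):

```python
# Rough sketch of NTK-aware RoPE "alpha" scaling, the idea behind
# alpha-style settings in exllama-type loaders. Illustrative only.

def scaled_rope_base(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
    """Adjust the RoPE frequency base by alpha^(d / (d - 2)),
    the common NTK-aware scaling heuristic."""
    return base * alpha ** (head_dim / (head_dim - 2))

def rope_frequencies(base: float, head_dim: int = 128):
    """Per-pair inverse frequencies: theta_i = base^(-2i / d)."""
    return [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# With alpha > 1 the base grows, every frequency shrinks, and the
# positional encoding "stretches" to cover positions past the
# original trained context length.
default = rope_frequencies(10000.0)
stretched = rope_frequencies(scaled_rope_base(2.5))
```

Note that stretching trades a little short-context precision for long-context reach, which is why people only raise alpha as far as they actually need.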
I would shit a brick if they say, "Oh by the way everyone, we dropped a new Mistral model just now."
For real-time uses like Voxta+VaM, EXL2 4-bit is better.
Wow, I didn't expect to see a Virt-a-Mate reference. You left no stone unturned and are doing God's work.
At the moment my pick hasn't changed, but Wolfram released a good rankings list that makes me want to test Tess-XL-v1.0-120b and Venus-120b.
I'm using lzlv GPTQ via ST's Default + Alpaca prompt and didn't have misspelling issues. Wolfram did notice misspelling issues when using the Amy preset (e.g. "sacrficial"), so maybe switch presets?
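For anyone who hasn't looked at what an Alpaca-style preset actually sends, it's roughly the structure below. This is a minimal sketch of the common Alpaca format, not the verbatim strings from ST's preset, and `alpaca_prompt` is just a hypothetical helper name:

```python
# Minimal sketch of an Alpaca-style instruct prompt, similar in shape
# to SillyTavern's Alpaca preset. Exact header wording may differ.

def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Build an Alpaca-format prompt; the Input block is optional."""
    header = (
        "Below is an instruction that describes a task"
        + (", paired with an input that provides further context" if user_input else "")
        + ". Write a response that appropriately completes the request."
    )
    parts = [header, "### Instruction:\n" + instruction]
    if user_input:
        parts.append("### Input:\n" + user_input)
    parts.append("### Response:\n")  # model continues from here
    return "\n\n".join(parts)
```

Presets mostly differ in these header strings and separators, which is plausibly why one preset trips a model into typos while another doesn't.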