BlissfulEternalLotus


Got it. Thanks. I will try it.

 

I want to try AutoGen locally, so I wanted a good coding LLM to work with it. Any suggestions?
My system only has 16 GB of RAM and can only run a 7B model at OK speed.
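For anyone in the same spot: AutoGen can talk to a locally served model through any OpenAI-compatible endpoint (e.g. llama.cpp's `llama-server` or LM Studio). A minimal config sketch, assuming such a server is running on port 8080 — the model name and port here are placeholders, not recommendations:

```python
# Sketch of an AutoGen llm_config pointing at a local OpenAI-compatible
# server. The model name, port, and temperature are assumptions; swap in
# whatever 7B coding model your server is actually hosting.
config_list = [
    {
        "model": "local-7b-coder",               # placeholder model name
        "base_url": "http://localhost:8080/v1",  # assumed local server address
        "api_key": "not-needed",                 # local servers typically ignore this
    }
]

llm_config = {"config_list": config_list, "temperature": 0.2}

# This dict would then be passed to an AutoGen agent, e.g.
# AssistantAgent("assistant", llm_config=llm_config)
```

The point is that the model choice lives entirely in the server; AutoGen itself only needs the endpoint URL, so you can swap 7B models without touching the agent code.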

 

I frequently go through all the LLMs that *The Bloke* posts from time to time. What frustrates me is that there is no information about an LLM's specialization. Like, is it for **Coding, Roleplaying, Creative Writing, etc.**
I then go to the original LLM's page, but it sometimes contains no information, or drowns me in technical details that I don't understand.

And as for the leaderboards, they contain a bunch of numbers I have to look up one by one to understand.

My only source is Reddit posts and the comments here.

Am I missing something? From the looks of it, it feels like common knowledge that I somehow missed. I feel like a kid who woke up to a quiz at the end of class after sleeping through it.

I wish they'd come up with some extendable tensor chips that could work with old laptops.

Currently, 7B is the only size we can run comfortably. Even 13B is slower and needs quite a bit of adjustment.

 

In Stable Diffusion prompts, we put words in brackets to give them more emphasis. Is there any equivalent in LLM prompting?

Some say emphasis goes from bottom to top. Others say it's the other way around. What do you think is the right way, and why?