We also have LLaVA and BakLLaVA, two multimodal models — the former based on Llama and the latter on Mistral.
Evening_Ad6637
O.M.G. What an incredible amount of work! Wtf?! I am speechless.
You are the most angel-like wolf I know so far, and you really, really deserve a prize, dude!
Again: WTH?!
Yeah, I don't think the authors are intentionally bullshitting or intentionally doing "benchmark cosmetics"; maybe it's more a lack of knowledge about what's going on with (most of) the benchmarks, whose reputation has been ruined in the meantime.
heheh, I can't read that any more.. I've really become very prejudiced when it comes to that.. to be honest, when it comes to any comparison with GPT-4.
People really have to understand that even GPT-4 has been aligned, lobotomized, and massively downgraded in terms of its performance – for security reasons (which is understandable to me) – but this thing is still an absolute beast. If we consider all the restrictions GPT-4 has to undergo, all the smart people at OpenAI, all the resources at Microsoft, and so on, we have to realize that currently nothing is really comparable to GPT-4. Especially not 7B models.
Yes, it means "predict n tokens". Is it not easy to understand? I might change it back... For me it's important that a UI is not overloaded with "words", and unfortunately "Predict_n Tokens".. how can I say.. it 'looks' awful. So I'm looking for something more aesthetic but still easy to understand. It's difficult for me to find.
That's a pretty good idea! Thanks for your input. I will definitely make a note of it as an issue in my repo and see what I can do.
Thank you for saying that. It makes me feel valued for my work. I've already made a pull request, and Gerganov seems to like the work in general, so he would accept a merge. I still need to fix a few things here and there, though - the requirements of the llama.cpp dudes are very high : D (but I wouldn't expect anything less there heheh)
did you clone it from my repo?
u/ambient_temp_xeno ah, I have now seen that min-p has been implemented in the server anyway, so I have now added it too.
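For anyone curious what min-p does: it keeps only the tokens whose probability is at least some fraction (`min_p`) of the most likely token's probability, then renormalizes. This is just a minimal NumPy sketch of the idea, not the actual llama.cpp implementation; the function name and values are illustrative:

```python
import numpy as np

def min_p_filter(probs, min_p=0.05):
    """Zero out tokens whose probability is below min_p times the
    top token's probability, then renormalize the rest."""
    probs = np.asarray(probs, dtype=float)
    threshold = min_p * probs.max()          # cutoff scales with the top token
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

# e.g. with min_p=0.1, anything below 10% of the top probability is dropped
probs = min_p_filter([0.5, 0.3, 0.1, 0.06, 0.04], min_p=0.1)
```

The nice property compared to top-p is that the cutoff adapts to how confident the model is: when one token dominates, almost everything else is pruned; when the distribution is flat, more candidates survive.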
Yes, the OpenAI playground was my styling inspiration. I thought this was a good choice since a lot of users will be used to it.
the llama.cpp dev (gerganov) has already answered and accepts a merge : ))
Ah, one side note: selecting a model via the dialog is absolutely not intuitive. If you want to navigate into a folder, you have to press space twice. Do not press enter until you have decided on a specific folder. It doesn't matter that much if you are in a parent folder, since the script will search recursively - but of course, if you have many files, it could take a long time.
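The recursive search behavior mentioned above can be sketched like this (a hypothetical helper, not the script's actual code; the `.gguf` suffix is an assumption based on what llama.cpp models typically use):

```python
from pathlib import Path

def find_models(root, suffix=".gguf"):
    """Walk all subfolders of root and collect model files, sorted by path.
    This is why picking a parent folder still works - every matching file
    below it is found - but large trees take longer to scan."""
    return sorted(Path(root).rglob(f"*{suffix}"))

# e.g. find_models("~/models") would list every .gguf file under that tree
```

So picking a folder high up in the tree is safe; it just trades convenience for scan time.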
thanks for your feedback. That's strange; I couldn't reproduce this bug (or maybe I didn't understand the error?)
I'll answer you in more detail on GitHub.