this post was submitted on 15 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Dear friends,

I decided to write because many of you are active on Hugging Face with your AI models.

I have been testing AI models continuously, 8 to 10 hours a day, for a year now. And when I say I test models, I don't mean the way many do on YouTube to get likes, with trivia-style tests: tell me the capital of Australia, or tell me who the tenth president of the United States was. Those tests depress me as much as they make me smile; 40 years ago my Commodore VIC-20 could already answer them in BASIC!

I test models very seriously. Being a history buff, my questions lean heavily towards history, culture, geography, and literature. My tests try in every way to extract faithful answers and summaries from the AI models.

Now I note with great sadness that models are trained on a lot of data, but not enough attention is paid to whether the model can actually retrieve that data and return it to the user in a faithful and coherent manner.

If we only want to use models to play with invented creative stories or poetry, that is all fine; but when we get serious, the open-source models we can install locally seem very insufficient to me.

Furthermore, I note that models are rarely accompanied by good configuration or preset data; the user often has to work it out through trial-and-error calibration.

Another issue: the models are always generic, and there is no table of models listing their actual capabilities.

More guidance is needed, for example: "this model is good for medicine; this one has been trained on historical data," and so on.

Instead we find ourselves searching Hugging Face almost haphazardly, not to say in total disarray.

In plain words: since you work hard, you too should make sure that the models, besides being filled with data, are then able to use that data and give it to the user.

So let's take a step forward and improve things.

Claudio from Italy

[–] FullOf_Bad_Ideas@alien.top 1 points 10 months ago (1 children)

> Furthermore, I note that models are rarely accompanied by good configuration or preset data; the user often has to work it out through trial-and-error calibration.

Do you mean the prompt template? Those are provided by the more popular makers of fine-tunes (the word "trainers" doesn't sit well with me), but sometimes the documentation is lacking. When fine-tuning is as easy as it is right now, writing good documentation doubles the effort, so I understand that. I myself would rather spend time generating datasets or tweaking fine-tuning settings than documenting things; it's kind of a given that most people will prefer the fun stuff in their free time. For the vast majority of us this is a new, cool hobby, not paid work, so the tedious stuff is left undone.
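For anyone new to the term: a prompt template is just the fixed scaffolding a fine-tune expects around your input. A minimal sketch in Python, using the common Alpaca-style format as one example; the exact wording varies per fine-tune, so always check the model card:

```python
# Minimal sketch: wrapping a user question in an Alpaca-style prompt template.
# The exact template differs per fine-tune; this is just one common format.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the template with the user's instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Summarize the causes of the Thirty Years' War."))
```

Using the wrong template is one of the most common reasons a local fine-tune gives incoherent answers, which is exactly the calibration pain the original post describes.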

As for the rest: hallucinations are a hard problem to solve. You can try something like veryLLM to reduce them a bit, but I don't think there's a real fix for this, or any major hobbyist community effort working on one.
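I haven't dug into veryLLM itself, but most hallucination-reduction tricks boil down to the same pattern: make the model answer only from a source passage you supply. A rough sketch of that pattern; `generate` here is a hypothetical stand-in for whatever completion function your local model exposes, not any specific library's API:

```python
# Sketch of answer grounding: the model is instructed to answer strictly from
# a supplied source passage and to refuse otherwise. `generate` is a
# hypothetical stand-in for your local model's completion function.
def grounded_prompt(passage: str, question: str) -> str:
    return (
        "Answer the question using ONLY the source text below. "
        "If the answer is not in the source, reply 'not in the source'.\n\n"
        f"Source:\n{passage}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

def grounded_answer(generate, passage: str, question: str) -> str:
    # Any callable that maps a prompt string to a completion string works here.
    return generate(grounded_prompt(passage, question))
```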

[–] Creative_Bottle_3225@alien.top 1 points 10 months ago (1 children)

When you do serious historical research, time passes so quickly that you don't even notice. Unfortunately it takes time to get a model to give you the information it has stored. Even so, with some effort I have achieved notable results, for example the translation of a runic inscription, something that is difficult today even for specialists. This is why I say we need to spend more time making the model understand its purpose. If we limit ourselves to building them just for play, it's a shame. I have tried many models, from the most recent to the fastest. Lately I've been getting the best results running utopia-13b.Q4_K_M.gguf, which gives me decent speed, a warm tone with friendly dialogue, and above all real care in trying to give accurate results.
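For anyone who wants to try the same model locally, here is a minimal sketch using llama-cpp-python to load a GGUF file. The path and the generation parameters below are illustrative, not the settings the commenter used:

```python
# Minimal sketch: running a local GGUF model with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the model file has
# already been downloaded; path and parameters are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./utopia-13b.Q4_K_M.gguf",
    n_ctx=4096,    # context window size
    n_threads=8,   # tune to your CPU
)

result = llm(
    "Explain the historical significance of the Kensington Runestone.",
    max_tokens=512,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```

A Q4_K_M quantization like this one trades a little accuracy for much lower RAM use, which is why a 13B model runs at a tolerable speed on ordinary hardware.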

[–] FullOf_Bad_Ideas@alien.top 1 points 10 months ago (1 children)

What are your thoughts on the Llama 1 65B, Llama 2 70B, Mistral 7B, and Yi-34B models? I was never too fond of Llama 13B, either the first or the second version, since you could always find better responses in bigger models.

[–] Creative_Bottle_3225@alien.top 1 points 10 months ago

I think the difference in size between the 7B and 13B models mainly reflects how much data they can store and be queried about.