Creative_Bottle_3225

joined 1 year ago
[–] Creative_Bottle_3225@alien.top 1 points 11 months ago

What is the difference between the normal and the 16K version?

 

Proposal

I have used many AI models: some are fast, some are consistent, some are very good, some write long texts, others explain well. I have come to my own conclusion: if there were a model that mixed Zephyr Beta, which is fast but too academic and often disobedient, with the Utopia 13B model, which has truly humanised language at its best and manages to write descriptions with a unique atmosphere, we would probably have reached a high level.

[–] Creative_Bottle_3225@alien.top 1 points 11 months ago (1 children)

I tried this model a little while ago with LM Studio and noticed that it does not have GPU acceleration. A pity.

 

30,000 AI models

Too many, really. But from what I read in conversations and posts, I notice one thing: you all try out models all the time, and that's fine, but I haven't yet read that anyone habitually uses one model over the others. It seems like you use one model for a few days and then start with a new one. Don't you have a favourite? Which one?

[–] Creative_Bottle_3225@alien.top 1 points 11 months ago

pansophic/rocket-3B

Model Card 🤗 ↗

Might not work in LM Studio

[–] Creative_Bottle_3225@alien.top 1 points 11 months ago

What is it for?

[–] Creative_Bottle_3225@alien.top 1 points 11 months ago

I think that the difference in GB between the 7B and 13B models is mainly due to the amount of data stored and available to be queried.
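As an aside, the size gap between 7B and 13B files mostly tracks parameter count times quantization width, not a separate data store. A minimal sketch of the arithmetic (the 4.85 bits-per-weight figure for Q4_K_M is an approximation; exact file sizes vary with the tensor mix and metadata overhead):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: parameter count x average bits per weight,
    converted to gigabytes. Ignores small metadata overhead."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed average of ~4.85 bits/weight for a Q4_K_M quantization.
print(round(approx_gguf_size_gb(7e9, 4.85), 1))   # 7B model -> about 4.2 GB
print(round(approx_gguf_size_gb(13e9, 4.85), 1))  # 13B model -> about 7.9 GB
```

This is why a 13B quant is roughly twice the download of a 7B quant at the same quantization level.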

[–] Creative_Bottle_3225@alien.top 1 points 11 months ago

Do you have to download 71 GB just to try it?! :-)

[–] Creative_Bottle_3225@alien.top 1 points 11 months ago (2 children)

When you do serious historical research, time passes so quickly that you don't even notice. Unfortunately, it takes time for the model to give you the information it has stored. However, even with effort, I achieved notable results, for example the translation of a text in runes, something that is difficult today even for specialists. This is why I say that we need to spend more time making the algorithm understand its purpose. If we limit ourselves to making models just to play with, it's a shame. I have tried many, from the most recent to the fastest. Lately I've been having the best results running utopia-13b.Q4_K_M.gguf, which offers me decent speed, a passionate tone with friendly dialogue and, above all, a determined effort to give accurate results.


 

Dear friends,

I decided to write because many of you are active on Hugging Face with your AI models.

I have been testing AI models continuously, 8 to 10 hours a day, for a year now. And when I say I test the models, I don't mean the way many do on YouTube to get likes, with trivia-style tests: tell me the capital of Australia, or tell me who the tenth president of the United States was. Those tests depress me as much as they make me smile. Forty years ago my Commodore VIC-20 already answered such questions in BASIC!

I test models very seriously. Being a history buff, my questions are strongly oriented towards history, culture, geography and literature. So my tests try in every way to extract answers and summaries from the AI models.

Now, I note with great sadness that models are trained on a lot of data, but not enough attention is paid to ensuring that the algorithm can extract that data and return it to the user in a faithful and coherent manner.

Now, if we want to use models just to play with creative invented stories or poetry, everything is fine; but when we get serious, the open-source models to be installed locally seem very insufficient to me.

Furthermore, I note that the models are rarely accompanied by good configuration data or presets, which the user often has to work out through various calibrations.

Another issue: the models are always generic, and there is no table of models with their actual capabilities.

More guidance would be needed, for example: this model is good for medicine, this one has been trained on historical data, and so on.

Meanwhile, we find ourselves searching Hugging Face in an almost haphazard manner, not to say in total disarray.

In plain words, I want to tell you: since you work hard, you should also ensure that the models, in addition to being filled with data, are able to use that data and give it to the user.

So let's take a step forward and make real progress.

Claudio from Italy

I tried the model. Since I do a lot of historical research, I felt like checking it out. I was very disappointed. 1) It answers coldly and succinctly, often getting irritable. 2) Even simple answers were 99 percent wrong. For example, I asked questions about Hannibal during the Punic War in Spain; the results were demoralising.

Hello, greetings from Italy. I would like to ask: is there a 7B or 13B model with a good historical dataset to study with?
