this post was submitted on 27 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


As requested, this is the subreddit's second megathread for model discussion. This thread will now be hosted at least once a month to keep the discussion updated and help reduce identical posts.

I also saw that we hit 80,000 members recently! Thanks to every member for joining and making this happen.


Welcome to the r/LocalLLaMA Models Megathread

What models are you currently using and why? Do you use 7B, 13B, 33B, 34B, or 70B? Share any and all recommendations you have!

Examples of popular categories:

  • Assistant chatting

  • Chatting

  • Coding

  • Language-specific

  • Misc. professional use

  • Role-playing

  • Storytelling

  • Visual instruction


Have feedback or suggestions for other discussion topics? All suggestions are appreciated and can be sent to modmail.

^(P.S. LocalLLaMA is looking for someone who can manage Discord. If you have experience modding Discord servers, your help would be welcome. Send a message if interested.)


Previous Thread | New Models

[–] silenceimpaired@alien.top 1 points 11 months ago (1 children)

It’s only been a day, but have you changed models? I find this model misspells a lot with the GGUF I downloaded.

[–] ReMeDyIII@alien.top 1 points 11 months ago

I haven't changed models yet, but Wolfram released a good rankings list that makes me want to test Tess-XL-v1.0-120b and Venus-120b.

I'm using lzlv GPTQ via ST's Default + Alpaca prompt and haven't had misspelling issues. Wolfram did notice misspelling issues when using the Amy preset (e.g., "sacrficial"), so maybe try switching the preset?