Gnodax@alien.top · 1 point · 1 year ago

Those all sound like typical symptoms of feeding too much generated content back into the context buffer. Limit the dynamic part of your context to about 1k tokens; at least that's been my experience using 13B models as chatbots. With exllama you just add "-l 1280" to cap the sequence length. Other systems should offer similar functionality.
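
For illustration, here's a minimal sketch of capping the dynamic part of the prompt to a token budget. The `tokenizer.encode` call and the message format are assumptions, not exllama's actual API, so adapt to whatever your stack provides:

```python
def trim_history(messages, tokenizer, budget=1024):
    """Keep only the most recent messages that fit within `budget` tokens.

    messages:  list of strings, oldest first
    tokenizer: assumed to expose encode(str) -> list of token ids
    """
    kept, used = [], 0
    for msg in reversed(messages):        # walk newest to oldest
        n = len(tokenizer.encode(msg))    # token count for this message
        if used + n > budget:
            break                         # older messages get dropped
        kept.append(msg)
        used += n
    return list(reversed(kept))           # restore chronological order
```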

If you want to get fancy, you can fill the rest of the context with whatever backstory you want.
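
Continuing the sketch above, the static backstory goes in front and the trimmed history fills whatever budget remains. The names and the 1280 cap are just placeholders matching the "-l 1280" example, not anything a particular framework requires:

```python
MAX_SEQ_LEN = 1280      # matches the -l 1280 example above
DYNAMIC_BUDGET = 1024   # budget for recent chat turns only

def build_prompt(backstory, messages, tokenizer):
    """Static backstory first, then as many recent turns as fit the budget."""
    history = trim_history(messages, tokenizer, budget=DYNAMIC_BUDGET)
    return backstory + "\n" + "\n".join(history)
```

The point of the split is that the backstory stays fixed from turn to turn, so only the recent-history portion ever churns; the model keeps its grounding even as old generated text scrolls out of the window.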