this post was submitted on 21 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
I'm a little surprised by the mention of chatcode.py, which was merged into chat.py almost two months ago. Also, it doesn't really require flash-attn-2 to run "properly"; it just runs a little better that way. But it's perfectly usable without it. Great article, though. Thanks. :)
Thanks for your excellent library! That makes sense, because I started writing this article about two months ago (chatcode.py is still mentioned in the README.md, by the way). I had very low throughput using ExLlamaV2 without flash-attn-2. Do you know if that's still the case? I've updated these two points, thanks for your feedback.
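Since the throughput difference comes down to whether flash-attn-2 is installed, a quick sanity check before benchmarking can tell you which attention path you're actually on. This is just a sketch using the standard library; the `has_flash_attn` helper is a name made up for illustration, not part of ExLlamaV2's API.

```python
import importlib.util

def has_flash_attn() -> bool:
    """Return True if the flash-attn package is importable.

    Without it, ExLlamaV2 falls back to a slower attention
    implementation, which can explain lower throughput.
    """
    return importlib.util.find_spec("flash_attn") is not None

if __name__ == "__main__":
    print("flash-attn available:", has_flash_attn())
```

Running this before and after `pip install flash-attn` makes it easy to confirm which configuration a given benchmark number corresponds to.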