This is not true. I have split two separate LLM models partially across a 4090 and a 3080 and had both running inference at the same time.
This can be done in oobabooga's repo with just a little tinkering; a rough sketch of the idea is below.
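For anyone wondering what that split looks like in code, here's a minimal sketch using Hugging Face transformers + accelerate rather than oobabooga's actual code. The model names and per-GPU memory caps are illustrative, and it assumes two CUDA devices are visible (0 = the 4090, 1 = the 3080):

```python
# Sketch: two models, each partially sharded across two GPUs,
# running inference concurrently. Requires transformers + accelerate.
import threading
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_split(name, mem):
    tok = AutoTokenizer.from_pretrained(name)
    # max_memory caps what each card may hold, so accelerate shards
    # the layers across both GPUs instead of filling one card.
    model = AutoModelForCausalLM.from_pretrained(
        name, device_map="auto", max_memory=mem
    )
    return tok, model

# Illustrative model names and memory caps -- adjust to your cards.
tok_a, model_a = load_split("meta-llama/Llama-2-7b-hf",
                            {0: "10GiB", 1: "5GiB"})
tok_b, model_b = load_split("meta-llama/Llama-2-13b-hf",
                            {0: "12GiB", 1: "4GiB"})

def run(tok, model, prompt):
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))

# Both models generate at the same time, each spread over both cards.
t1 = threading.Thread(target=run, args=(tok_a, model_a, "Hello from A:"))
t2 = threading.Thread(target=run, args=(tok_b, model_b, "Hello from B:"))
t1.start(); t2.start()
t1.join(); t2.join()
```

Inside the webui itself, the per-card split is exposed through the --gpu-memory option (one value per GPU), which is the "little tinkering" part.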