This is not true; I have split two separate LLM models partially across a 4090 and a 3080 and had both running inference at the same time. This can be done in oobabooga's repo with just a little tinkering, roughly along the lines of the sketch below.
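A minimal sketch of the general idea, not my exact tinkering: Hugging Face transformers/accelerate (which oobabooga's Transformers loader builds on, exposed through its per-GPU memory settings) can split one model's layers across both cards via `max_memory`, and you can load two models that way and query them concurrently. Model names and the memory caps below are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_split(model_name, mem_gpu0, mem_gpu1):
    # device_map="auto" lets accelerate place layers on both GPUs,
    # capped by the per-device memory budget passed in max_memory.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        device_map="auto",
        max_memory={0: mem_gpu0, 1: mem_gpu1},  # GPU 0 = 4090, GPU 1 = 3080
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    return model, tokenizer

# Give each model a slice of both cards, e.g. model A mostly on the 4090
# and model B mostly on the 3080; each can then serve requests from its
# own thread or process at the same time.
model_a, tok_a = load_split("model-a-placeholder", "14GiB", "4GiB")
model_b, tok_b = load_split("model-b-placeholder", "8GiB", "5GiB")
```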