this post was submitted on 12 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I know that vLLM and TensorRT can be used to speed up LLM inference. I'm trying to find other tools that do similar things so I can compare them. Do you have any suggestions? Here's what I have so far (a quick vLLM sketch follows the list):

vLLM: speeds up inference

TensorRT: speeds up inference

DeepSpeed: speeds up the training phase
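
For reference, this is a minimal sketch of how vLLM's offline batched inference is typically used; the model name, prompts, and sampling settings are placeholders, so check the vLLM docs for the current API:

```python
# Minimal vLLM sketch (assumes `pip install vllm` and a CUDA GPU;
# the model name below is just an example checkpoint).
from vllm import LLM, SamplingParams

prompts = [
    "Explain what paged attention is in one sentence.",
    "List two ways to speed up LLM inference.",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# vLLM loads the weights once and batches prompts internally;
# continuous batching + PagedAttention are where the speedup comes from.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```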

[–] OldAd9530@alien.top 1 points 10 months ago (1 children)

Do you have any idea why MLC isn't more widely used? It seems so much faster than GGUF or ExLlama, yet everyone defaults to those.

[–] mcmoose1900@alien.top 1 points 10 months ago

That's an excellent question.