this post was submitted on 09 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Hey All,

What does making a model prediction look like in your current projects? Are you building a model for a web app and running on-demand inference? Or are you working on a research project or analysis that requires making hundreds of thousands to millions of predictions all at once?

I'm currently at a crossroads with a developer tool I'm building and am trying to figure out which types of inference workflows I should focus on. A few weeks back I posted a tutorial on running Mistral-7B on hundreds of GPUs in the cloud in parallel. A decent number of people said batch inference is relevant to them, but over the last couple of days I've been running into more and more developers building web apps that don't need to make many predictions at once. If you were me, where would you direct your focus?
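For context, the two workflows differ mainly in how calls are dispatched. Here's a minimal sketch of the contrast; `predict` is a hypothetical stand-in for a real model call (e.g. an HTTP request to an inference endpoint), not any specific library's API:

```python
from concurrent.futures import ThreadPoolExecutor

def predict(prompt: str) -> str:
    # Hypothetical stand-in for a real model call
    # (in practice, a request to a GPU-backed inference server).
    return f"completion for: {prompt}"

# On-demand inference: one request at a time, as a web app would issue it.
single = predict("What is batch inference?")

# Batch inference: fan a large list of inputs out across workers
# (in practice, across many GPUs or server replicas).
prompts = [f"prompt {i}" for i in range(1000)]
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(predict, prompts))

print(len(results))  # 1000
```

The tooling trade-off is roughly this: the on-demand path cares about per-request latency, while the batch path cares about throughput and scheduling across many workers.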

Anyways, I'm kinda rambling, but I'd love to know what you're all working on and get some advice on the direction I should pursue.

[–] AdamDhahabi@alien.top 1 points 1 year ago (2 children)

I think batched inference is a must for companies that want to put an on-premise chatbot in front of their users. This is a use case many are busy with at the moment. llama.cpp added support for batched inference just two weeks ago, so I don't have hands-on experience with it yet.
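For anyone wanting to try it, llama.cpp's built-in HTTP server exposed this through parallel decoding slots around that time. A launch sketch, assuming a locally built server binary and a downloaded GGUF model (the model filename is a placeholder; flag names are as of late 2023, so check `./server --help` for your build):

```shell
# Launch llama.cpp's built-in HTTP server with parallel decoding slots.
#   -c   total context size, shared across all slots
#   -np  number of parallel slots, i.e. users served concurrently
#   -cb  enable continuous batching across those slots
./server -m models/mistral-7b.Q4_K_M.gguf -c 4096 -np 4 -cb
```

With this setup, concurrent chat requests share one model instance instead of each spinning up their own, which is what makes the on-prem chatbot use case practical on a single box.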

[–] Ok_Post_149@alien.top 1 points 1 year ago

Thanks for this feedback. What is your definition of an on-prem chatbot? Hosted on their own physical infrastructure?

[–] matkley12@alien.top 1 points 1 year ago

Does llama.cpp support batch inference on CPU?