this post was submitted on 21 Nov 2023
LocalLLaMA
How many users do you have? If you've been keeping logs of your GPT-4 inputs/outputs, you can probably use them to fine-tune your own model that performs similarly.
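As a rough sketch (assuming your logs are stored one JSON object per line with hypothetical `prompt`/`completion` fields), converting them into the chat-style JSONL that most open-source fine-tuning stacks accept can be as simple as:

```python
import json

# Hypothetical input: one JSON object per line containing the prompt you
# sent to GPT-4 and the completion it returned, e.g.
#   {"prompt": "...", "completion": "..."}
IN_PATH = "gpt4_logs.jsonl"       # assumed log file name
OUT_PATH = "finetune_data.jsonl"  # chat-style JSONL for supervised fine-tuning

with open(IN_PATH) as src, open(OUT_PATH, "w") as dst:
    for line in src:
        record = json.loads(line)
        # Reshape each logged pair into the "messages" chat format that
        # common SFT tooling (TRL, Axolotl, etc.) can ingest.
        example = {
            "messages": [
                {"role": "user", "content": record["prompt"]},
                {"role": "assistant", "content": record["completion"]},
            ]
        }
        dst.write(json.dumps(example) + "\n")
```

The exact field names and output schema depend on your logging setup and the fine-tuning framework you pick, so treat this as a starting point rather than a drop-in script.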
The biggest issue you're going to have is probably hardware.
LLMs are not cheap to run, and if you start needing multiple of them to replace OpenAI, your bill is going to be pretty significant just to keep the models online.
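To put very rough numbers on it (illustrative assumptions only: an 80GB-class GPU instance at around $3/hour, running 24/7):

```python
# Back-of-the-envelope cost of keeping models online around the clock.
# All figures are illustrative assumptions, not real quotes.
hourly_rate_usd = 3.0    # assumed cloud price for one 80GB-class GPU
gpus_needed = 2          # assumed: two models served concurrently
hours_per_month = 24 * 30

monthly_cost = hourly_rate_usd * gpus_needed * hours_per_month
print(f"~${monthly_cost:,.0f}/month just to keep the GPUs up")  # ~$4,320
```

Your actual rate depends heavily on provider, region, and whether you buy reserved capacity or run your own hardware, but the always-on multiplier is what makes the bill add up.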
It's also going to be tough to maintain all the infra you'll need without a full-time devops/MLOps person.