this post was submitted on 26 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
The state of the art in LLM training and architecture is likely to improve over the next year alone, and almost certainly over the next 2 or 3. It's also reasonable to expect cheaper hardware for running LLMs, since all the major chip makers are working on it.
If you don't need a local LLM now, and the case for one is mainly long-run cost savings, it probably makes sense to wait and build one once we're better at it.
Collating training data in the meantime probably does make sense: record as much as you can, encourage employees to document more, etc. That data will be useful even without AI, and as the technology improves it is likely to become more valuable every year. It also takes time to produce, and no one else can do it for you.
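As a minimal sketch of what "collating" might look like in practice: gather internal plain-text documents into a single JSONL file, one record per document, with the source path kept as metadata. The function name, directory layout, and record schema here are all illustrative assumptions, not any particular pipeline's format — though JSONL with a `"text"` field is what many fine-tuning tools accept.

```python
import json
from pathlib import Path

def collate_documents(source_dir, out_path):
    """Collect plain-text documents under source_dir into one JSONL file,
    one record per document, keeping the file path as metadata.
    Returns the number of records written. (Illustrative sketch.)"""
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        # Walk the tree deterministically so re-runs produce the same order.
        for path in sorted(Path(source_dir).rglob("*.txt")):
            text = path.read_text(encoding="utf-8", errors="replace")
            if not text.strip():
                continue  # skip empty files
            out.write(json.dumps({"source": str(path), "text": text}) + "\n")
            records += 1
    return records
```

Even something this simple pays off later: once the data sits in one consistent format, switching fine-tuning or retrieval tooling down the road is cheap.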