this post was submitted on 13 Nov 2023
LocalLLaMA

Hello,

With the new GPT-4 and its 128k context window, my question is: is there still an advantage to running a local LLM?

  • Cost Considerations: Is there a cost benefit to running a local LLM compared to OpenAI?

  • Specialized Training: Is it possible to train the model for specific tasks, similar to 'Code Llama - Python', but perhaps for areas like 'Code Llama - Unreal Engine'?

I understand that for some applications, avoiding OpenAI's content restrictions can be a plus. However, when it comes to using an LLM as a coding assistant, are there any specific advantages to running it locally?

[–] Chaosdrifer@alien.top 1 points 10 months ago (1 children)

Cost is really the main issue. You can fine-tune a local LLM, and you can fine-tune OpenAI's models as well. I wouldn't be surprised if someone is already building a custom GPT to help with Unity or Unreal Engine projects.

For privacy, companies with money will use a private instance through Azure. It costs roughly 2-3 times as much, but your data is safe: you have a contract with Microsoft to keep it private, with large financial penalties if it isn't.

Also, running an LLM locally isn't zero-cost; it depends on electricity prices in your area. GPUs consume a LOT of power: the RTX 4090 draws around 450 watts under load.
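To put a rough number on that, here is a minimal back-of-the-envelope sketch. The power draw, daily usage, and electricity price are all illustrative assumptions, not measurements; plug in your own figures.

```python
# Rough electricity-cost estimate for running a local LLM on one GPU.
# All default values below are assumptions for illustration only.

def monthly_electricity_cost(gpu_watts=450, hours_per_day=8, price_per_kwh=0.30):
    """Return the estimated 30-day electricity cost for the GPU alone."""
    kwh_per_month = gpu_watts / 1000 * hours_per_day * 30  # kWh used in 30 days
    return kwh_per_month * price_per_kwh

# Example: a 450 W GPU running 8 h/day at $0.30/kWh
cost = monthly_electricity_cost()
print(f"~${cost:.2f}/month")  # ~$32.40/month under these assumptions
```

Even at a few tens of dollars a month, that can still undercut heavy API usage, but it's far from free, and it ignores the up-front hardware cost and the rest of the system's power draw.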

[–] krazzmann@alien.top 1 points 10 months ago

Azure lets you opt out of content logging for its OpenAI services. You have to apply for this option. This is what most larger companies, like my employer, do.