Passing it proprietary code is probably at the top of the list.
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Cost is really the main issue. You can train a local LLM, or you can fine-tune ChatGPT as well. I wouldn't be surprised if someone is already making a custom GPT for helping with Unity or Unreal Engine projects.
For privacy, companies with money will use a private instance on Azure. It's like 2-3 times the cost, but your data is safe because you have a contract with MS to keep it safe and private, with large financial penalties if it isn't.
Also, running an LLM locally isn't zero cost; it depends on the electricity price in your area. GPUs consume a LOT of power. The 4090's TDP is 450 watts.
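To put that in perspective, here's a back-of-the-envelope sketch of the monthly electricity cost of running a GPU for local inference. The 450 W figure is the 4090's rated TDP; the hours per day and the $/kWh price are illustrative assumptions you should replace with your own numbers.

```python
def monthly_gpu_cost(watts: float, hours_per_day: float,
                     price_per_kwh: float, days: int = 30) -> float:
    """Electricity cost of running a GPU at `watts` for a month."""
    kwh = watts / 1000 * hours_per_day * days  # energy used in kWh
    return kwh * price_per_kwh

# Example (assumed numbers): 450 W card, 8 h/day, $0.15/kWh
cost = monthly_gpu_cost(450, 8, 0.15)
print(f"${cost:.2f} per month")  # 108 kWh -> $16.20
```

Not huge for a hobbyist, but it adds up if the card runs flat out around the clock, and it's before you amortize the price of the GPU itself.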
Azure lets you opt out of content logging for its OpenAI services, though you need to apply for this option. This is what most larger companies, like my employer, are doing.