this post was submitted on 26 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
founded 1 year ago
you are viewing a single comment's thread
My perspective as a Fortune 500 IT solutions architect... why would I spend a few million dollars and a year of project time to build out local infrastructure that'll already be outdated by the time it's installed, when I can just hand my developers and data team permissions on Azure to be able to immediately access the same or better resources for a fraction of the cost? Scale is value, and cloud service providers will always have far greater scale.
That's probably the argument for all cloud architecture.
Long-term cost and risk might be persuasive, but that hasn't swayed IT managers thus far for non-LLM specific infrastructure. I am guessing it won't do much to sway future IT managers.
I'm also assuming Azure will let you get very custom with the LLMs you can train via their services.
This gives me something to think about.
It's 2023. What are you talking about? Where have you been?
Not everyone uses the cloud; I still know people who run and manage physical clusters. This is mostly true for institutions such as hospitals, universities, etc. Using cloud solutions in these cases would not just add external dependencies but also much higher costs, for instance, when handling and processing hundreds or thousands of terabytes of critical or scientific data.
This is the right answer. Unless the LLM infrastructure and the model itself are your competitive advantage, time to market and simplicity are going to win every time. One good thing to come out of the OpenAI fiasco is that abstraction layers are likely to become more important.
Someone has not gone and sat down with the legal department.
Depending on your business, an LLM that can tell stories, do porn, do math, and answer logic problems might be wasteful if all you want is to supercharge customer service by shoving in your own documentation. A thinner model might be cheaper to run at Fortune 500 CS scale than anything Azure is offering.
Without doing the math (and you need enough local hardware experience to do the math), it's nearly impossible to make any sort of cost-benefit analysis.
Cloud for scale is NOT value; cloud for scale is COST. Value is an asset and the depreciation on that asset. If you aren't tracking your revenue vs. expenses in the cloud on a week-over-week basis, if you don't know your cloud cost per user or per customer (and those are going to be different depending on what you do and how you do it), there is zero correlation between cloud and value. The free money is gone, the belt is only going to get tighter, and money needs to be in every metric...
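The cost-benefit math the two comments above allude to can be sketched as a simple break-even calculation. All dollar figures below are hypothetical assumptions for illustration, not real Azure or hardware pricing, and the model deliberately ignores staffing, power, and hardware refresh cycles:

```python
# Break-even sketch: when does a cumulative cloud bill overtake a one-time
# on-prem build-out plus its monthly running cost? Figures are made up.

def break_even_months(capex: float, onprem_monthly: float, cloud_monthly: float) -> float:
    """Months until cumulative cloud spend exceeds on-prem capex plus opex."""
    if cloud_monthly <= onprem_monthly:
        # Cloud is never more expensive month-to-month; on-prem never breaks even.
        return float("inf")
    return capex / (cloud_monthly - onprem_monthly)

# Hypothetical numbers: $3M build-out, $40k/mo to run it, vs. a $120k/mo cloud bill.
months = break_even_months(3_000_000, 40_000, 120_000)
print(f"Break-even after {months:.1f} months")  # 37.5 months, roughly three years
```

Under these (invented) numbers, on-prem only pays off after about three years, which is why the "outdated by the time it's installed" argument earlier in the thread carries weight either way: the answer depends entirely on the inputs, which is the commenter's point about actually doing the math.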
Yeah, that might fit in the US, but not in Europe. Dependencies can lead to problems, especially when there's a conflict. I would not want to run important infrastructure that depends solely on US services.
All major cloud providers have data centers in Europe.
You are right. But if you have a Chinese customer, for example, different problems can come up, like with NVIDIA and GPUs. Independence is key for a lot of players.