this post was submitted on 30 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

I'm a data scientist working in consulting (EU), and I would like to push my company to invest more in open-source LLMs over closed ones (OpenAI...).

I'm constantly amazed by the progress of open-source LLMs shown in this sub. The increasing efficiency of models, coupled with the expanding open-source LLM ecosystem, makes these models more and more competitive.

I keep thinking that there's a market for custom LLMs, notably for privacy-sensitive companies like banks and insurers.

Any insights?

top 11 comments
[–] Crypt0Nihilist@alien.top 1 points 9 months ago

Yes, for a few reasons. Companies might want models that run offline, are fine-tuned to their industry or company, and have lower operating costs; those are the ones that come to mind.

[–] Glat0s@alien.top 1 points 9 months ago

Yes. We are currently planning AI projects and implementations at my company. We are handling sensitive data, which requires us to do it locally. We also want to set up a team and build competence and experience in ML/LLMs. For our current use cases we don't need a "super intelligence" or "the best" LLM on the market! RAG with smaller models is totally sufficient for us.
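For reference, the retrieval half of a RAG setup like this can be sketched in a few lines. This is a toy illustration only: it uses bag-of-words cosine similarity as a stand-in for a real embedding model, and the documents and query are made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a sentence-embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Refunds are processed within 5 business days.",
]
context = retrieve("how long do refunds take", docs)

# The retrieved context is then prepended to the prompt for a small local model.
prompt = (
    "Answer using only this context:\n"
    + "\n".join(context)
    + "\n\nQ: how long do refunds take"
)
```

In a real deployment you would swap `embed` for an embedding model and the ranking loop for a vector store; the overall retrieve-then-prompt shape stays the same.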

[–] artelligence_consult@alien.top 1 points 9 months ago

> We are handling sensitive data, which requires us to do it locally

Likely this is incompetence on your end, because - cough - OpenAI models are available through Azure and are fine even for medical data. It takes some reading, but most "confidential, cannot use cloud" claims amount to "too lazy to read the contracts". There are some edge cases, but heck, you can even get government use approved on Azure.

[–] a_beautiful_rhind@alien.top 1 points 9 months ago

Aren't there people selling such services to companies here? Implementing RAG, etc.

[–] Mother-Ad-2559@alien.top 1 points 9 months ago

Not only is there a market; I'd say a substantial portion of data departments across the economy will dedicate significant resources to this. Once generic models become commonplace, specialized ones will be where you edge out the competition.

[–] AdTall6126@alien.top 1 points 9 months ago

Yes, there is. Just to mention one of many scenarios:

Build a chatbot on top of an LLM, and learn how to fine-tune it on the customer's data. Voila! Your customer just acquired a new employee to handle customer support or internal support.

[–] LuluViBritannia@alien.top 1 points 9 months ago

The entire market will eventually use local LLMs; it's as simple as that.

Online services are never an ideal solution for any business. It's not just about privacy.

- The owners can do whatever they want, so if they change settings or even simply shut down, you're screwed.

- Online services are shared infrastructure, so under high demand they bottleneck. Just like highways, expect major slowdowns in your work if the service you rely on happens to be saturated. And slowdown means financial loss.

- In case of internet issues, you're screwed.

- You have to pay for the service, which can get freaking expensive depending on how much you use it.

Local LLMs have none of these issues. And more than that:

- While general intelligence like ChatGPT or Claude is incredible, it will never be enough for every use case. There will always be cases where you need a more specialized alternative, even if less intelligent.

- The gap between the big ones and local LLMs is frankly not that wide. I'm not going to say "they're as intelligent as ChatGPT!", but as a matter of fact, everything I was able to do with ChatGPT, I managed with a local LLM as well, or even better. Analysing code and rewriting or completing it? Managed with a 7B. Writing creative short stories? Easy, even with a 7B.

- An online service has a fixed set of abilities; the devs can update it, but you have no guarantee they will. In the case of LLMs, context length matters so much! OpenAI did raise GPT's context length regularly, but what if they stop?

- Intelligence isn't the only thing that matters in an AI! Every LLM has its own default language style, and the big ones are hard to steer away from it. ChatGPT's answers, for example, are constantly very lengthy. With a local LLM, it's easier to steer; you can even force it to adopt a specific format.

[–] Temporary-Size7310@alien.top 1 points 9 months ago

Absolutely (I'm a data scientist too). I recently founded a company in ML/DL/AI and it's working pretty well.

There is so much room to build even with current models: RAG pipelines on private data, prototyping locally and training or fine-tuning on cloud instances, real-world applications mixing in other branches like CV, cinematography with SD, implementing liquid NNs with LLMs, and so on.

[–] survivingmonday@alien.top 1 points 9 months ago

Local LLMs are a gold mine for IT consultants right now.

[–] noco-ai@alien.top 1 points 9 months ago

OpenAI is also so overwhelmed with demand that their API "limits" are a complete joke. I have a tier 4 account and always hit server errors when running any kind of single-threaded application against their API, well before reaching any of their published limits. I get more TPS from my local systems than I was ever able to get from the OpenAI API.
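Transient server errors like these are typically papered over with exponential backoff and retry. A minimal, library-agnostic sketch (the `request_fn` callable here is a hypothetical stand-in for whatever API call you make; it is not part of any official SDK):

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry request_fn on transient errors with exponential backoff plus jitter.

    request_fn: any zero-argument callable that raises on a transient
    server/rate-limit error (hypothetical stand-in for an API call).
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RuntimeError:
            # Give up after the last attempt; otherwise wait and retry.
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Some official clients already retry internally, but wrapping single-threaded batch jobs like this keeps them alive through the intermittent 5xx responses described above.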

[–] TestPilot1980@alien.top 1 points 9 months ago

There is a market. Might be a small market, but there definitely is. Just like Linux!