this post was submitted on 24 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
I agree on testing with Wireshark, great suggestion! But how can you be sure it doesn't know anything about its environment? The LLM itself is just a 4 GB file, yet a network scan only takes a few lines of code to return your entire system's network configuration. How does it know how to automatically download updates, store them, and install them? Why are there updates in the first place? Any time you get something for free, chances are you're giving away your data in return. Nothing is free.
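To illustrate the "few lines of code" point: Python's standard library alone can read the host name and enumerate network interfaces from any unprivileged process. This is a sketch of what a *wrapper application* could do; a weights file on its own cannot execute anything:

```python
import socket

# Hostname and the list of network interfaces, readable by any
# unprivileged process -- no exploit required.
hostname = socket.gethostname()
interfaces = socket.if_nameindex()  # e.g. [(1, 'lo'), (2, 'eth0')]

print(hostname)
print(interfaces)
```

Watching this kind of activity from the outside (e.g. with Wireshark) is exactly why the packet-capture suggestion is sound.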
Though HF models can contain code to be executed, this is usually heavily scrutinized by the community. Plus, not all formats are equally flexible.
For example, the GGUF format is essentially all weights, with no executable code. That said, it isn't impossible that some exploit results in remote code execution, so the risk isn't zero.
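As a cheap sanity check before loading anything, you can inspect a downloaded file's header: GGUF files begin with the 4-byte magic `GGUF` followed by a little-endian uint32 format version. A minimal sketch (the function name is my own):

```python
import struct

def looks_like_gguf(path):
    """Return True if the file starts with a plausible GGUF header:
    the 4-byte magic b'GGUF' followed by a little-endian uint32 version."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False
    (version,) = struct.unpack("<I", header[4:8])
    return version >= 1
```

This only confirms the container format, not the absence of every possible exploit, but it cheaply rejects files mislabeled as GGUF (e.g. pickled checkpoints, which *can* embed executable code).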
It's also worth considering that the people releasing these models, be it the original authors or The Bloke, who quantizes them, risk their grants and research funding if they decide to act maliciously.
That's up to GPT4All, which is essentially just a wrapper around llama.cpp. You are conflating a local LLM with the frontend used to interact with it.