I'm using Mistral OpenOrca and GPT4All, which claim to be private. I opted out of sharing my conversations for privacy reasons, but I don't think this actually holds. See my conversation in the attached picture. Any feedback is appreciated; I'd like to hear from other people.

[–] damian6686@alien.top 1 points 11 months ago (1 children)

I agree on testing with Wireshark, great suggestion! But how can you know it doesn't know anything about its environment? This LLM is a 4 GB file, and a network scan only needs a few lines of code to return your entire system's network configuration. How does it know how to automatically run, download updates, store them, and install them? Why are there updates in the first place? Any time you get something for free, chances are you're giving away your data in return. Nothing is free.
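
To give a sense of how little code that takes, here is a minimal sketch that dumps the local network configuration (it assumes the third-party psutil package is installed; nothing here is specific to any LLM app):

```python
# Minimal sketch: enumerating the local network configuration in a few lines.
# Assumes the third-party "psutil" package is installed (pip install psutil).
import socket
import psutil

print("hostname:", socket.gethostname())
for iface, addrs in psutil.net_if_addrs().items():
    for addr in addrs:
        # One line per address bound to each interface.
        print(f"{iface}: {addr.address} netmask={addr.netmask}")
```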

[–] ----Val----@alien.top 1 points 11 months ago

but how can you know it doesn't know anything about its environment? This LLM is a 4 GB file, and a network scan only needs a few lines of code to return your entire system's network configuration.

Though HF models can contain code that gets executed on load, this is usually heavily scrutinized by the community. Plus, not all model formats are equally flexible.
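
As a toy illustration of why pickle-based checkpoints deserve that scrutiny (this is not taken from any real model, it just shows the standard-library mechanism):

```python
# Toy illustration: unpickling can call arbitrary callables.
# This is not a model file; it only demonstrates why pickle-based
# checkpoints are risky to load from untrusted sources.
import pickle

class Payload:
    def __reduce__(self):
        # Tells pickle to call print(...) during loading; a malicious file
        # could name any callable here (os.system, subprocess.run, ...).
        return (print, ("this ran just because the file was loaded",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message -- code executed simply by loading
```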

For example, GGUF files are essentially all weights and metadata, with no executable code. That said, it isn't impossible that some exploit results in remote code execution, so the risk isn't zero.
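
A rough sketch of peeking at a GGUF header (the filename is a placeholder): the file starts with the ASCII magic "GGUF" and a version number, and everything after that is key/value metadata plus tensor data rather than anything an interpreter would execute.

```python
# Rough sketch: reading the GGUF header. The format begins with the ASCII
# magic "GGUF" followed by a little-endian uint32 version; the rest is
# metadata and tensor data, not executable code.
import struct

path = "model.Q4_K_M.gguf"  # placeholder path to a local GGUF file
with open(path, "rb") as f:
    magic = f.read(4)
    version = struct.unpack("<I", f.read(4))[0]

print("magic:", magic)      # expected: b'GGUF'
print("version:", version)  # e.g. 2 or 3
```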

It is also important to consider that the people releasing these models, be it the original authors or The Bloke who quantizes them, risk their grants and research funding if they decide to act maliciously.

How does it know how to automatically run, download updates, store them, and install them?

That's up to GPT4All, which is essentially just a wrapper around llama.cpp. You are conflating a local LLM with the frontend used to interact with it.
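
To make the distinction concrete, here is a minimal sketch of loading a GGUF file directly with the llama-cpp-python bindings (the filename is a placeholder for whatever model you have on disk); the model itself just runs locally, and any update or download behaviour you observe belongs to the frontend wrapped around it:

```python
# Minimal sketch: running a local GGUF model through llama-cpp-python,
# with no frontend and no update mechanism involved.
from llama_cpp import Llama

# Placeholder path -- point this at any GGUF file already on disk.
llm = Llama(model_path="./mistral-7b-openorca.Q4_K_M.gguf", n_ctx=2048)

out = llm("What is the capital of France?", max_tokens=32)
print(out["choices"][0]["text"])
```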