this post was submitted on 13 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

 

I have an 8GB M1 MacBook Air and a 16GB MBP (that I haven't turned in for repair) that I'd like to run an LLM on, to ask questions and get answers from the notes in my Obsidian vault (100s of markdown files). I've been lurking this subreddit, but I'm not sure whether I could run LLMs under 7B with 1-4GB of RAM, or whether those models would be too low quality.

artisticMink@alien.top 2 years ago

Quick answer: No.

Longer answer: It depends. Passing the whole vault as context won't work; it's far too much data, among other problems. What you could do is have a model turn your question into a SQL query against a database built from your notes, then either return the results directly or have another model (a quantized 7B, say) interpret them.
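
For a rough idea of what that could look like, here's a minimal sketch assuming llama-cpp-python, a local GGUF model, and an SQLite FTS5 index over the vault. The model path, prompts, and table layout are placeholders for illustration, not a tested setup:

```python
# Sketch of the "one model writes SQL, a second pass interprets the hits" idea.
# Assumes llama-cpp-python and a local GGUF model; the model path, prompts and
# table layout are placeholders, not a tested setup.
import sqlite3
from pathlib import Path

from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048, verbose=False)

# 1. Index the Obsidian vault into an SQLite full-text index.
db = sqlite3.connect("notes.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS notes USING fts5(path, body)")
for md in Path("~/ObsidianVault").expanduser().rglob("*.md"):
    db.execute("INSERT INTO notes VALUES (?, ?)", (str(md), md.read_text(errors="ignore")))
db.commit()

def ask(question: str) -> str:
    # 2. Have the model turn the question into a query over the notes table.
    sql_prompt = (
        "Table notes(path, body) is an FTS5 full-text index of markdown notes.\n"
        f"Write one SQLite query that finds notes relevant to: {question}\nSQL:"
    )
    sql = llm(sql_prompt, max_tokens=128, stop=[";"])["choices"][0]["text"].strip() + ";"

    # 3. Run the query (in practice you'd validate/retry the generated SQL) and
    #    let the model answer from the retrieved snippets instead of the whole vault.
    rows = db.execute(sql).fetchmany(5)
    context = "\n---\n".join(body[:1000] for _, body in rows)
    answer_prompt = f"Using only these notes:\n{context}\n\nAnswer the question: {question}\n"
    return llm(answer_prompt, max_tokens=256)["choices"][0]["text"].strip()

print(ask("What did I note about quantized 7B models?"))  # hypothetical example question
```

Whether a quantized 7B plus the index fits comfortably next to everything else on the 8GB machine is a separate question; the 16GB MBP is the safer bet for that.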

But generally, I see the 'AI assistant' idea come up here regularly, and the question is whether you want to rely on an LLM that will just make things up when it accesses your notes. I guess that depends on how important the subject is.