this post was submitted on 18 Apr 2025
5 points (63.2% liked)

Asklemmy


What service can I use to ask questions about my database of blog posts? "Tell me everything you know about Grideon, my fictional character" etc

all 14 comments
[–] hedgehog@ttrpg.network 14 points 1 day ago (3 children)

Retrieval-Augmented Generation (RAG) is probably the tech you’d want. It works by building a searchable knowledge library from the documents you upload; when you ask a question, the library is queried and the most relevant passages are handed to the model along with your question.
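The retrieve-then-prompt flow can be sketched like this. Everything here is a toy stand-in: the "embedding" is just a bag-of-words vector and the documents are made up; a real RAG setup would use a proper embedding model and a vector store.

```python
# Toy RAG retrieval: rank stored documents against a question and
# build a prompt from the best match. The "embedding" here is a
# bag-of-words Counter -- a stand-in for a real embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "Grideon is a clockwork knight who guards the city archives.",
    "My favourite soup recipes for autumn evenings.",
]
# The index is built once, up front, from the uploaded documents.
index = [(embed(d), d) for d in docs]

def retrieve(question: str) -> str:
    q = embed(question)
    return max(index, key=lambda pair: cosine(q, pair[0]))[1]

best = retrieve("Tell me everything you know about Grideon")
prompt = f"Answer using this context:\n{best}\n\nQuestion: ..."
```

The final `prompt` is what gets sent to the LLM, so the model only ever sees the handful of passages that matched the question, not the whole blog archive.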

NotebookLM by Google is an off-the-shelf tool specialized in this, but you can also upload documents to ChatGPT, Copilot, Claude, etc., and get the same benefit.

If you self-host, Open WebUI with Ollama supports this, though it's far from the only option.

[–] stinky@redlemmy.com 2 points 18 hours ago

Thanks, I got NotebookLM working pretty quickly. I think RAG is what I'm after. I'll continue to look.

Dunno why this is downvoted because RAG is the correct answer. Fine tuning/training is not the tool for this job. RAG is.

[–] Danitos@reddthat.com 4 points 1 day ago (1 children)

OP can also use an embedding model and work with vector databases for the RAG.

I use Milvus (a vector DB engine; open source, can be self-hosted) and OpenAI's text-embedding-3-small for the embeddings (extremely cheap). There are also some very good open-weights embedding models on Hugging Face.
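The vector-DB side of this can be sketched as follows. This is a minimal in-memory stand-in, not Milvus itself, and the vectors are made-up toy values; in a real setup the vectors would come from an embedding model (such as an embeddings API) and the rows would live in Milvus or another vector DB.

```python
# Minimal sketch of a vector store: keep (vector, text) rows and answer
# queries by cosine similarity, the same shape of operation a vector DB
# like Milvus performs at scale.
import math

class TinyVectorDB:
    def __init__(self):
        self.rows = []  # list of (vector, text) pairs

    def insert(self, vector, text):
        self.rows.append((vector, text))

    def search(self, query_vec, top_k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        return sorted(self.rows,
                      key=lambda r: cos(query_vec, r[0]),
                      reverse=True)[:top_k]

db = TinyVectorDB()
db.insert([0.9, 0.1], "Grideon lore post")
db.insert([0.1, 0.9], "Soup recipe post")

# A query vector close to the first row retrieves the lore post.
hits = db.search([0.8, 0.2], top_k=1)
```

Swapping in a real embedding model just means replacing the hand-written vectors with the model's output for each blog post and for each question.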

[–] scrubbles@poptalk.scrubbles.tech 4 points 1 day ago (1 children)

I understand conceptually how these work, but I have a hard time figuring out how to get started. I have the model, I know what embeddings are, and RAG, and vector DBs, and then I have my SQL DB. I just don't know what the steps are.

Do you have any guides you recommend?

[–] Danitos@reddthat.com 2 points 1 day ago (1 children)

Milvus's documentation has a nice example: link. After this, you just need to use a persistent Milvus DB instead of the ephemeral one in the documentation.

Let me know if you have further questions.

[–] scrubbles@poptalk.scrubbles.tech 2 points 22 hours ago (1 children)

That's a great start! A lot of it depends on OpenAI, though; is there any guide you know of that lets me run completely locally? I use TabbyAPI for most of my inference, and I'm happy to run anything else for training.

[–] Danitos@reddthat.com 1 points 19 hours ago* (last edited 19 hours ago) (1 children)

It would work the same way; you would just need to connect to your local model. For example, change the code to compute the embeddings with your local model and store those in Milvus. After that, do the inference by calling your local model.

I haven't used inference with a local API, so I can't help with that, but for embeddings I used this model and it worked quite fast, plus it was a top-2 model on the Hugging Face leaderboard. Leaderboard. Model.

I didn't do any training, just simple embedding + inference.

Ah okay, I think that makes sense. Thanks for your input! I'll give it a whirl

[–] OsrsNeedsF2P@lemmy.ml -5 points 1 day ago

All the models have token limits, especially if you're not paying for API access. You could fine-tune a model on the blog posts, but that's expensive, degrades model quality, and isn't easy to do.

Another thing you could do is have a model index the posts and then retrieve data based on search. The easiest way to do this would be to download all the blog posts into a folder, then install cursor.com and open it on that folder. Cursor is meant for coding, but it will index your folder, and then you can ask the model questions. You should be able to get this far with the free trial, but if you have a huge number of blog posts it still won't work.

[–] ocean@lemmy.selfhostcat.com -4 points 1 day ago (1 children)

Deepseek doesn’t seem to have a limit in my experience; it’s also way smarter. You can also just download it and run it yourself.

[–] OsrsNeedsF2P@lemmy.ml 6 points 1 day ago (1 children)

It will start hallucinating if you overfeed it

[–] Balthazar@lemmy.world 7 points 1 day ago

And don't feed it after midnight!