this post was submitted on 04 Dec 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


So I have collected a very high-quality, large medical QA dataset that I want to use to build a medical knowledge retrieval app. I have heard that LLMs perform much better when they are fine-tuned on the same data that RAG is performed over. Is that true? And is it worth the hassle of fine-tuning, or am I good with pure RAG?
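For context, the retrieval step of a pure-RAG setup can be sketched in a few lines. The sketch below uses a bag-of-words cosine similarity purely as a stand-in for a real embedding model; in practice you would embed the QA pairs with a sentence encoder and store them in a vector index, but the flow (embed, retrieve top-k, prepend to the prompt) is the same. The corpus strings are hypothetical examples, not from the dataset discussed here.

```python
import math
from collections import Counter

def vectorize(text):
    """Term-frequency vector (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus entries most similar to the query."""
    qv = vectorize(query)
    return sorted(corpus, key=lambda doc: cosine(qv, vectorize(doc)),
                  reverse=True)[:k]

# Hypothetical medical QA snippets standing in for the dataset.
corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Aspirin inhibits platelet aggregation.",
    "Insulin therapy is used when oral agents fail in type 2 diabetes.",
]

context = retrieve("treatment for type 2 diabetes", corpus, k=2)
# The retrieved passages are then prepended to the LLM prompt:
prompt = ("Context:\n" + "\n".join(context) +
          "\n\nQuestion: treatment for type 2 diabetes")
```

With no fine-tuning at all, the base model answers only from the retrieved context, which is often sufficient when the dataset itself contains the answers.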

top 1 comment
[–] fediverser@alien.top 1 points 11 months ago

This post is an automated archive of a submission made on /r/LocalLLaMA, powered by Fediverser software running on alien.top. Responses to this submission will not be seen by the original author until they claim ownership of their alien.top account. Please consider reaching out to let them know about this post and help them migrate to Lemmy.

Lemmy users: you are still very much encouraged to participate in the discussion. There are still many other subscribers on !localllama@poweruser.forum that can benefit from your contribution and join in the conversation.

Reddit users: you can also join the fediverse right away by visiting https://portal.alien.top. If you are looking for a Reddit alternative made for and by an independent community, check out Fediverser.