- Yes, because relevant-text search is genuinely difficult. Different chunking strategies can lead to different retrieval accuracy, and there are many papers proposing new methods to improve it. The retrieved results also need to be in a form the model can use well to get the best performance; a quick sketch below shows how two strategies split the same text differently.
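  For illustration, here is a minimal sketch in plain Python (hypothetical helpers `chunk_fixed` and `chunk_by_sentence`, not any particular library's API) of two common strategies; the chunk boundaries they produce decide what the retriever can ever return:

  ```python
  import re

  def chunk_fixed(text: str, size: int = 80, overlap: int = 20) -> list[str]:
      """Fixed-size character windows with overlap; may cut sentences in half."""
      step = size - overlap
      return [text[start:start + size] for start in range(0, len(text), step)]

  def chunk_by_sentence(text: str, max_chars: int = 120) -> list[str]:
      """Greedy sentence packing; keeps sentences intact, but chunk sizes vary."""
      sentences = re.split(r"(?<=[.!?])\s+", text.strip())
      chunks, current = [], ""
      for s in sentences:
          if current and len(current) + len(s) + 1 > max_chars:
              chunks.append(current)
              current = s
          else:
              current = f"{current} {s}".strip()
      if current:
          chunks.append(current)
      return chunks

  doc = ("RAG retrieves text chunks relevant to a query. "
         "Chunk boundaries decide what the retriever can ever return. "
         "A chunk that splits a sentence in half is hard to match and hard to read.")

  print(chunk_fixed(doc))        # windows may split sentences mid-word
  print(chunk_by_sentence(doc))  # chunks respect sentence boundaries
  ```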
InevitablePressure63
First, to clarify: in RAG the input is the original natural-language documents, not vectors. Embeddings are just one way to implement retrieval; you can use any method you like, including string-based search, or a mix of several approaches. Either way, it is only a means of enriching your prompt with relevant context: it changes the prompt, not the model architecture.
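As a rough sketch of that point (pure Python, with hypothetical names like `score`, `retrieve`, and `build_prompt`, and plain word overlap standing in for embeddings), notice that "RAG" here is just retrieval plus prompt construction; switching to embedding similarity or a hybrid would only change the `score` function, while the model itself stays untouched:

```python
def score(query: str, doc: str) -> int:
    """Naive string-based relevance: count shared lowercase words.
    Swap this for embedding similarity, BM25, or a mix of methods."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents under whatever scoring method you chose."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """RAG only changes the prompt: stuff the retrieved text in as context."""
    context = "\n\n".join(retrieve(query, docs))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

corpus = [
    "The warranty covers manufacturing defects for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "Our office is closed on public holidays.",
]

# The resulting string is what you send to the (unchanged) model.
print(build_prompt("How long is the warranty?", corpus))
```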