Well, I believe that at its core it's the process where you guide generative artificial intelligence (generative AI) solutions toward the desired output. So iterating on phrasing, prompt versioning, and of course using the proper format are essential. I'm testing some new software architectures using 3 instances of Mistral with different tasks, using the output from one as input for the next, and boy, Mistral is amazing.
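A chained setup like the one described above (three instances, each feeding the next) can be sketched roughly like this. Note that `call_mistral` is a placeholder for whatever client you actually use, not a real API:

```python
# Minimal sketch of chaining three model instances: each stage's output
# becomes the next stage's input. call_mistral is a stand-in for a real
# client call (local server, API, etc.) and is stubbed here.
def call_mistral(prompt: str, task: str) -> str:
    # Placeholder: a real implementation would send `prompt` to a
    # Mistral instance configured for `task` and return its completion.
    return f"[{task}] {prompt}"

def pipeline(text: str) -> str:
    summary = call_mistral(text, task="summarize")        # instance 1
    facts = call_mistral(summary, task="extract_facts")   # instance 2
    answer = call_mistral(facts, task="write_report")     # instance 3
    return answer

result = pipeline("raw article text")
```

The point is just the wiring: each instance gets a narrow task, and the intermediate text is the only interface between them.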
Inevitable-Highway85
Models coming from Mistral, and small models fine-tuned on QA or instructions, need specific instructions in question format. For example:

Prompt 1: "Extract the name of the actor mentioned in the article below."

This prompt may not have the expected results. Now if you change it to:

Prompt 2: "What's the name of the actor mentioned in the article below?"

you'll get better results. So yes, prompt engineering is important in small models.
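The two phrasings above can be kept side by side as templates, so you can A/B them against the same article. The article text and function names here are just illustrative:

```python
# Two phrasings of the same extraction task, as a templating sketch.
# With small instruction/QA-tuned models, the question form often
# works better, as described above.
ARTICLE = "Last night, Tom Hanks attended the premiere of his new film."

def instruction_prompt(article: str) -> str:
    # Imperative phrasing: may underperform on small QA-tuned models.
    return f"Extract the name of the actor mentioned in the article below.\n\n{article}"

def question_prompt(article: str) -> str:
    # Question phrasing: matches the QA format these models were tuned on.
    return f"What's the name of the actor mentioned in the article below?\n\n{article}"

print(question_prompt(ARTICLE))
```

Running both variants against the same inputs makes it easy to see which phrasing your particular model responds to.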
Mistral Instruct for overall use.
Temperature, top_p, and penalty. Take notes as you tune them. Take a raw model, get the dataset structure, and fine-tune it. I have a 7B model that can communicate almost exactly like me using the recipe above.
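The "take notes as you tune" step can be made systematic with a small sweep over sampling parameters. This is only a sketch: `generate` is a stub standing in for whatever backend you run (llama.cpp, Ollama, vLLM, etc.), and the parameter values are arbitrary examples:

```python
# Sketch of sweeping sampling parameters (temperature, top_p, repeat
# penalty) and keeping notes on each run for later comparison.
def generate(prompt, temperature, top_p, repeat_penalty):
    # Placeholder for a real generation call to your backend.
    return f"(t={temperature}, p={top_p}, rp={repeat_penalty})"

def sweep(prompt, settings):
    notes = []
    for params in settings:
        output = generate(prompt, **params)
        # Record params alongside output so runs stay comparable.
        notes.append({"params": params, "output": output})
    return notes

runs = sweep("Describe yourself.", [
    {"temperature": 0.2, "top_p": 0.9, "repeat_penalty": 1.1},
    {"temperature": 0.8, "top_p": 0.95, "repeat_penalty": 1.0},
])
```

Dumping `notes` to a file after each session gives you the tuning log the comment recommends, and the same record structure works later as a starting point for building a fine-tuning dataset.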
I recommend the courses from learn.deeplearning.ai; I learned the basics of RAG there.