this post was submitted on 24 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 1 year ago

I tried applying a lot of prompting techniques to 7B and 13B models, and no matter how hard I tried, there was barely any improvement.

[–] Inevitable-Highway85@alien.top 1 points 11 months ago (1 children)

Models coming from Mistral, and small models fine-tuned on QA or instructions, need prompts in question format. For example, the prompt "Extract the name of the actor mentioned in the article below" may not give the expected results. But if you change it to "What's the name of the actor mentioned in the article below?" you'll get better results. So yes, prompt engineering is important with small models.
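A minimal sketch of the rephrasing idea above: mechanically turning an imperative extraction prompt into question form before sending it to the model. The `rewrite_as_question` helper and its phrase table are hypothetical illustrations, not part of any library.

```python
# Hypothetical helper: rewrite instruction-style prompts as questions,
# which (per the comment above) tends to work better on small
# instruction/QA fine-tuned models.

def rewrite_as_question(instruction: str) -> str:
    """Turn an imperative extraction prompt into a question-form prompt."""
    # Assumed phrase mappings for illustration only.
    replacements = {
        "Extract the name of": "What is the name of",
        "List the": "What are the",
    }
    for old, new in replacements.items():
        if instruction.startswith(old):
            return instruction.replace(old, new, 1).rstrip(".") + "?"
    return instruction  # fall back to the original phrasing


print(rewrite_as_question(
    "Extract the name of the actor mentioned in the article below"))
# -> What is the name of the actor mentioned in the article below?
```

In practice you would send the rewritten string to the model instead of the original instruction; the mapping table would need to cover whatever imperative phrasings your prompts actually use.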

[–] Loyal247@alien.top 1 points 11 months ago (1 children)

I wouldn't really consider rephrasing a question prompt engineering, but yes, the way the model was trained dictates the way you ask it questions, and if you don't follow the proper format you're less likely to get the response you want.

[–] Inevitable-Highway85@alien.top 1 points 11 months ago

Well, I believe that at its core it's the process of guiding a generative AI solution to produce the desired output. So iterating on rephrasing, prompt versioning, and of course using the proper format are all essential. I'm testing some new software architectures using three instances of Mistral with different tasks, feeding the output of one as input to the next, and boy, Mistral is amazing.
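The three-instance architecture described above can be sketched as a simple chain where each stage's output becomes the next stage's input. The stage functions here are stand-ins; in a real setup each would be a call to a separate Mistral instance with its own task prompt.

```python
# Sketch of a chained pipeline: each stage consumes the previous
# stage's output. Stages are plain callables so real model calls
# (e.g. to three Mistral servers) can be dropped in later.

from typing import Callable, List


def run_pipeline(stages: List[Callable[[str], str]], prompt: str) -> str:
    """Feed the output of each stage into the next one."""
    text = prompt
    for stage in stages:
        text = stage(text)
    return text


# Stand-in stages for illustration; replace with model calls.
def extract(t: str) -> str:
    return f"facts({t})"


def summarize(t: str) -> str:
    return f"summary({t})"


def format_md(t: str) -> str:
    return f"markdown({t})"


print(run_pipeline([extract, summarize, format_md], "article text"))
# -> markdown(summary(facts(article text)))
```

Keeping the stages as plain string-to-string functions makes the chain easy to reorder or test, independent of which model backs each step.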