this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 1 year ago

Hi, does anyone know of any (peer-reviewed) articles testing LLM performance when the prompt assigns the model a role? It's something most of us do in prompts, and it seems logical that adding such context would increase the likelihood of the desired output, but has anyone actually tested it in a citeable article?

I'm thinking of the old, "You are a software engineer with years of experience in coding .html, .json ... " etc.

[–] phree_radical@alien.top 1 points 11 months ago (3 children)
[–] ExtensionFeedback426@alien.top 1 points 11 months ago

this is super helpful!

[–] SomeOddCodeGuy@alien.top 1 points 11 months ago

This bit right here is very important if you want to work with an AI regularly:

Specifying a role when prompting can effectively improve the performance of LLMs by at least 20% compared with the control prompt, where no context is given. Such a result suggests that adding a social role in the prompt could benefit LLMs by a large margin.

I remembered seeing an article about this a few months back, which led to my working on an Assistant prompt, and it's been hugely helpful.

I imagine this comes down to how generative AI works under the hood. It ingested tons of books, tutorials, posts, etc., from people who identified as certain things. Telling the model to also identify as that thing could surface a lot of information it wouldn't otherwise draw on.

I always recommend that folks set up roles for their AI when working with it, because the results I've personally seen have been miles better when you do.

[–] MFHau@alien.top 1 points 11 months ago

Cheers, that's exactly what I was looking for!