this post was submitted on 28 Nov 2023
LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

Hi, does anyone know of any (peer-reviewed) articles testing performance when giving LLMs a role? It's something most of us do in prompts, and it seems plausible that adding such a parameter would increase the likelihood of the desired output, but has anyone actually tested it in a citable article?

I'm thinking of the old, "You are a software engineer with years of experience in coding .html, .json ... " etc.

[–] phree_radical@alien.top 1 points 9 months ago
[–] ExtensionFeedback426@alien.top 1 points 9 months ago

this is super helpful!

[–] MFHau@alien.top 1 points 9 months ago

Cheers, that's exactly what I was looking for!

[–] SomeOddCodeGuy@alien.top 1 points 9 months ago

This little bit right here is very important if you want to work regularly with an AI:

"Specifying a role when prompting can effectively improve the performance of LLMs by at least 20% compared with the control prompt, where no context is given. Such a result suggests that adding a social role in the prompt could benefit LLMs by a large margin."

I remembered seeing an article about this a few months back, which led to my working on an Assistant prompt, and it's been hugely helpful.

I imagine this comes down to how generative AI works under the hood. It ingested tons of books, tutorials, posts, etc. from people who identified as certain things. Telling it to also identify as that thing could open up a lot of information that it wouldn't otherwise be drawing on.

I always recommend that folks set up roles for their AI when working with it, because the results I've personally seen have been miles better when you do.
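If you want to see the effect on your own setup, here's a minimal sketch of the kind of role-vs-control comparison the quoted paper describes, assuming a local OpenAI-compatible chat endpoint (e.g. a llama.cpp server or text-generation-webui); the URL, model name, and ask() helper are placeholders for illustration, not anything from the article.

```python
# Minimal sketch: compare a role ("persona") prompt against a control prompt
# with no role. Assumes an OpenAI-compatible chat endpoint on localhost;
# adjust API_URL and MODEL to match your own server.
from typing import Optional

import requests

API_URL = "http://localhost:8080/v1/chat/completions"  # placeholder endpoint
MODEL = "local-model"                                   # placeholder model name

ROLE_PROMPT = ("You are a software engineer with years of experience "
               "writing HTML and JSON.")
QUESTION = "Why might this JSON fail to parse: {'key': 'value'}?"


def ask(system_prompt: Optional[str], question: str) -> str:
    """Send one chat request, optionally prepending a system/role message."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": question})

    resp = requests.post(API_URL, json={"model": MODEL, "messages": messages})
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print("--- control (no role) ---")
    print(ask(None, QUESTION))
    print("--- with role ---")
    print(ask(ROLE_PROMPT, QUESTION))
```

Run it over the same set of questions with and without the role and you can eyeball (or score) the two sets of outputs side by side.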

[–] Immortal_Tec@alien.top 1 points 9 months ago