this post was submitted on 19 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 1 year ago
I did some ratings on Chatbot Arena and I noticed one thing: when an AI honestly said "I don't know that" or "I don't understand that", it was always better received by me and felt kind of smarter.

Does some dataset or LoRA train on that? Or is "knowing about not knowing" too hard to achieve?

[–] __SlimeQ__@alien.top 1 points 11 months ago (2 children)

My personal LoRA does this just because it was trained on actual human conversations. It's super unnatural for people to try answering just any off-the-wall question; most people will just go "lmao" or "idk, wtf", and if you methodically strip that from the data (like most instruct datasets do), it makes the bots act weird as hell.
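A minimal sketch of the idea: instead of stripping uncertainty from a conversational dataset, partition it and keep both buckets. The data format, function names, and marker list here are my own assumptions for illustration, not anything from the thread or a specific library.

```python
# Hypothetical sketch: when curating (prompt, reply) pairs for fine-tuning,
# keep "I don't know" style replies instead of filtering them out, so the
# model retains natural "knowing about not knowing" behaviour.
# The marker list and data shape are assumptions for this example.

UNCERTAINTY_MARKERS = ("i don't know", "i dont know", "idk", "not sure",
                       "i don't understand")

def is_uncertain(reply: str) -> bool:
    """Return True if the reply contains an uncertainty marker."""
    text = reply.lower()
    return any(marker in text for marker in UNCERTAINTY_MARKERS)

def split_dataset(pairs):
    """Partition (prompt, reply) pairs into confident and uncertain replies.

    Many instruct-style pipelines would drop the 'uncertain' bucket;
    keeping it is the point of this sketch.
    """
    confident, uncertain = [], []
    for prompt, reply in pairs:
        (uncertain if is_uncertain(reply) else confident).append((prompt, reply))
    return confident, uncertain

if __name__ == "__main__":
    data = [
        ("What is 2+2?", "4."),
        ("Who wins the 2030 election?", "idk, wtf"),
        ("Explain quicksort.", "It partitions around a pivot element..."),
    ]
    confident, uncertain = split_dataset(data)
    print(len(confident), len(uncertain))  # 2 1
```

In a real pipeline you would apply the same predicate with your dataset tooling's filter step rather than a hand-rolled loop, but the curation decision (keep, don't strip) is the same.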

[–] itsmeabdullah@alien.top 1 points 11 months ago

Do you have experience with training LoRAs off private conversational data? If so, can I DM you? I have a huge favour to ask with regards to training, if you don't mind.
