this post was submitted on 19 Nov 2023

Machine Learning


Hello,

I am currently working on my thesis, which focuses on debunking fake news through humor. My objective is to fine-tune a transformer-based model for this task. I have a question:

If I first fine-tune the model to generate humor (using a prompt like "tell me a joke" with a joke as the expected response), and then fine-tune it again on fact-checking (using a prompt like "explain why this news is fake" with the expected explanation as the response), will the final model be able to respond effectively to a combined prompt like "explain why this is fake in a funny manner"?

Or should I fine-tune the model specifically for the prompt "explain why this is fake in a funny way" and provide it with the expected response in a similar manner?
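The two options above can be made concrete as training data. A minimal sketch of what a mixed instruction-tuning dataset covering both skills plus the combined target prompt might look like (the field names and placeholder texts are illustrative assumptions, not any specific library's format):

```python
# Hypothetical instruction-tuning examples; "prompt"/"response" keys and the
# "<article>" placeholder are illustrative, not a real API or real data.
dataset = [
    # humor-only example (option 1, first stage)
    {"prompt": "Tell me a joke about the weather.",
     "response": "A placeholder joke goes here."},
    # fact-checking-only example (option 1, second stage)
    {"prompt": "Explain why this news is fake: <article>",
     "response": "A placeholder factual debunking goes here."},
    # combined-task example (option 2)
    {"prompt": "Explain why this news is fake in a funny way: <article>",
     "response": "A placeholder humorous debunking goes here."},
]

# Option 1 trains on the first two kinds of examples in separate stages;
# option 2 trains directly on the third kind.
print(len(dataset))  # → 3
```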

Has anyone come across a problem like this? If so, what do you think is the best approach?

Thank you for the help!

[–] RegisteredJustToSay@alien.top 1 points 1 year ago

There are some good recent papers on how to tackle this. My favourite paper on the topic is probably this one: Robust fine-tuning of zero-shot models (https://arxiv.org/pdf/2109.01903)

But tl;dr: taking a weighted average of the fine-tuned weights and the original weights, with a manually chosen mixing coefficient, tends to greatly mitigate this problem. I was surprised the paper didn't get more attention when it came out, but oh well.
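The weight-averaging idea from that paper (WiSE-FT) is easy to sketch: interpolate each parameter between the original and fine-tuned checkpoints with a single coefficient `alpha`. A minimal NumPy sketch, assuming both checkpoints are dicts of arrays with matching keys and shapes (the function name and data layout are illustrative, not the paper's released code):

```python
import numpy as np

def wise_ft(theta_orig, theta_finetuned, alpha=0.5):
    """Elementwise interpolation of two checkpoints.

    alpha = 0.0 returns the original weights, alpha = 1.0 the fine-tuned
    ones; intermediate values trade off new-task performance against
    retaining the original model's behavior (chosen manually, e.g. on a
    validation set).
    """
    return {
        name: (1.0 - alpha) * theta_orig[name] + alpha * theta_finetuned[name]
        for name in theta_orig
    }

# Toy two-parameter "model" to show the arithmetic.
orig = {"w": np.array([1.0, 0.0])}
tuned = {"w": np.array([0.0, 2.0])}
merged = wise_ft(orig, tuned, alpha=0.5)
print(merged["w"])  # → [0.5 1. ]
```

With a framework like PyTorch the same loop would run over the two models' state dicts; the interesting part is only the one-line convex combination per tensor.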