pryxon_36

Hello,

I am currently working on my thesis, which focuses on debunking fake news through humor. My objective is to fine-tune a transformer-based model for this purpose. I have a question:

Suppose I first fine-tune the model to generate humor (using a prompt like "tell me a joke" with a joke as the expected response), and then fine-tune it a second time on debunking (using a prompt like "explain why this news is fake" with the expected explanation as the response). With this sequential approach, will the final model be able to respond effectively to a prompt like "explain why this is fake in a funny way"?

Or should I instead fine-tune the model directly on the combined prompt "explain why this is fake in a funny way", providing the expected humorous explanation as the response?
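For context on a third option some people use: sequential fine-tuning can suffer from catastrophic forgetting (the second run degrades what the first run taught), so a common alternative is to mix both tasks, plus at least a few examples of the combined task, into a single instruction-tuning dataset for one training run. Here is a minimal sketch of that mixing step; all the example prompts and responses below are hypothetical placeholders I made up, and `build_mixed_dataset` is just an illustrative helper, not part of any library:

```python
import random

def build_mixed_dataset(joke_examples, debunk_examples, combined_examples, seed=0):
    """Interleave all three tasks so a single fine-tuning run sees every
    prompt format. Training on the mixture (instead of one task after the
    other) is a standard way to reduce catastrophic forgetting."""
    data = joke_examples + debunk_examples + combined_examples
    rng = random.Random(seed)  # fixed seed so the shuffle is reproducible
    rng.shuffle(data)
    return data

# Hypothetical examples in a simple prompt/response instruction format.
jokes = [
    {"prompt": "Tell me a joke.",
     "response": "Why did the model overfit? It memorized the punchline."},
]
debunks = [
    {"prompt": "Explain why this news is fake: <article text>",
     "response": "The claim cites no source and contradicts official records."},
]
combined = [
    {"prompt": "Explain why this is fake in a funny way: <article text>",
     "response": "This 'study' cites fewer sources than a fortune cookie."},
]

mixed = build_mixed_dataset(jokes, debunks, combined)
print(len(mixed))  # 3 examples, shuffled across all three tasks
```

The resulting list can then be fed into whatever fine-tuning pipeline you are using (e.g. converted to a Hugging Face `Dataset`). Including even a modest number of combined-task examples in the mixture usually matters more than the two single-task sets alone.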

Has anyone come across a problem like this? If so, what do you think is the best approach?

Thank you for the help!