[–] um-xpto@alien.top 1 points 9 months ago

Thanks. Guidance seems like a good fit; I'll start looking for more info.

[–] um-xpto@alien.top 1 points 9 months ago (3 children)

Nice! Thank you for your work.

Regarding the video.

Q1) minute 14:14, Finetuning into an Assistant: when you have multiple tasks/datasets with diverse outputs, how is training performed? Are all datasets combined into a single training run? Is fine-tuning done on top of a previous fine-tune? Or is the question parsed and routed to a specific model?
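To make the first option concrete, here is a minimal sketch of what I mean by "all datasets combined in a single training" — a single mixed dataset where examples from different tasks are interleaved before one fine-tuning run. The dataset names and fields are made up for illustration:

```python
import random

# Hypothetical mini-datasets: two tasks with very different outputs.
summarize_ds = [{"task": "summarize", "prompt": f"Summarize doc {i}", "response": f"Summary {i}"}
                for i in range(3)]
qa_ds = [{"task": "qa", "prompt": f"Question {i}?", "response": f"Answer {i}"}
         for i in range(3)]

def build_mixture(datasets, seed=0):
    """Concatenate several instruction datasets and shuffle them, so a
    single fine-tuning run sees all tasks interleaved (one training mix)."""
    mix = [ex for ds in datasets for ex in ds]
    random.Random(seed).shuffle(mix)
    return mix

mix = build_mixture([summarize_ds, qa_ds])
print(len(mix))  # 6 examples total, tasks interleaved
```

The alternative options in the question would instead train sequentially (load the previous fine-tuned checkpoint and continue on a new dataset) or keep separate models behind a router.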

Q2) minute 27:43, Tool Use (Browser, Calculator, etc.): does anyone have links to similar implementations for llama? How is it done, and what kind of tech/frameworks are used?
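For reference, my rough understanding of the pattern (not a real llama integration — the "model" and the text protocol below are toy stand-ins): the model emits a structured tool call, the runtime executes the tool, and the result is appended back into the prompt for the next generation step.

```python
import re

def fake_model(prompt):
    """Toy stand-in for an LLM: first asks for a tool, then answers
    once a TOOL_RESULT line is present in the prompt."""
    if "TOOL_RESULT" in prompt:
        return "The answer is 8."
    return "CALL calculator: 3 + 5"

def calculator(expr):
    # Restrict to arithmetic characters before eval (toy safety check).
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("unsupported expression")
    return str(eval(expr))

TOOLS = {"calculator": calculator}

def run(question, max_steps=3):
    prompt = question
    for _ in range(max_steps):
        out = fake_model(prompt)
        m = re.match(r"CALL (\w+): (.+)", out)
        if not m:
            return out  # no tool call -> treat as the final answer
        tool, arg = m.group(1), m.group(2)
        prompt += f"\nTOOL_RESULT {tool}: {TOOLS[tool](arg)}"
    return out

print(run("What is 3 + 5?"))  # -> The answer is 8.
```

Real implementations replace `fake_model` with an actual model call and use a more robust call format (e.g. JSON function calls or grammar-constrained decoding), but the execute-and-feed-back loop is the same idea.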