What is the best approach to achieve multi modality using a instruct fine tuned model?

[–] sshh12@alien.top 1 points 11 months ago

The best approach is still an open research question, but my understanding is that the current open-source SOTA is ShareGPT4V, which combines a high-quality dataset generated with GPT-4V with (I believe) a LLaVA-like architecture. The idea is to encode the other modality as embeddings that live in the LLM's token-embedding space, so the model can treat them like text tokens.
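To make that concrete, here is a minimal PyTorch sketch of the LLaVA-style idea. All names and dimensions are illustrative assumptions on my part, not LLaVA's actual code: a small projector maps vision-encoder features into the LLM's embedding dimension, and the result is spliced into the text embedding sequence.

```python
# Sketch only: a LLaVA-style projector (names/dims are assumptions).
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # LLaVA 1.5 replaced the single linear projection with a 2-layer MLP.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_features):
        # vision_features: (batch, num_patches, vision_dim), e.g. CLIP ViT patches
        return self.proj(vision_features)  # (batch, num_patches, llm_dim)

projector = VisionProjector()
vision_features = torch.randn(1, 576, 1024)  # dummy patch features
image_embeds = projector(vision_features)    # now behave like 576 "text" tokens
text_embeds = torch.randn(1, 32, 4096)       # embedded prompt tokens (dummy)
inputs_embeds = torch.cat([image_embeds, text_embeds], dim=1)
# inputs_embeds would then be fed to the instruct-tuned LLM in place of
# token embeddings (e.g. via an inputs_embeds-style argument).
```

Training then typically freezes the LLM (at least initially) and only fits the projector, which is what makes this approach cheap relative to full multimodal pretraining.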

If you're interested, I have a library for more easily training these on custom modalities: https://github.com/sshh12/multi_token (it uses basically the same idea as the LLaVA 1.5 paper)

[–] DeliciousFriedPanda@alien.top 1 points 11 months ago

The field is broad, with no easy answer and many nuances; you could fill countless postdoc and PhD projects with work here.

Judging by the way you phrased your question, very generically and without any real detail, I'm going to take a wild guess and say that you're either a beginner in ML or haven't read/studied anything on the subject.

I'd encourage you to start by reading recent and not-so-recent papers dealing with inherently multimodal tasks, like scene text recognition, VQA, and the like. The big problem in the field is that models will, generally speaking, overfit on whichever of the two modalities (vision or language) is most information-dense for your task. In current SOTA methods, the best way to mitigate this seems to be fusing the two modalities through a gradual mechanism, for example the tanh-gated cross-attention of Flamingo, sketched below.
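For a concrete picture of the gating trick, here is a simplified PyTorch sketch (the structure and dimensions are my assumptions, not Flamingo's actual code). The key point is that the gates are initialized to zero, so the block starts as an identity function and visual information is blended into the frozen LM gradually as the gates open during training:

```python
# Sketch only: Flamingo-style tanh-gated cross-attention (simplified).
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Gates start at 0, so tanh(gate) = 0 and the block is initially
        # a no-op; the model learns how much visual signal to mix in.
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ffn_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_hidden, visual_tokens):
        # Text states attend to visual tokens (cross-attention).
        attn_out, _ = self.cross_attn(text_hidden, visual_tokens, visual_tokens)
        x = text_hidden + torch.tanh(self.attn_gate) * attn_out
        x = x + torch.tanh(self.ffn_gate) * self.ffn(x)
        return x

block = GatedCrossAttentionBlock()
text_hidden = torch.randn(2, 16, 512)    # dummy LM hidden states
visual_tokens = torch.randn(2, 64, 512)  # dummy perceiver/vision outputs
out = block(text_hidden, visual_tokens)  # == text_hidden at init (gates are 0)
```

This zero-init gating is exactly what makes the fusion "gradual": at the start of training the pretrained LM is untouched, which avoids the early collapse onto one modality described above.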

Happy reading!