You'll have to go multi-modal. The best right now is Fuyu, but that's not commercially usable.
I found BLIP: https://replicate.com/salesforce/blip?input=form&output=preview
But that's not exactly what I'm looking for. It does image captioning very well, like in their example: "a woman sitting on the beach with a dog".
But I need a list of objects and "things" like: dog, woman, beach, wave, shirt, etc.
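For what it's worth, BLIP also runs locally through Hugging Face transformers. Here's a minimal captioning sketch, assuming the Salesforce/blip-image-captioning-base checkpoint and a placeholder image path (both my picks, not from the Replicate demo):

```python
# Minimal BLIP captioning sketch via Hugging Face transformers.
# Checkpoint and image path are illustrative assumptions.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("beach.jpg").convert("RGB")  # hypothetical local image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
# e.g. "a woman sitting on the beach with a dog" (a caption, not an object list)
```

As you say, though, this gives you one sentence, not the tag list you're after.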
Interesting... is BLIP commercially usable? I read that it is, but does that hold for the weights in their entirety?
CogVLM is supposed to support this with prompts like "Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?"
However, I couldn't get it to work properly; it would just hallucinate.
If you want to give it a shot, here are the official visual QA prompts.
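When CogVLM does emit boxes, they come back inline in the generated text, so you still have to parse them out yourself. A rough sketch of that post-processing, where the response string is a made-up example of the grounding format and the regex/helper are my own, not part of the CogVLM repo (its grounding coordinates are, as far as I know, normalized to 0-999):

```python
import re

# Hypothetical CogVLM-style grounded response; the [[x0,y0,x1,y1]] format
# follows the grounding prompt quoted above.
response = "A woman [[120,340,480,910]] sits on the beach with a dog [[500,600,780,920]]."

def extract_boxes(text):
    """Return (label, box) pairs parsed from inline [[x0,y0,x1,y1]] tags."""
    boxes = []
    for match in re.finditer(r"(\w+)\s*\[\[(\d+),(\d+),(\d+),(\d+)\]\]", text):
        label = match.group(1)  # naive guess: the word right before the box
        box = tuple(int(match.group(i)) for i in range(2, 6))
        boxes.append((label, box))
    return boxes

print(extract_boxes(response))
# [('woman', (120, 340, 480, 910)), ('dog', (500, 600, 780, 920))]
```

The label extraction is deliberately naive (it just grabs the word preceding each box), but it would give the OP the kind of object list they asked for.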
I'm surprised there aren't more options, since there are LLMs for almost everything!