Paper: https://arxiv.org/abs/2311.03079

GitHub: https://github.com/THUDM/CogVLM

Abstract:

We introduce CogVLM, a powerful open-source visual language foundation model. Unlike the popular shallow alignment method, which maps image features into the input space of the language model, CogVLM bridges the gap between the frozen pretrained language model and the image encoder with a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables deep fusion of vision and language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA, and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., surpassing or matching PaLI-X 55B. Code and checkpoints are available at the GitHub link above.
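
For anyone curious how the "visual expert" described above might look in practice, here is a minimal PyTorch sketch of the attention half of the idea: image-token positions are routed through trainable expert projections while text-token positions keep using the frozen language-model weights. This is only an illustrative reading of the abstract, not the THUDM/CogVLM implementation; all names here (VisualExpertAttention, vision_mask, etc.) are hypothetical.

```python
# Sketch of the "visual expert" idea from the abstract: image tokens get
# trainable QKV/output projections, text tokens reuse the frozen LM weights.
# Illustrative only, not the actual CogVLM code.

import torch
import torch.nn as nn


class VisualExpertAttention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads

        # Stand-ins for the frozen pretrained language-model projections.
        self.text_qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.text_out = nn.Linear(hidden_size, hidden_size)
        for p in list(self.text_qkv.parameters()) + list(self.text_out.parameters()):
            p.requires_grad = False

        # Trainable visual-expert counterparts with the same shapes.
        self.vision_qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.vision_out = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor, vision_mask: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden)
        # vision_mask:   (batch, seq_len) bool, True where the token is an image token
        b, s, h = hidden_states.shape
        mask = vision_mask.unsqueeze(-1)  # (b, s, 1), broadcast over features

        # Per-token projection choice (both branches are computed and then
        # selected per position, kept simple for clarity).
        qkv = torch.where(mask,
                          self.vision_qkv(hidden_states),
                          self.text_qkv(hidden_states))
        q, k, v = qkv.chunk(3, dim=-1)

        def split(x):  # (b, s, h) -> (b, heads, s, head_dim)
            return x.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)

        # Standard attention over the mixed image/text sequence.
        attn = nn.functional.scaled_dot_product_attention(split(q), split(k), split(v))
        attn = attn.transpose(1, 2).reshape(b, s, h)

        # Per-token output projection, again choosing expert vs. frozen weights.
        return torch.where(mask, self.vision_out(attn), self.text_out(attn))
```

The same per-token routing would apply to the FFN weights; see the GitHub repository above for the authors' actual implementation.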

https://preview.redd.it/yw0lr38gegzb1.png?width=1096&format=png&auto=webp&s=23361a84319c0fcbf1e980ca74dea26cb8be325b
