this post was submitted on 31 Oct 2023
1 points (100.0% liked)

Machine Learning


i have some dashcam footage from my car and want to see how a model could embed the frames in an unsupervised (or self-supervised) way so i don't have to label everything.

like if scenarios that are semantically similar but different in pixel space (pulling out of the driveway during the day versus pulling in at night) could be clustered close-ish together in latent space, then i could label a handful of images and have the model infer the rest using something like k-nearest neighbors.
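the label-propagation step i have in mind would look roughly like this (sketch only: the random `embeddings` array is a stand-in for whatever encoder i end up using, and the 3 scenario classes are made up):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# stand-in for real frame embeddings: one 128-d vector per dashcam frame
embeddings = rng.normal(size=(500, 128))

# pretend i hand-labeled 20 frames into 3 scenario classes
labeled_idx = rng.choice(500, size=20, replace=False)
labels = rng.integers(0, 3, size=20)

# fit k-NN on the few labeled frames, then propagate to everything
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(embeddings[labeled_idx], labels)
pred = knn.predict(embeddings)  # a label for every frame
```

with a good encoder the nearest neighbors in embedding space should mostly share a scenario, so a small labeled set goes a long way.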

i'm starting off with frame-level embeddings before trying to tackle videos as sequences of images (i'll probably lose interest by that point, so i want to get images dealt with first). i looked into VAEs and tried training one from scratch on my data, but i don't have enough compute for that.

does anyone here have any ideas about this? any pretrained, off-the-shelf models i could use? any leads for a literature survey?

top 2 comments
[–] CatalyzeX_code_bot@alien.top 1 points 10 months ago

Found 2 relevant code implementations for "An Introduction to Variational Autoencoders".

Ask the author(s) a question about the paper or code.

If you have code to share with the community, please add it here 😊🙏

--

To opt out from receiving code links, DM me.

[–] seiqooq@alien.top 1 points 10 months ago

I've personally used SimCLR with some success, but unless you doctor the augmentation/embedding scheme, similarity will primarily be a function of pixel likeness.
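For reference, the contrastive (NT-Xent) objective SimCLR trains with looks roughly like this in NumPy (a sketch, not a training loop: real SimCLR adds heavy augmentation and a projection head, and the augmentation choice is exactly what decides whether "similar" means semantics or pixels):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for two batches of embeddings, where z1[i] and z2[i]
    are two augmented views of the same image."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # the positive for index i is its other view at i+n (or i-n)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = rng.normal(size=(8, 16))
loss_mismatched = nt_xent(z1, z2)  # unrelated "views": high loss
loss_aligned = nt_xent(z1, z1)     # perfectly aligned views: low loss
```

The loss only pulls together what the augmentations declare to be "the same image", so for day-vs-night invariance you'd want aggressive color/brightness jitter in the augmentation pipeline.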