Posted to Machine Learning on 13 Nov 2023

I was reading through the original DALL-E paper (https://arxiv.org/pdf/2102.12092.pdf) and found appendix A.3 pretty interesting. The motivation behind the logit-Laplace loss totally makes sense, but if you plot the negative log of the PDF of the logit-Laplace distribution, it isn't strictly positive as a function of x. Does that mean the authors are allowing for a potentially negative loss with this reformulation?
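
Here's a minimal sketch (my own, not the authors' code) to check this. Assuming the appendix A.3 density p(x | mu, b) = 1 / (2 b x (1 - x)) * exp(-|logit(x) - mu| / b) on (0, 1), its negative log picks up a log x + log(1 - x) term, which is what lets the loss dip below zero; the mu and b values in the loop are just illustrative.

```python
import numpy as np

def logit(x):
    return np.log(x) - np.log(1.0 - x)

def logit_laplace_nll(x, mu, b):
    """Negative log of the logit-Laplace PDF as I read appendix A.3:
    p(x | mu, b) = 1 / (2 b x (1 - x)) * exp(-|logit(x) - mu| / b), x in (0, 1).
    """
    return np.log(2.0 * b) + np.log(x) + np.log(1.0 - x) + np.abs(logit(x) - mu) / b

# Illustrative values, not taken from the paper: at x = 0.5 with mu = logit(0.5) = 0,
# the NLL reduces to log(b / 2), which is negative whenever b < 2.
for b in [0.1, 0.5, 1.0, 2.0]:
    print(b, logit_laplace_nll(0.5, 0.0, b))
```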

top 3 comments

Found 2 relevant code implementations for "Zero-Shot Text-to-Image Generation".

Ask the author(s) a question about the paper or code.

If you have code to share with the community, please add it here 😊🙏

--

To opt out from receiving code links, DM me.

[–] mrfox321@alien.top 1 points 1 year ago (1 children)

Objective functions can be negative. What is the issue?

[–] clywac2@alien.top 1 points 1 year ago

it's just that every other assumed prior (Gaussian, Laplacian) for the reconstruction error gives a strictly nonnegative error when the negative log is taken, so I expected this one to as well
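
For what it's worth, here's a rough sketch of my own (not from the paper) of why the usual unit-scale choices stay nonnegative while the logit-Laplace NLL doesn't: the unit-variance Gaussian and unit-scale Laplace NLLs are bounded below by positive constants, whereas the logit-Laplace density carries a 1/(x(1 - x)) Jacobian factor that can push it above 1, so its negative log can go negative. The scale b = 0.1 below is just an illustrative choice.

```python
import numpy as np

# Lower bounds of the usual unit-scale reconstruction NLLs (my own quick check):
#   unit-variance Gaussian: 0.5*log(2*pi) + 0.5*(x - mu)**2 >= 0.5*log(2*pi) ~ 0.92
#   unit-scale Laplace:     log(2) + |x - mu|               >= log(2)        ~ 0.69
print("Gaussian lower bound:", 0.5 * np.log(2.0 * np.pi))
print("Laplace lower bound:", np.log(2.0))

# Logit-Laplace NLL at x = 0.5, mu = 0 with an illustrative scale b = 0.1:
# the 1/(x*(1-x)) Jacobian factor lets the density exceed 1, so the NLL goes negative.
x, mu, b = 0.5, 0.0, 0.1
nll = np.log(2.0 * b) + np.log(x) + np.log(1.0 - x) + abs(np.log(x / (1.0 - x)) - mu) / b
print("logit-Laplace NLL:", nll)  # log(b / 2) ~ -3.0 here
```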