Intuitively, diffusion-based models for code generation make a lot of sense; glad to see people spending time on it. Really curious to see what can come out of it, even if it's an intermediate step used in conjunction with LLMs (i.e. the diffusion model works with pseudocode and an LLM translates the pseudocode into an actual language-specific implementation).
How does it make a lot of sense? Please explain.
I love this approach! Feels like a diffusion model would work perfectly with code! Now I'm praying this model will play nicely with C#!
Instead of using Gaussian noise (in the latent space), I wonder if we could introduce noise by randomly inserting/deleting/replacing/swapping words. Couldn't we train a BERT-style model to predict the original text from the noise-added text? Rough sketch of the corruption step below.
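Something like this, as a minimal sketch of that discrete corruption; the per-token rate and the choice of operations are just illustrative assumptions, not from any particular paper:

```python
import random

def corrupt_tokens(tokens, rate=0.15, vocab=None):
    """Randomly insert/delete/replace/swap tokens as discrete 'noise'.

    tokens: list of token strings
    rate: per-token probability of corruption (illustrative value)
    vocab: pool to draw random insertions/replacements from
    """
    vocab = vocab or tokens  # fall back to the input's own tokens
    out, i = [], 0
    while i < len(tokens):
        if random.random() < rate:
            op = random.choice(["insert", "delete", "replace", "swap"])
            if op == "insert":
                out.append(random.choice(vocab))  # inject a random token
                out.append(tokens[i])
            elif op == "delete":
                pass  # drop this token entirely
            elif op == "replace":
                out.append(random.choice(vocab))  # substitute a random token
            elif op == "swap" and i + 1 < len(tokens):
                out.extend([tokens[i + 1], tokens[i]])  # transpose neighbors
                i += 1  # consumed two tokens
            else:
                out.append(tokens[i])
        else:
            out.append(tokens[i])
        i += 1
    return out

# Training pairs would be (corrupted, original) so the model learns to denoise:
original = "def add ( a , b ) : return a + b".split()
noisy = corrupt_tokens(original)
```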
This has been explored a little for NLP and even audio tasks (using acoustic tokens)!
https://aclanthology.org/2022.findings-acl.25/ and https://arxiv.org/abs/2307.04686 both come to mind
I feel like diffusion and iterative mask/predict are pretty conceptually similar. My hunch is that diffusion might have a higher ceiling since it can precisely traverse a continuous space, but operating on discrete tokens could probably converge to something semantically valid in fewer iterations.
Also, BERT is trained with MLM, which technically is predicting the original text from a "noisy" version, but the noise is only introduced via masking, and it's limited to a single forward pass, not iterative!
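For contrast, here's a rough sketch of the iterative mask/predict decoding loop (in the spirit of Mask-Predict); `model` and `tokenizer` are assumed placeholders for any masked-LM that returns per-position logits, not a specific library API, and the re-masking schedule is made up:

```python
import torch

def mask_predict(model, tokenizer, length, iterations=10):
    """Iterative mask/predict decoding sketch (assumed placeholder model)."""
    mask_id = tokenizer.mask_token_id
    tokens = torch.full((1, length), mask_id, dtype=torch.long)  # start fully masked
    for t in range(iterations):
        logits = model(tokens).logits            # (1, length, vocab_size)
        probs = logits.softmax(dim=-1)
        conf, pred = probs.max(dim=-1)           # confidence + argmax token per slot
        tokens = pred                            # commit current predictions
        # Re-mask the least-confident positions, keeping more each iteration
        n_mask = int(length * (1 - (t + 1) / iterations))
        if n_mask == 0:
            break
        low_conf = conf[0].argsort()[:n_mask]    # indices of least-confident slots
        tokens[0, low_conf] = mask_id
    return tokens
```

The key difference from plain BERT inference is that loop: each pass keeps only its most confident predictions and re-masks the rest, so uncertain tokens get refined with more context each round, which is the discrete analogue of a diffusion model's denoising steps.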