this post was submitted on 30 Oct 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


https://arxiv.org/abs/2310.17680

Ok, technically a tiny language model for now:

Imagine a developer who can only change their last line of code, how often would they have to start writing a function from scratch before it is correct? Auto-regressive models for code generation from natural language have a similar limitation: they do not easily allow reconsidering earlier tokens generated. We introduce CodeFusion, a pre-trained diffusion code generation model that addresses this limitation by iteratively denoising a complete program conditioned on the encoded natural language. We evaluate CodeFusion on the task of natural language to code generation for Bash, Python, and Microsoft Excel conditional formatting (CF) rules. Experiments show that CodeFusion (75M parameters) performs on par with state-of-the-art auto-regressive systems (350M-175B parameters) in top-1 accuracy and outperforms them in top-3 and top-5 accuracy due to its better balance in diversity versus quality.

And it's only for code, and it seems to be much slower. But it looks extremely interesting as a proof of concept.
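Conceptually, the denoising loop the abstract describes looks something like this toy sketch. Everything here (the `model` stand-in, the weighting schedule) is illustrative, not CodeFusion's actual code:

```python
import random

def denoise(model, dim, steps=10, rng=None):
    """Toy iterative denoising in a continuous embedding space.

    Start from pure noise and repeatedly move toward the model's guess
    at the clean embedding; the real system would condition `model` on
    the encoded natural-language prompt and decode the result to tokens.
    """
    rng = rng or random.Random(0)
    x = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # start from Gaussian noise
    for t in range(steps, 0, -1):
        x_hat = model(x, t)                # model's guess at the clean embedding
        w = 1.0 / t                        # toy schedule: trust the guess more each step
        x = [(1 - w) * xi + w * gi for xi, gi in zip(x, x_hat)]
    return x                               # final embeddings, to be decoded to tokens
```

The key contrast with autoregressive decoding is that every position of `x` gets revisited at every step, so early "tokens" are never frozen.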

I think that instead of running many denoising steps to generate text from pure noise, a dual-model system might be the best of both worlds: take a typical autoregressive model's output, then run a few denoising steps over it to look for errors and inconsistencies. That would avoid the usual methods of increasing output quality, like progressive refinement, which require rewriting the entire text token by token several times...
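That dual-model idea might look something like this hypothetical sketch. `draft_model`, `critic`, and `denoiser` are all illustrative stand-ins, not real APIs:

```python
def draft_then_denoise(draft_model, critic, denoiser, prompt, passes=3):
    """Hypothetical dual-model decode: draft autoregressively, then
    run a few repair passes that only rewrite flagged positions."""
    tokens = draft_model(prompt)               # one ordinary AR decode
    for _ in range(passes):
        suspicious = critic(tokens)            # indices the critic distrusts
        if not suspicious:
            break                              # nothing left to fix
        tokens = denoiser(tokens, suspicious)  # rewrite only those slots
    return tokens
```

Because only the flagged positions are regenerated, this would cost a handful of extra forward passes rather than a full token-by-token rewrite of the text.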

[–] saintshing@alien.top 1 points 1 year ago (1 children)

Instead of using Gaussian noise (in the latent space), I wonder if we can introduce noise by randomly inserting/deleting/replacing/swapping words. Can't we train a BERT model to predict the original text from the noise-added text?
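A minimal sketch of that kind of word-level corruption (a toy function, not from any paper) — the original/corrupted pairs would then serve as training examples for the denoiser:

```python
import random

def corrupt(words, vocab, p=0.15, rng=None):
    """Randomly insert/delete/replace/swap words with probability p per position."""
    rng = rng or random.Random(0)
    out = []
    i = 0
    while i < len(words):
        if rng.random() < p:
            op = rng.choice(["insert", "delete", "replace", "swap"])
            if op == "insert":
                out.append(rng.choice(vocab))  # insert a random word before this one
                out.append(words[i])
            elif op == "delete":
                pass                           # drop the word entirely
            elif op == "replace":
                out.append(rng.choice(vocab))  # substitute a random word
            elif op == "swap" and i + 1 < len(words):
                out.append(words[i + 1])       # exchange with the next word
                out.append(words[i])
                i += 1
            else:
                out.append(words[i])           # swap at end of sequence: keep as-is
        else:
            out.append(words[i])
        i += 1
    return out
```

Note that inserts and deletes change the sequence length, which is exactly what a Gaussian-noise-in-latent-space scheme can't model directly.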

[–] mushytaco@alien.top 1 points 1 year ago

This has been explored a little for NLP and even audio tasks (using acoustic tokens)!

https://aclanthology.org/2022.findings-acl.25/ and https://arxiv.org/abs/2307.04686 both come to mind

Feel like diffusion and iterative mask/predict are pretty conceptually similar. My hunch is that diffusion might have a higher ceiling by being able to precisely traverse a continuous space, but operating on discrete tokens could probably converge to something semantically valid with fewer iterations.

Also, BERT is trained with MLM, which technically is predicting the original text from a "noisy" version, but the noise is only introduced via masking, and it's limited to a single forward pass, not iterative!
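For contrast, an iterative mask/predict decode (in the spirit of the Mask-Predict line of work) can be roughly sketched like this, with `model` as a stand-in that returns a (token, confidence) pair per position:

```python
def mask_predict(model, length, rounds=4):
    """Toy iterative mask/predict: start fully masked, fill every position,
    then re-mask the least-confident predictions each round."""
    tokens = ["<mask>"] * length
    for t in range(rounds):
        preds = model(tokens)                  # [(token, confidence), ...] per slot
        tokens = [tok for tok, _ in preds]
        # linearly anneal how many tokens get re-masked, down to zero
        n_mask = int(length * (rounds - 1 - t) / rounds)
        if n_mask == 0:
            break
        # re-mask the n_mask least-confident positions for the next round
        worst = sorted(range(length), key=lambda i: preds[i][1])[:n_mask]
        for i in worst:
            tokens[i] = "<mask>"
    return tokens
```

Unlike single-pass MLM, every position is revisited several times, which is what makes it comparable to a discrete-space diffusion process.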