this post was submitted on 30 Oct 2023

LocalLLaMA

https://arxiv.org/abs/2310.17680

Ok, technically a tiny language model for now:

Imagine a developer who can only change their last line of code, how often would they have to start writing a function from scratch before it is correct? Auto-regressive models for code generation from natural language have a similar limitation: they do not easily allow reconsidering earlier tokens generated. We introduce CodeFusion, a pre-trained diffusion code generation model that addresses this limitation by iteratively denoising a complete program conditioned on the encoded natural language. We evaluate CodeFusion on the task of natural language to code generation for Bash, Python, and Microsoft Excel conditional formatting (CF) rules. Experiments show that CodeFusion (75M parameters) performs on par with state-of-the-art auto-regressive systems (350M-175B parameters) in top-1 accuracy and outperforms them in top-3 and top-5 accuracy due to its better balance in diversity versus quality.

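To make that concrete, here's a toy sketch of the idea, not CodeFusion's actual architecture: the tiny modules below are just hypothetical stand-ins for the paper's NL encoder, diffusion denoiser, and token projection head.

```python
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, DIM, STEPS = 1000, 64, 128, 10

nl_encoder = nn.Embedding(VOCAB, DIM)                                   # stand-in NL encoder
denoiser = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)   # stand-in denoiser
to_tokens = nn.Linear(DIM, VOCAB)                                       # stand-in projection to code tokens

def generate(nl_token_ids: torch.Tensor) -> torch.Tensor:
    cond = nl_encoder(nl_token_ids).mean(dim=1, keepdim=True)  # encode the natural-language query
    x = torch.randn(1, SEQ_LEN, DIM)                           # start from noise over the WHOLE program
    for _ in range(STEPS):
        # every step may revise any position, unlike left-to-right
        # decoding, which can never reconsider earlier tokens
        x = denoiser(x + cond)                                  # condition by adding the NL summary
    return to_tokens(x).argmax(dim=-1)                          # read out the "denoised" program tokens

print(generate(torch.randint(0, VOCAB, (1, 16))).shape)        # torch.Size([1, 64])
```

The point of the sketch is only the control flow: the whole sequence is refined in parallel over a fixed number of steps instead of being emitted one token at a time.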
And it's only for code. And it seems to be much slower. But it looks extremely interesting as a proof of concept.

I think that instead of running a lot of "denoising" steps to generate text from gibberish, a dual-model system might be the best of both worlds: take a typical autoregressive model's output and then run a few "denoising" steps over it to look for errors and inconsistencies. That would avoid the usual ways of increasing output quality, like progressive refinement, which require rewriting the entire text token by token several times... Something like the rough sketch below.
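
Here `draft_model` and `refiner` are hypothetical placeholders, not real library objects: the first would be an ordinary autoregressive LLM, the second a small diffusion-style "repair" model.

```python
def hybrid_generate(prompt, draft_model, refiner, refine_steps=3):
    draft = draft_model.generate(prompt)        # fast left-to-right first pass
    for _ in range(refine_steps):
        # each pass may edit any part of the draft, fixing errors and
        # inconsistencies without rewriting the whole text token by token
        draft = refiner.denoise(draft, condition=prompt)
    return draft
```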


I love this approach! Feels like a diffusion model would work perfectly with code! Now I'm praying this model will play nicely with C#!