LocalLLaMA · submitted 23 Nov 2023

Code: https://github.com/hao-ai-lab/LookaheadDecoding

Blog post: https://lmsys.org/blog/2023-11-21-lookahead-decoding/

Description:

We introduce lookahead decoding, a new, exact, and parallel decoding algorithm to accelerate LLM inference. Lookahead decoding breaks the sequential dependency in autoregressive decoding by concurrently extracting and verifying n-grams directly with the LLM, using the Jacobi iteration method. Lookahead decoding works without a draft model or a data store. It linearly decreases the number of decoding steps in direct proportion to the log(FLOPs) used per decoding step. Below is a demo of lookahead decoding accelerating LLaMA-2-Chat 7B generation:

https://i.redd.it/c3q2lr71z22c1.gif
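
To make the guess-and-verify mechanism concrete, here is a minimal, self-contained Python sketch. Everything in it is hypothetical: the "model" is a toy map t → (t + 1) mod 10 standing in for a parallel LLM forward pass, and the n-gram pool is filled from already-confirmed output rather than from the dedicated lookahead branch the blog post describes. It illustrates the control flow, not the authors' implementation.

```python
# Toy sketch of the guess-and-verify loop behind lookahead decoding.
# Hypothetical throughout: toy_model_step stands in for one parallel
# LLM forward pass, and the n-gram pool is built from confirmed output.

def toy_model_step(tokens):
    """One 'forward pass': the model's next-token prediction at every
    input position at once (toy model: next token is (t + 1) mod 10)."""
    return [(t + 1) % 10 for t in tokens]

def lookahead_decode(prompt, n_new, ngram=4):
    seq = list(prompt)
    pool = {}   # n-gram cache: token -> previously seen continuation
    steps = 0
    while len(seq) - len(prompt) < n_new:
        steps += 1
        # Propose a continuation: a cached n-gram starting at the last
        # confirmed token if one exists, otherwise dummy guesses.
        guesses = pool.get(seq[-1], [0] * (ngram - 1))
        # One parallel pass over the last confirmed token plus the guesses.
        preds = toy_model_step([seq[-1]] + guesses)
        # Verify: preds[0] is the exact next token; each later prediction
        # is exact only while every earlier guess matched.
        accepted = 0
        for guess, pred in zip(guesses, preds):
            if guess != pred:
                break
            accepted += 1
        # Keep the verified guesses plus the first corrected token, so at
        # least one exact token is emitted per step -- never slower than
        # plain autoregressive decoding, and always the same output.
        seq.extend(guesses[:accepted] + [preds[accepted]])
        # Refresh the n-gram cache from the confirmed sequence.
        for i in range(len(seq) - ngram + 1):
            pool[seq[i]] = seq[i + 1 : i + ngram]
    return seq[len(prompt) : len(prompt) + n_new], steps

out, steps = lookahead_decode(prompt=[7], n_new=20)
print(out)    # identical to one-token-at-a-time greedy decoding
print(steps)  # fewer than 20 steps once the cyclic pattern repeats
```

Because every guessed token is checked against the model's own prediction before it is kept, the output is identical to ordinary greedy decoding; once the toy sequence starts repeating, cached n-grams verify and several tokens are accepted per step.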

[–] Revolutionalredstone@alien.top 1 points 10 months ago (2 children)

what does the white and blue text mean in the video?

[–] _Lee_B_@alien.top 1 points 10 months ago

The blue text is what this method sped up (I think by parallelizing the inference, similar to CPU pipelining), and so it's what made the overall text come out faster.

[–] knownboyofno@alien.top 1 points 10 months ago

White is the normal generation, while blue is the lookahead.