this post was submitted on 23 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


Code: https://github.com/hao-ai-lab/LookaheadDecoding

Blog post: https://lmsys.org/blog/2023-11-21-lookahead-decoding/

Description:

We introduce lookahead decoding, a new, exact, parallel decoding algorithm that accelerates LLM inference. Lookahead decoding breaks the sequential dependency in autoregressive decoding by using the Jacobi iteration method to extract and verify n-grams concurrently, directly with the LLM, so it needs neither a draft model nor a data store. The number of decoding steps it takes decreases linearly with the log(FLOPs) invested per decoding step. Below is a demo of lookahead decoding accelerating LLaMA-2-Chat 7B generation:

https://i.redd.it/c3q2lr71z22c1.gif

top 3 comments
[–] Revolutionalredstone@alien.top 1 points 2 years ago (2 children)

What does the white and blue text mean in the video?

[–] knownboyofno@alien.top 1 points 2 years ago

White is the normal generation, while the blue is the lookahead.

[–] _Lee_B_@alien.top 1 points 2 years ago

The blue text is the part whose generation this method sped up (I think by parallelizing the inference, similar to CPU pipelining), and is thus what contributed to the overall text being produced more quickly.