These answers seem weird to me. Am I misunderstanding something? Here's the answer that seems obvious to me:
You need two different matrices because you need an attention coefficient for every single pair of vectors.
If there are n tokens, then the n-th token needs n-1 different attention coefficients (one for each earlier token it attends to). The (n-1)-th token needs n-2 coefficients, and so on, down to the 2nd token, which needs only one coefficient, and the 1st token, which needs zero (it has nothing to attend to).
That's 1 + 2 + ... + (n-1) = n(n-1)/2 ≈ n^2/2 coefficients in total. If you compute key and query vectors instead, you only need 2n vectors (one key and one query for each of the n tokens). If the key/query vectors are d-dimensional, that's 2dn numbers, which is still smaller than ~n^2 whenever the context size n is more than twice the key/query dimension d.
So using separate vectors is more efficient and more scalable.
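To make the counting concrete, here's a minimal NumPy sketch. The sizes n, d_model, and d are made up for illustration; the point is just that 2n projected vectors (2dn numbers) are enough to produce all ~n^2 pairwise scores:

```python
import numpy as np

# Illustrative sizes (assumed, not from the original question):
# n tokens, embedding dim d_model, key/query dim d, with n > 2*d.
n, d_model, d = 1024, 512, 64

rng = np.random.default_rng(0)
X = rng.normal(size=(n, d_model))    # token embeddings
W_q = rng.normal(size=(d_model, d))  # query projection matrix
W_k = rng.normal(size=(d_model, d))  # key projection matrix

Q = X @ W_q      # (n, d): one query vector per token
K = X @ W_k      # (n, d): one key vector per token
scores = Q @ K.T # (n, n): one coefficient for every pair of tokens
# (with the causal masking described above, only the ~n^2/2
#  below-diagonal entries would actually be used)

print(Q.size + K.size)  # 2*d*n = 131072 numbers computed
print(scores.size)      # n*n   = 1048576 coefficients obtained
```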
The other answers in this thread seem different, which surprises me, since this answer feels very straightforward. If I'm missing something, I'd love an explanation.