this post was submitted on 26 Nov 2023

Machine Learning

 

The only time the query and key matrices are used is to compute the attention scores, i.e. $v_i^T W_q^T W_k v_j$. But all that is actually used there is the product $W_q^T W_k$. Why not just replace $W_q^T W_k$ with a single matrix $W_{qk}$ and learn that product directly, instead of learning the two matrices themselves? How does having two matrices help over having one? And if it does help, why isn't the same thing done for the weight matrices between layers of neurons?

ChatGPT tells me the reason is that it allows the model to learn different representations for the query and the key. But since they are just dotted together, it seems to me that you could use the original embedding as the query with no loss of generality.
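
To make that concrete, here is a minimal numpy sketch (my own illustration with made-up shapes and names, not something from the thread) checking that a single combined matrix reproduces exactly the same attention scores as the separate query and key projections:

```python
import numpy as np

# Toy sizes, chosen only for illustration.
d_model, d_head = 8, 4
rng = np.random.default_rng(0)

W_q = rng.standard_normal((d_head, d_model))   # query projection
W_k = rng.standard_normal((d_head, d_model))   # key projection
W_qk = W_q.T @ W_k                             # single combined (d_model x d_model) matrix

v_i = rng.standard_normal(d_model)             # embedding of token i
v_j = rng.standard_normal(d_model)             # embedding of token j

score_two = (W_q @ v_i) @ (W_k @ v_j)          # q_i . k_j with separate matrices
score_one = v_i @ W_qk @ v_j                   # v_i^T (W_q^T W_k) v_j with one matrix

assert np.allclose(score_two, score_one)       # identical up to floating point
```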

[–] InterstitialLove@alien.top 1 points 9 months ago

These answers seem weird to me. Am I misunderstanding? Here's what seems to me like the obvious answer:

You need two different matrices because you need an attention coefficient for every single pair of vectors.

If there are n tokens, then for the nth token you need n-1 different attention coefficients (one for each token it attends to). For the (n-1)th token you need n-2 coefficients, and so on, down to the 2nd token, which needs only one coefficient, and the 1st token, which needs zero (there is nothing for it to attend to).

That's ~n^2 coefficients in total. If you compute key and query vectors instead, you only need 2n vectors (one key and one query for each of the n tokens). If the key/query vectors are d-dimensional, that's 2dn numbers, which is smaller than n^2 whenever the context size n exceeds 2d, i.e. roughly whenever the context is bigger than the key/query dimension.

So using separate vectors is more efficient and more scalable.
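
To put rough numbers on that, here's an illustrative numpy sketch (my own, with made-up sizes, not from the thread) that computes the same n x n score matrix both ways. With separate projections you only ever form 2n small d-dimensional keys and queries; with a single combined matrix, every pairwise score drags the full d_model-dimensional bilinear form along:

```python
import numpy as np

# Made-up sizes for illustration: d_head << d_model < n.
n, d_model, d_head = 512, 256, 32
rng = np.random.default_rng(0)

X = rng.standard_normal((n, d_model))        # one embedding per token
W_q = rng.standard_normal((d_head, d_model))
W_k = rng.standard_normal((d_head, d_model))

# Route 1: project every token once into small query/key vectors (2n vectors of
# size d_head), then take pairwise dot products.
# Roughly 2*n*d_model*d_head + n^2*d_head multiply-adds.
Q = X @ W_q.T                                # (n, d_head)
K = X @ W_k.T                                # (n, d_head)
scores_two = Q @ K.T                         # (n, n)

# Route 2: use one combined (d_model x d_model) matrix and evaluate the bilinear
# form for every pair.
# Roughly n*d_model^2 + n^2*d_model multiply-adds.
W_qk = W_q.T @ W_k
scores_one = X @ W_qk @ X.T                  # (n, n)

assert np.allclose(scores_two, scores_one)   # same scores, different cost
```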

The other answers in this thread seem different, which surprises me since this answer feels very straightforward. If I'm missing something, I'd love an explanation.