The only time the query and key matrices are used is to compute the attention scores, $v_i^T W_q^T W_k v_j$. But all that actually matters there is the product $W_q^T W_k$. Why not replace $W_q^T W_k$ with a single matrix $W_{qk}$ and learn that product directly, instead of the two factors? How does it help to have two matrices instead of one? And if it does help, why isn't the same thing done for the matrices between neuron layers?

ChatGPT tells me the reason is that it lets the model learn different representations for the query and the key. But since they are just dotted together, it seems to me that you could use the original embedding as the query with no loss of generality.
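For what it's worth, here's a small NumPy sketch of the identity I'm relying on (shapes and names are just illustrative): the scores computed from separate $W_q$ and $W_k$ match the scores from the single product matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, n_tokens = 16, 4, 5          # illustrative sizes

X = rng.normal(size=(n_tokens, d_model))      # token embeddings (one per row)
W_q = rng.normal(size=(d_head, d_model))      # query projection
W_k = rng.normal(size=(d_head, d_model))      # key projection

# Standard form: project to queries/keys, then take dot products.
Q = X @ W_q.T                                 # (n_tokens, d_head)
K = X @ W_k.T                                 # (n_tokens, d_head)
scores_two_mats = Q @ K.T                     # entry (i, j) = q_i . k_j

# Collapsed form: a single (d_model, d_model) matrix W_qk = W_q^T W_k.
W_qk = W_q.T @ W_k
scores_one_mat = X @ W_qk @ X.T               # entry (i, j) = x_i^T W_qk x_j

print(np.allclose(scores_two_mats, scores_one_mat))   # True
```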

kei147@alien.top · 1 point · 9 months ago

Something to add to the other great answers here: you can say something similar about the head-specific matrices $W_V$ and $W_O$, which also only ever act together. In fact, Anthropic recommends thinking of $W_O W_V$ and $W_Q^T W_K$ as the basic primitives in their transformer interpretability framework: https://transformer-circuits.pub/2021/framework/index.html
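To make the parallel concrete, here is a small NumPy sketch (single head, illustrative shapes, any fixed attention pattern) showing that projecting to values with $W_V$ and then back out with $W_O$ is the same linear map as applying the combined $W_O W_V$ once:

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_head, n_tokens = 16, 4, 5          # illustrative sizes

X = rng.normal(size=(n_tokens, d_model))      # token embeddings
A = rng.random(size=(n_tokens, n_tokens))     # attention pattern (any weights)
W_V = rng.normal(size=(d_head, d_model))      # value projection for one head
W_O = rng.normal(size=(d_model, d_head))      # output projection for that head

# Two-step form: project to values, mix with attention, project back out.
head_out_two = A @ (X @ W_V.T) @ W_O.T

# Collapsed form: the single (d_model, d_model) matrix W_OV = W_O W_V.
W_OV = W_O @ W_V
head_out_one = A @ X @ W_OV.T

print(np.allclose(head_out_two, head_out_one))   # True
```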