this post was submitted on 28 Oct 2023

Machine Learning

HighFreqAsuka@alien.top · 1 year ago

LoRA fine-tuning is an incredibly simple idea. For each weight matrix W you want to fine-tune, introduce a low-rank update ΔW = BA, where B and A have inner dimension r << d, and compute (W + ΔW)x. Freeze all pretrained parameters and update only B and A. B is initialized to zero so that the initial model is identical to the pretrained model. After training, you can merge W' = W + ΔW into a single matrix so inference adds no extra latency.
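A minimal numpy sketch of the idea (the dimensions and names here are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                      # model dim and LoRA rank, r << d
x = rng.normal(size=(d,))

# Pretrained weight: frozen during fine-tuning.
W = rng.normal(size=(d, d))

# LoRA factors: A starts random, B starts at zero, so initially
# ΔW = B @ A = 0 and the model matches the pretrained one.
A = rng.normal(size=(r, d))
B = np.zeros((d, r))

def forward(x):
    # (W + BA)x computed without ever materializing ΔW
    return W @ x + B @ (A @ x)

# At init, the output equals the pretrained model's output.
assert np.allclose(forward(x), W @ x)

# After training B and A (simulated here with random values),
# merge the update back into a single matrix to avoid extra
# inference latency.
B = rng.normal(size=(d, r))
W_merged = W + B @ A
assert np.allclose(W_merged @ x, forward(x))
```

During training only B and A receive gradients, so the number of trainable parameters is 2·d·r instead of d².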

Saved you a click.

residentmouse@alien.top · 1 year ago

Well, now I feel almost obligated to click. Is the "deep dive" part of the title completely misleading, or is the post really just a LoRA explanation?

FallMindless3563@alien.top · 1 year ago

I’d like to think we dove deep, but let me know!