Best practical method of knowledge distillation available?

TL;DR: From what I've seen online, knowledge distillation generally performs worse than training a model from scratch on the same data. Is there a KD method where this doesn't happen, so I get close to the performance of a model trained from scratch?

So I've recently been interested in making DL models more useful for everyday tasks, and, given their size, in running them on consumer devices without much loss in quality. But right now, from what I've seen, this just feels like trying to fit an elephant into his pants.

Basically it tears every time I try. I found quantization to be cool, but honestly I need to reduce the size even more, so I looked into knowledge distillation. From what I've seen, though, it's fantastic in theory but sucks in practice, and is probably worse than just straight up training the model from scratch on the dataset.

So is there a proven, widely used method of knowledge distillation that I can use? One that will get me at least very close to the accuracy of a model trained from scratch on the dataset?
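For context, the vanilla KD setup I've been trying looks roughly like this, a minimal sketch of the classic soft-target (Hinton-style) distillation loss for a classification setting; the temperature and mixing weight below are illustrative placeholders, not values anyone in this thread recommended:

```python
# Minimal sketch of classic soft-target knowledge distillation.
# Temperature T and mixing weight alpha are placeholders.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: student matches the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable to the hard-label term
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```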

[–] NoIdeaAbaout@alien.top 1 points 1 year ago (3 children)

Have you seen this article by Google?

https://arxiv.org/abs/2305.02301

https://blog.research.google/2023/09/distilling-step-by-step-outperforming.html

they claim they were able to distill PaLM into T5 for a reasoning task (a roughly 2000x difference in size), and the distilled T5 outperformed PaLM

code is here:

https://github.com/google-research/distilling-step-by-step
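As I understand it, the training objective is a simple multi-task loss: the small student learns to output both the label and the rationale generated by the large teacher, via two task prefixes. A rough sketch below; the field names, prefix strings and the weight `lam` are my assumptions, the actual implementation is in the repo above:

```python
# Rough sketch of the "distilling step-by-step" multi-task objective.
# Field names, prefix strings and lam are assumptions; see the linked repo.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def step_by_step_loss(example, lam=0.5):
    # example: {"input": ..., "label": ..., "rationale": ...}; the rationale is
    # extracted from the large teacher (e.g. PaLM) via chain-of-thought prompting.
    def seq2seq_loss(prefix, target):
        enc = tokenizer(prefix + example["input"], return_tensors="pt", truncation=True)
        tgt = tokenizer(target, return_tensors="pt", truncation=True).input_ids
        return model(input_ids=enc.input_ids,
                     attention_mask=enc.attention_mask,
                     labels=tgt).loss

    # Multi-task objective: predict the label AND generate the teacher's rationale.
    label_loss = seq2seq_loss("[label] ", example["label"])
    rationale_loss = seq2seq_loss("[rationale] ", example["rationale"])
    return label_loss + lam * rationale_loss
```

At inference time only the label prefix is used, so the rationale generation adds no extra cost on the deployed model.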

[–] Xanta_Kross@alien.top 1 points 1 year ago (2 children)

They seem to have distilled knowledge from a larger, general model into a smaller, specialised model and outperformed the larger model on a single task. Thanks for the paper. I wonder if I can specialise it to a subset of the original tasks and then try to outperform the original model.

[–] NoIdeaAbaout@alien.top 1 points 1 year ago (1 children)

I think you can try a similar approach for another task; to me, the approach can be generalized to different tasks.

[–] Xanta_Kross@alien.top 1 points 1 year ago

> to me, the approach can be generalized to different tasks

Can you elaborate?