I'm a machine learning engineer and researcher. I got fed up with how difficult it is to understand why neural networks behave the way they do, so I wrote a library to help with it.

Comgra (computation graph analysis) is a library you can use with PyTorch to extract all the tensor data you care about and visualize it in a browser.

This allows for a much more detailed analysis of what is happening than the usual approach of using TensorBoard. You can investigate tensors as training proceeds, drill down into individual neurons, inspect individual datasets that are of special interest to you, track gradients, compare statistics between different training runs, and more.
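For readers curious what this kind of data capture involves under the hood, here is a minimal sketch in plain PyTorch using forward and gradient hooks. This is a generic illustration of the technique, not comgra's actual API; all names in it are my own.

```python
import torch
import torch.nn as nn

# Toy model standing in for a real network.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

records = {}  # {module_name: {"mean": ..., "std": ..., "grad_norm": ...}}

def make_hook(name):
    def hook(module, inputs, output):
        # Record summary statistics of this module's activation.
        records[name] = {"mean": output.mean().item(), "std": output.std().item()}
        # Also capture the gradient flowing back through this activation.
        output.register_hook(
            lambda grad: records[name].update(grad_norm=grad.norm().item())
        )
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

x = torch.randn(4, 8)
loss = model(x).sum()
loss.backward()
print(records)  # per-layer activation stats plus gradient norms
```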

This tool has saved me a ton of time in my research by letting me check my hypotheses much more quickly and by helping me understand how the different parts of my network really interact.

I hope this tool can save other people as much time as it has saved me. I'm also open to suggestions on how to improve it further: since I'm already gathering and visualizing a lot of network information, adding more automated analysis would not be much extra work.

[–] ilos-vigil@alien.top 1 points 11 months ago (1 children)

This is a great tool. Have you checked how much overhead your library adds?

[–] Smart-Emu5581@alien.top 1 points 11 months ago

Yes. The overhead depends on how often you make it store data and in how much detail; both of these are configurable. I find the overhead to be negligible in practice.
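To give a feel for how such knobs trade detail against overhead, here is a hedged sketch. The constant names and the summary-vs-full-tensor split are hypothetical illustrations of the idea, not comgra's real configuration options.

```python
import torch

# Hypothetical knobs, for illustration only (not comgra's actual config names):
RECORD_EVERY_N_STEPS = 100    # lower -> more data stored, more overhead
RECORD_FULL_TENSORS = False   # summaries are much cheaper than full tensors

snapshots = {}

def maybe_record(step, name, tensor):
    """Store either full tensors or cheap summary stats, every N steps."""
    if step % RECORD_EVERY_N_STEPS != 0:
        return  # skip: no recording cost on most steps
    if RECORD_FULL_TENSORS:
        snapshots[(step, name)] = tensor.detach().cpu().clone()
    else:
        snapshots[(step, name)] = (tensor.mean().item(), tensor.std().item())

# Example: only steps 0 and 100 incur any recording cost.
for step in (0, 50, 100):
    maybe_record(step, "layer1.out", torch.randn(32, 64))

print(sorted(snapshots))  # [(0, 'layer1.out'), (100, 'layer1.out')]
```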

[–] Smallpaul@alien.top 1 points 11 months ago (1 children)

Is there a subreddit for Mechanistic Interpretability? Should there be?

[–] DigThatData@alien.top 1 points 11 months ago (1 children)

this isn't mechanistic interpretability, it's debugging.

[–] Smart-Emu5581@alien.top 1 points 11 months ago (1 children)

> Mechanistic Interpretability

It's primarily intended for debugging, but it can also help with mechanistic interpretability. Being able to see the internals of your network for any input and at different stages of training can help a lot with understanding what's going on.

[–] currentscurrents@alien.top 1 points 11 months ago

IMO interpretability and debugging are inherently related. The more you know about how the network works, the easier it will be to debug it.


[–] Xorlium@alien.top 1 points 11 months ago

Looks very cool. Congrats.