
We LOVE the Netron library that Lutz Roeder created: https://github.com/lutzroeder/netron

We wanted to explore what it would feel like to render Netron visualizations for ML models hosted on GitHub, so we built a Netron integration: https://about.xethub.com/blog/visualizing-ml-models-github-netron

Netron focuses on viewing one model file at a time, but we also incorporated before-and-after model visualizations in pull requests:

https://assets-global.website-files.com/6474aea6101c81b742144dd2/65689c07b7f2b060a925097c_github_pr2.png
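
For anyone who wants to poke at a model locally before pushing, the `netron` pip package does the same rendering on your machine (a minimal sketch; `model.onnx` is just a placeholder path):

```python
# Minimal local sketch using the netron pip package; "model.onnx" is a
# placeholder path, not one of the models from the post.
import netron

# Starts a local web server and opens the model's graph in your browser.
netron.start("model.onnx")
```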

[–] semicausal@alien.top 1 points 9 months ago (1 children)

In my experience, the lower you go in quantization, the model:

- hallucinates more (one time I asked Llama2 what made the sky blue and it freaked out and generated thousands of similar questions line by line)

- is more likely to give you an inaccurate response when it doesn't hallucinate

- is significantly more unreliable and non-deterministic (seriously, the same prompt can produce different answers!)

At the bottom of this post, I compare the 2-bit and 8-bit extremes of the Code Llama Instruct model on the same prompt, and you can see how it played out: https://about.xethub.com/blog/comparing-code-llama-models-locally-macbook
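
If you want to reproduce that kind of side-by-side yourself, here's a rough sketch using llama-cpp-python (my assumption for the runtime, the post doesn't dictate one; the GGUF filenames are placeholders for 2-bit and 8-bit quantizations of Code Llama Instruct):

```python
# Rough sketch: run the same prompt against a 2-bit and an 8-bit quantization
# and eyeball the outputs. The model paths are placeholders.
from llama_cpp import Llama

prompt = "Write a Python function that checks whether a string is a palindrome."

for path in ["codellama-7b-instruct.Q2_K.gguf", "codellama-7b-instruct.Q8_0.gguf"]:
    llm = Llama(model_path=path, n_ctx=2048, seed=42)  # fixed seed to reduce run-to-run variance
    # Run the prompt twice per model to see how stable the answers are.
    for run in range(2):
        out = llm(prompt, max_tokens=256, temperature=0.2)
        print(f"--- {path} (run {run}) ---")
        print(out["choices"][0]["text"][:300])
```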

[–] semicausal@alien.top 1 points 10 months ago

Good questions:

- DVC: no new commands to learn (we extend Git) and you don't need S3.

- Git LFS: we inject useful views of your large files into GitHub itself (in commits and PRs), unlike Git LFS (e.g. check this model diff: https://youtu.be/lAyymscJUvI?t=87); we scale to much larger sizes (100 terabytes); and we deduplicate better (Git LFS treats a one-line change to a large CSV file as an entirely new file, while our technique captures just the differences)
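
To make the dedup point concrete, here's a toy sketch of chunk-level deduplication. It is not our actual algorithm (we use content-defined chunking, which also survives insertions that shift data); it just shows why a small edit doesn't have to re-store the whole file:

```python
# Toy illustration of chunk-level dedup with fixed-size blocks (not XetHub's
# actual content-defined chunking): after a same-length one-line edit to a
# large CSV, only one block hash changes, so only that block needs storing.
import hashlib

CHUNK = 64 * 1024  # 64 KiB blocks for this toy example

def chunk_hashes(data: bytes) -> list[str]:
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest() for i in range(0, len(data), CHUNK)]

original = b"\n".join(b"row-%d,value" % i for i in range(500_000))    # ~8 MB fake CSV
edited = original.replace(b"row-12345,value", b"row-12345,VALUE", 1)  # one-line edit

known = set(chunk_hashes(original))
new_blocks = [h for h in chunk_hashes(edited) if h not in known]

# Git LFS would store the edited file in full; chunk-level dedup only stores
# the block(s) that actually changed.
print(f"{len(new_blocks)} new block(s) out of {len(chunk_hashes(edited))}")
```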

 

Hey r/MachineLearning!

Last year, u/rajatarya showcased how we scaled Git to handle large datasets. One piece of feedback we kept getting was that people didn't want to move their source code over to XetHub.

So we built a GitHub app & integration that lets you continue storing code in GitHub while XetHub handles the large datasets & models.

https://about.xethub.com/blog/xetdata-scale-github-repos-100-tb

We've enjoyed using it to host open-source LLMs like Llama2 and Mistral with our fine-tuning code side by side.

The whole thing is in beta so we're eager for any feedback you have to offer :)

[–] semicausal@alien.top 1 points 10 months ago

"Bad for the environment" is a bit too vague IMO to take meaningful action and drive change. Some products use machine learning to detect illegal logging or capture useful environmental data. In those cases, ML is being used to HELP the environment.

So I would zoom in more on the specific issues and externalities you want to resolve.

One simple shortcut is to electrify your entire setup and then ensure that only renewable energy is providing your electricity.

[–] semicausal@alien.top 1 points 10 months ago

In my experience, it honestly depends on what you're trying to have the models learn and the task at hand.

- Spend lots of time cleaning up your data and doing feature engineering. Regulated industries like insurance spend significantly more time on feature engineering than on tuning fancy models, for example.

- I would recommend trying regression and random forest models first, or even xgboost
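
As a concrete starting point, here's a minimal sketch of that baseline-first approach with scikit-learn (the CSV path and the `target` column name are placeholders for your own cleaned, feature-engineered data):

```python
# Minimal baseline comparison on a hypothetical tabular dataset with a numeric
# target; "training_data.csv" and the "target" column are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

df = pd.read_csv("training_data.csv")
X, y = df.drop(columns=["target"]), df["target"]

baselines = {
    "ridge_regression": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# xgboost is optional; skip it if the package isn't installed.
try:
    from xgboost import XGBRegressor
    baselines["xgboost"] = XGBRegressor(n_estimators=200, max_depth=6, random_state=0)
except ImportError:
    pass

for name, model in baselines.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```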