this post was submitted on 17 Nov 2023

Machine Learning
Hi everyone, quick question. I've been in the deep learning world for about 6 months, and since then I've been constantly studying and developing models for different datasets. Here's the thing: I have a MacBook Pro with an M1 Pro, which ngl runs kinda slow. Most of the time it handles simple models at a relatively normal pace (for example, a DNN for the MNIST dataset using Keras layers), but more complex ones are slow. When I've built bigger models, for example moving to the CIFAR-10 dataset, my computer crawls through each epoch: about 8 minutes per epoch for a CNN with 54,881,674 parameters (14 layers total) training on CIFAR-10. The point is, do you guys think I'm being too dramatic? Or is it a good idea for me to upgrade to a better computer so I can train more complex models faster? If an upgrade does make sense, what sort of computer do you recommend? Windows, Mac, GPU recommendations, or whatever feedback you can give me.
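As a sanity check on where a parameter count like that comes from, here is a rough sketch of the arithmetic Keras-style layers use to count trainable parameters. The example layer shapes are illustrative assumptions, not the poster's actual 14-layer architecture:

```python
# Rough parameter-count arithmetic for common Keras-style layers.
# These formulas match how such frameworks count trainable parameters
# (weights + biases); the example shapes below are illustrative only.

def conv2d_params(kh, kw, in_ch, out_ch):
    # Each of the out_ch filters has kh*kw*in_ch weights plus one bias.
    return (kh * kw * in_ch + 1) * out_ch

def dense_params(n_in, n_out):
    # Fully connected: one weight per input per unit, plus a bias per unit.
    return (n_in + 1) * n_out

# e.g. a first conv layer on CIFAR-10's 32x32x3 images:
print(conv2d_params(3, 3, 3, 32))   # 896
# a large dense head is usually where tens of millions of parameters hide:
print(dense_params(8192, 4096))     # 33558528
```

Counts in the tens of millions usually come from large dense layers after flattening, so trimming those is often the cheapest way to speed up an epoch before buying hardware.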

top 2 comments
[–] choHZ@alien.top 1 points 11 months ago

Just get Colab Pro+ already. For hobbyist exploration, there really is little to no need to run anything locally when you can just spin up a notebook with cloud GPU access, especially when your local environment isn't even x86 + Nvidia.

Colab Pro+ costs $50/month and grants you 500 compute units. The 40G A100 rate is something like 13-ish units/hr, and the V100 is around 6-ish; there's also the option of a T4 at even lower cost if you just want CUDA. Set up a local editor → Google Drive → Colab workflow and you're good.

[–] Alittlebitanalytical@alien.top 1 points 11 months ago

If you have a spinning HD doing the work, swap it out for an M.2 drive or even a SATA SSD. Look into using a RAM drive, so as much of the computation as possible uses RAM instead of hitting the HD. Or use VRAM, if you can afford a good video card or access to one. Or use a server (or a collection of them) to do the heavy lifting. Good luck