this post was submitted on 08 Nov 2023

Machine Learning

Recently, I've been working on some projects for fun, trying out some things I hadn't worked with before, such as profiling.

But after profiling my code, I found that my average GPU utilization is around 50%. The code frequently stalls for a few hundred milliseconds on the dataloader process. I've tried a few things in the dataloader: increasing and decreasing the number of workers, and setting pin_memory to true or false, but none of it seems to really matter. I have an NVMe drive, so the disk is not the problem either. I've concluded that the bottleneck must be the CPU.
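For reference, the knobs in question look roughly like this in PyTorch. This is a minimal sketch with a random stand-in dataset (the real code would use an image dataset); `persistent_workers` and `prefetch_factor` are two extra options, beyond the ones tried above, that sometimes help keep workers busy:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the real image dataset: 256 random "images" with labels.
data = TensorDataset(torch.randn(256, 3, 64, 64), torch.randint(0, 10, (256,)))

loader = DataLoader(
    data,
    batch_size=64,
    num_workers=2,            # tune toward the number of physical cores
    pin_memory=True,          # speeds up host-to-GPU copies
    persistent_workers=True,  # keep workers alive between epochs
    prefetch_factor=4,        # batches each worker prepares ahead of time
)
```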

Now, I've read that pre-processing the data might help, so that the dataloader doesn't have to decode the images, for example, but I don't really know how to go about it. I have around 2TB of NVMe storage with a couple of datasets on the disk (ImageNet and iNaturalist are the two biggest ones), so I don't suppose I'll be able to store them uncompressed.

Is there anything I can do to lighten the load on the CPU during training so that I can take advantage of the 50% of the GPU that I'm not using at the moment?

AtharvBhat@alien.top 1 points 1 year ago

I've dealt with similar issues in my own projects.

A couple of pointers :-

- Use image formats that are fast to decode, for example BMP (you can try converting all your images to BMP before you start training). This will increase their size on disk but should reduce the CPU load.
- If you are doing any complex preprocessing on large images in your dataset class, try preprocessing the images once, storing the results to disk, and loading those directly.

These are just some general suggestions. It'd be more helpful if we knew more about your task so that we can offer more directed suggestions :)