Hey everyone, I’m fairly new to the field and I’m working on a regression model on a huge dataset. We use PySpark for it, since the full size is around 150,000 million (150 billion) rows.

Given this size, every little step of the process is painfully slow: every count operation, every display, etc.

I have of course tried sampling the dataset down to a fraction of the original while I work on developing the model (like df = df.sample(0.00001)), but it doesn’t really make much of a difference in runtime. I tried sampling it so that the reduced dataset would only be about 1,000 rows, and a display operation still took 8 minutes to complete.
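For reference, this is roughly what that attempt looks like (simplified; the DataFrame name, fraction and seed are just placeholders from my notebook):

```python
# Simplified version of what I'm running in the Databricks notebook.
# sample() is a lazy transformation, so actions like display() or count()
# still end up scanning the full source table each time.
sampled_df = df.sample(fraction=0.00001, seed=42)
display(sampled_df)        # Databricks notebook display; still took ~8 minutes
print(sampled_df.count())
```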

I have tried to filter the data as much as I can, but the smallest I can get it is around 90,000 million (90 billion) rows, which is still pretty damn gigantic.

I also tried saving the “smaller”, filtered dataset to disk (the write took 3.64 days of runtime) and reading from that the next day, but same result: still very slow.
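In case it matters, the save/reload was more or less this (the path is made up):

```python
# Persist the filtered dataset once, then read it back in a later session.
# (The path is a placeholder.)
filtered_df.write.mode("overwrite").parquet("/mnt/dev/filtered_subset")
filtered_df = spark.read.parquet("/mnt/dev/filtered_subset")
```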

This is really slowing me down, since (probably due to my own inexperience) I need to do a lot of displays to see how the data looks, check the number of rows, etc. So I advance really, really slowly.

Do you, overlords of machine learning, have any tricks, tips or ideas for working with such humongous datasets? I don’t have the option to change anything about the system configuration (by the way, it’s on Databricks), so I can only implement ideas via code.

Thanks in advance! David

[–] slashdave@alien.top 1 points 11 months ago (1 children)

Well, a proper sample requires selecting sparsely from the entire dataset. This can be fabulously expensive because, depending on the setup, you still have to scan all the rows. After all, PySpark generally cannot assume that the data isn’t changing underneath you.

[–] Davidat0r@alien.top 1 points 11 months ago (1 children)

I’m sorry… I still don’t understand. I thought that if I sampled, it would be faster? Isn’t that what people do with large datasets? And if it’s as you say, what’s the option during the development phase? I can’t really wait 15 minutes between instructions (if I want to keep my job, haha).

[–] slashdave@alien.top 1 points 11 months ago (1 children)

A faithful subsample is a subsample of the current state. The current state cannot be established without a full scan, because you cannot assume that the data has not changed.

As for a solution: just make a subsample and save it as a separate data table. You can use that separate table for development. No reason to skimp; a reasonably large subset (around 100k rows) will probably be fine.
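Something along these lines (the table name and fraction are just examples, not a recipe):

```python
# Sample once (this pays the full scan once), save the result as its own
# table, and then develop against that table only.
dev_sample = df.sample(fraction=1e-6, seed=42)   # ~100k rows out of ~90-150 billion
dev_sample.write.mode("overwrite").saveAsTable("dev_regression_sample")

# In later sessions, read only the small table:
dev_df = spark.table("dev_regression_sample")
dev_df.count()   # fast now, since only the small table is scanned
```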

[–] Davidat0r@alien.top 1 points 11 months ago

So, if I take a sample and save it to disk with df.write.parquet(...), it will become a separate entity from the original table, right?

Sorry, you must find these questions really trivial, but for a newbie like me your answers are super helpful.