Hey everyone, I’m fairly new to the field and I’m working on a regression model on a huge dataset. We use PySpark for it, since the full size is around 150,000 million (150 billion) rows.

Given this size, every little step of the process is painfully slow: every count operation, every display, etc.

I have of course tried to sample the dataset to a fraction of the original while I work on the development of the model (like df = df.sample(0.00001)), but it doesn’t really make much of an impact on runtime. I tried sampling it so that the reduced dataset would only be 1,000 rows, and a display operation still took 8 minutes to complete.
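Roughly what I’m running, simplified (df is the full DataFrame already loaded in Databricks):

```python
# Simplified version of what I'm doing (df is the full ~150B-row DataFrame)
df_small = df.sample(0.00001)   # keep roughly 0.001% of rows
display(df_small)               # Databricks display; still takes ~8 minutes
df_small.count()                # similarly slow
```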

I have tried to filter the data as much as I can, but the smallest I get is around 90,000 million (90 billion) rows, which is still pretty damn gigantic.

I also tried saving the “smaller”, filtered dataset to disk (it took 3.64 days of runtime to save) and reading from it again the next day, but same result: still very slow.

This is really slowing me down since (probably due to my own inexperience) I do need to do a lot of displays to see how the data looks, check the number of rows, etc. So I advance really, really slowly.

Do you, overlords of machine learning, have any tricks, tips or ideas for working with such humongous datasets? I can’t change anything about the system configuration (btw, it’s in Databricks), so I can only implement ideas via code.

Thanks in advance! David

[–] Davidat0r@alien.top 1 points 9 months ago (5 children)

Nope. I sampled the dataset so that it’d be around 1,000 rows. I did it with PySpark’s sample().

Then a display operation of that tiny dataset took around 8 minutes.

So I’m thinking that maybe Spark’s lazy evaluation has something to do with this? The original DF is so brutally huge that maybe it plays a role?

I tried creating a dummy df from scratch with 10k rows and displaying it, and as expected it goes pretty fast. So I really think it must somehow be linked to the size of the original df.
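For reference, the control test was something like this (a from-scratch DataFrame with no huge source table behind it):

```python
# Dummy DataFrame built from scratch, no giant lineage behind it
dummy = spark.range(10_000)   # 10k rows generated directly by Spark
display(dummy)                # returns almost instantly
```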

[–] slashdave@alien.top 1 points 9 months ago (3 children)

Well, a proper sample requires selecting sparsely from the entire dataset. This can be fabulously expensive because, depending on the setup, you still have to scan all the rows. After all, PySpark cannot generally assume that the data is not changing underneath you.
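In other words, sample() is just a transformation; the cost only shows up when an action forces the scan. A minimal sketch of that behavior (assuming a hypothetical source table called events):

```python
# sample() only builds a plan; nothing is read yet
tiny = spark.table("events").sample(fraction=1e-5, seed=42)

# The action triggers the work: Spark still has to scan the full
# source to decide which rows end up in the sample.
tiny.count()
```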

[–] Davidat0r@alien.top 1 points 9 months ago (2 children)

I’m sorry… I still don’t understand. I thought that if I sampled, it would be faster? Isn’t that what people do with large datasets? And if it’s like you say, what’s the option during the development phase? I can’t really wait 15 minutes between instructions (if I want to keep my job, haha).

[–] slashdave@alien.top 1 points 9 months ago (1 children)

A faithful subsample is a subsample of the current state. The current state cannot be established without a full scan, because you cannot assume that the data has not changed.

As for a solution, just make a subsample once and save it in a separate data table. You can use that separate table for development. No reason to be skimpy; a reasonably large (100k-row) subset will probably be fine.
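A minimal sketch of that idea, assuming a hypothetical source table called events and a scratch location for the dev copy (use whatever path or table name fits your Databricks workspace):

```python
# One-off job: materialize a ~100k-row development sample as its own table
dev = (
    spark.table("events")                      # hypothetical full table
         .sample(fraction=1e-6, seed=42)       # tune the fraction to land near 100k rows
         .limit(100_000)
)
dev.write.mode("overwrite").parquet("/tmp/dev_sample")   # or .saveAsTable("dev_sample")

# Day-to-day development then only touches the small copy
df_dev = spark.read.parquet("/tmp/dev_sample")
df_dev.count()   # fast: scans ~100k rows, not 150B
```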

[–] Davidat0r@alien.top 1 points 9 months ago

So, if I take a sample and save it to disk with df.write.parquet(…, it will become a separate entity from the original table, right?

Sorry, you must find these questions so trivial, but for a newbie like me your answers are super helpful.
