I suspect the OP means 1,000M, or 1 billion rows. Nothing else makes sense.
Nope. I sampled the dataset down to around 1000 rows, using PySpark’s sample().
Then a display operation of that tiny dataset took around 8 minutes.
So I’m thinking maybe Spark’s lazy evaluation has something to do with this? The original DF is brutally huge, so maybe that plays a role?
I tried creating a dummy DF from scratch with 10k rows and displaying it, and as expected it goes pretty fast. So I really think it must somehow be linked to the size of the original DF.
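A minimal sketch of the workflow described above, assuming a hypothetical source table and the names big_df/small_df (none of these appear in the thread); the key point is that sample() is lazy and the display is the action that actually scans the source:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical source standing in for the original, very large table.
big_df = spark.read.parquet("/data/huge_table")  # illustrative path

# sample() is a transformation: nothing is computed yet.
# The fraction is picked so roughly 1000 rows survive out of ~1 billion.
small_df = big_df.sample(fraction=1e-6, seed=42)

# show() (or display() on Databricks) is an action: only now does Spark run
# the plan, which still has to scan the full source to draw the sample.
small_df.show(20)
```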
I think you're right about the lazy eval. Can you somehow materialize, or dump and re-import, the 1000-row view to use for experimentation?
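One possible way to do that materialization, continuing with the hypothetical small_df from the sketch above (the parquet path is illustrative):

```python
# Option A: cache the sample and force one evaluation; later actions on
# small_df then read from the cached copy instead of rescanning the source.
small_df = small_df.cache()
small_df.count()  # the action that populates the cache

# Option B: dump it out and read it back as an independent dataset.
small_df.write.mode("overwrite").parquet("/tmp/dev_sample")  # illustrative path
dev_df = spark.read.parquet("/tmp/dev_sample")
```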
FWIW, sampling 1000 rows at random is equivalent to permuting the entire dataset at random and reading out the first 1000 rows. Not sure whether that would be feasible or helpful in your case, but a merge sort would make this an O(n log n) operation, so in theory it shouldn't be too horrible.
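Expressed directly in PySpark, the "permute, then take the head" idea might look something like this sketch (again using the hypothetical big_df; the global sort on a random key is the O(n log n) step):

```python
from pyspark.sql.functions import rand

# A global sort on a random key, then take the head: equivalent in spirit
# to shuffling the whole table and reading the first 1000 rows.
shuffled_sample = big_df.orderBy(rand(seed=42)).limit(1000)
shuffled_sample.show(20)
```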
Well, a proper sample requires selecting sparsely from the entire dataset. This can be fabulously expensive, because, depending on the setup, you still have to scan all the rows. After all, PySpark cannot generally assume that the data isn't changing underneath you.
I’m sorry… I still don’t understand. I thought if I sampled it would be faster? Isn’t that what people do with large datasets? And if it’s like you say, what’s the option during the development phase? I can’t really wait 15 minutes between instructions (if I want to keep my job haha).
A faithful subsample is a subsample of the current state. The current state cannot be established without a full scan, because you cannot assume that the data has not changed.
As to a solution, just make a subsample and save it in a separate data table. You can use that separate table for development. No reason to be skimpy; a reasonably large (100k-row) subset will probably be fine.
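A rough sketch of that suggestion, with an illustrative fraction and path (neither appears in the thread): the full scan is paid once when the sample is written out, and development then happens against the saved copy.

```python
# Pay the full scan once: draw a ~100k-row sample and persist it.
dev_sample = big_df.sample(fraction=1e-4, seed=42)  # fraction is illustrative
dev_sample.write.mode("overwrite").parquet("/dev/sample_100k")  # illustrative path

# From here on, develop against the saved copy; it is a separate, static
# dataset that no longer depends on the original table.
dev_df = spark.read.parquet("/dev/sample_100k")
dev_df.show(20)
```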
So, if I take a sample and save it to disk with df.write.parquet(…, it will become a separate entity from the original table, right?
Sorry, you must find these questions trivial, but for a newbie like me your answers are super helpful.