this post was submitted on 25 Nov 2023

Machine Learning

The approach I have is randomly sampling days from the year, but this runs into the flaw that the sampled days may simply not be representative. Other papers in the related work use two years of data: one for training and another for testing. I only have one year. I asked my professor for another year of data and he said it isn't available, with no explanation. I tried searching online for the data, since it's supposedly public, but couldn't find it. It's hard to get an accurate evaluation when you don't have a good test environment, but instead of complaining I have to figure something out. This is in the works to be submitted to a smaller journal. Not sure if any of you have dealt with this, so I'm curious how you would handle such a situation.

At least one idea:

  • Sample whole months instead of individual days and build a pseudo test environment from them, picking a diverse set of months that reflects the yearly trends.
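That month-based idea could be sketched roughly like this. This is a minimal sketch, not the actual pipeline: the day-level records, the placeholder year 2022, and the particular choice of held-out months are all assumptions for illustration.

```python
from datetime import date, timedelta

# Hypothetical day-level dataset: one record per day of the single
# available year (2022 is an arbitrary placeholder, not a leap year).
days = [date(2022, 1, 1) + timedelta(days=i) for i in range(365)]

# Instead of sampling individual days, hold out whole months spread
# across the year so seasonal trends appear in both splits. Which
# months to hold out is a judgment call; here, every third month.
test_months = {2, 5, 8, 11}

train_days = [d for d in days if d.month not in test_months]
test_days = [d for d in days if d.month in test_months]

assert not set(train_days) & set(test_days)  # no leakage between splits
```

Holding out contiguous months (rather than scattered days) also keeps multi-day dynamics intact in the test set, which matters if decisions on one day affect the next.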

I don't have much experience, so I'm all ears if any of you have hit similar limitations and can share how you overcame them.

[–] ahf95@alien.top 1 points 11 months ago (2 children)

You will likely want to split whatever data you have. Probably just use a standard train:test split with cross-validation, as mentioned in another comment. Also, you said in a comment that this is in an RL context, but if that were the case you'd most likely be generating the next dataset after training on what you already have, so you'd know more data is on the way. So, are you solving a Markov decision problem here, or is this just an applied form of supervised learning?
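For temporal data like a year of daily records, plain random cross-validation leaks future information into training; a forward-chaining ("expanding window") scheme avoids that. A minimal sketch, assuming 365 daily samples and an arbitrary fold count (the function name and parameters are illustrative, not from the thread):

```python
# Forward-chaining cross-validation: each fold trains on an initial
# stretch of the year and tests on the stretch that immediately
# follows, so the model never sees data from its own future.
def forward_chaining_splits(n_samples, n_folds):
    fold = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train_idx = list(range(0, k * fold))
        test_idx = list(range(k * fold, (k + 1) * fold))
        yield train_idx, test_idx

# 365 daily samples, 4 folds: first fold trains on days 0..72 and
# tests on days 73..145, and the training window grows each fold.
for train_idx, test_idx in forward_chaining_splits(365, 4):
    assert max(train_idx) < min(test_idx)  # temporal order preserved
```

scikit-learn's `TimeSeriesSplit` implements essentially this scheme if you'd rather not roll your own.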

[–] I_will_delete_myself@alien.top 1 points 11 months ago (1 children)

So, are you solving a Markov decision problem here

Yes. I'm thinking of just using a metric that checks whether it made the optimal decision, based on the amount of value it delivers per capita.

The main flaw of my previous metric is that it was biased toward naive algorithms because of the way it's calculated, which made the results misleading. Skipping turns is sometimes the optimal decision, but the metric scored it as bad when in reality it isn't.

When I dug deeper into the data, it turned out the AI was destroying the naive algorithms both on this metric and on the total results we were aiming for.

[–] ahf95@alien.top 1 points 11 months ago

Dude, okay, are you actually doing research or is this troll? Or do you have a history of mental illness?