[–] eamonnkeogh@alien.top 1 points 10 months ago

Great!

“Satisfied customer. We should have him stuffed.” (Basil Fawlty)

[–] eamonnkeogh@alien.top 1 points 10 months ago (3 children)

Based on this figure, simple persistence would work very well.

Persistence says the predicted value is the same as it was 24 hours ago.

If the data has cultural effects, then the predicted value is the same as it was 7 days ago (because a Saturday is not like a Friday).
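A minimal sketch of the persistence baseline described above (the function name and toy data are illustrative, not from any particular codebase):

```python
# Persistence forecast: predict series[t] as series[t - lag].
# lag = 24 for hourly data with a daily cycle; lag = 24 * 7 when
# cultural/weekly effects matter (a Saturday is not like a Friday).

def persistence_forecast(series, lag):
    """Predict each value as the value observed `lag` steps earlier."""
    return [series[t - lag] for t in range(lag, len(series))]

# Toy "hourly" series that repeats perfectly every 4 steps.
day = [10, 12, 15, 14]
series = day * 4

pred = persistence_forecast(series, lag=4)
actual = series[4:]
errors = [abs(p - a) for p, a in zip(pred, actual)]
print(max(errors))  # 0 -- persistence is perfect on purely periodic data
```

On strongly periodic data this trivial baseline is hard to beat, which is why it is worth checking before anything more elaborate.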

[–] eamonnkeogh@alien.top 1 points 10 months ago

(Minor point: DTW is not a metric, it is only a measure.)

What you are trying to do is precursor discovery [a].

  1. It is unlikely that the 30- to 60-minute lead time will work for you; it seems too ambitious.

  2. DTW is a great shape measure, but it is unlikely to generalize from person to person. In fact, if you change the location of the leads (assuming you are using 12-lead or similar), it is unlikely to generalize even for a single person from one day to the next.

  3. You may wish to consider solving this problem in the feature space, not the shape space. The catch22 feature set would be a great starting point [b].

[a] https://www.cs.ucr.edu/%7Eeamonn/WeaklyLabeledTimeSeries.pdf

[b] https://www.dropbox.com/scl/fi/3vs0zsh4tw63qrn46uyf9/C22MP_ICDM.pdf?rlkey=dyux24kqpagh3i38iw6obiomq&dl=0
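A hedged sketch of the feature-space idea: the three hand-rolled features below are simple stand-ins for the real catch22 set (in practice the pycatch22 package computes the actual 22 features); all names are illustrative.

```python
import math

def simple_features(ts):
    """Crude stand-in for catch22: mean, std, and mean-crossing rate."""
    n = len(ts)
    mean = sum(ts) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in ts) / n)
    # Mean-crossings: a rough surrogate for oscillation rate.
    crossings = sum(1 for a, b in zip(ts, ts[1:]) if (a - mean) * (b - mean) < 0)
    return [mean, std, crossings / (n - 1)]

def feature_distance(ts1, ts2):
    """Euclidean distance between feature vectors, not between shapes."""
    f1, f2 = simple_features(ts1), simple_features(ts2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

sine = [math.sin(t / 3) for t in range(100)]
shifted_sine = [math.sin(t / 3 + 1.0) for t in range(100)]  # phase shifted
ramp = [t / 50 for t in range(100)]

# A phase-shifted copy is close in feature space even though it is far
# in shape space; a ramp is far in both.
print(feature_distance(sine, shifted_sine) < feature_distance(sine, ramp))  # True
```

The point of moving to feature space is exactly this invariance: shifts in phase or lead placement that wreck a shape measure can leave well-chosen features almost unchanged.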

[–] eamonnkeogh@alien.top 1 points 10 months ago

Have you tried simply using highly optimized brute-force search?

You can optimize with early abandoning, reordering, etc.

It is surprising how competitive that can be.
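A sketch of what early abandoning looks like for Euclidean subsequence search, in the style of the UCR Suite optimizations (reordering, which visits query points in a data-adaptive order, is omitted for brevity; names are illustrative):

```python
def ed_early_abandon(query, candidate, best_so_far):
    """Squared Euclidean distance, abandoned once it exceeds best_so_far."""
    total = 0.0
    for q, c in zip(query, candidate):
        total += (q - c) ** 2
        if total >= best_so_far:      # early abandoning: stop this candidate
            return float("inf")
    return total

def brute_force_search(series, query):
    """Find the best-matching subsequence by exhaustive (but pruned) search."""
    m = len(query)
    best, best_loc = float("inf"), -1
    for i in range(len(series) - m + 1):
        d = ed_early_abandon(query, series[i:i + m], best)
        if d < best:
            best, best_loc = d, i
    return best_loc, best

series = [0, 1, 2, 3, 10, 11, 12, 0, 1, 2]
query = [10, 11, 12]
loc, dist = brute_force_search(series, query)
print(loc, dist)  # 4 0.0
```

As the best-so-far shrinks, most candidates are abandoned after only a handful of point comparisons, which is where the surprising competitiveness comes from.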

[–] eamonnkeogh@alien.top 1 points 10 months ago

In paper [a] we look at (among other things) weather data: temperature, wind speed, and humidity (Figs. 15, 16, 18).

The DAMP algorithm is:

  1. Parameter-lite (a single parameter)
  2. Blindingly fast
  3. Usable with or without training data
  4. Usable domain-agnostically, or able to exploit domain knowledge

And, in bake-offs, it has been shown to be SOTA...

[a] https://www.cs.ucr.edu/%7Eeamonn/DAMP_long_version.pdf
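For context, here is a brute-force sketch of the discord-discovery problem that DAMP solves. This is emphatically not DAMP itself (which avoids this quadratic search and can run on streams); it only illustrates what "the anomaly" means: the subsequence whose nearest non-overlapping neighbor is farthest away. All names and data are illustrative.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def brute_force_discord(series, m):
    """Return (index, score) of the length-m subsequence whose nearest
    non-overlapping neighbor is farthest away (the time series discord)."""
    n = len(series) - m + 1
    best_idx, best_score = -1, -1.0
    for i in range(n):
        nn = float("inf")
        for j in range(n):
            if abs(i - j) >= m:   # exclude trivial (overlapping) matches
                nn = min(nn, euclidean(series[i:i + m], series[j:j + m]))
        if nn > best_score:
            best_idx, best_score = i, nn
    return best_idx, best_score

# A repeating pattern with one anomalous bump at index 7.
series = [0, 1, 0, 1, 0, 1, 0, 5, 0, 1, 0, 1, 0, 1]
idx, score = brute_force_discord(series, m=3)
print(idx)  # an index inside the anomalous region (5-7)
```

DAMP's contribution is getting essentially this answer without the nested loop, which is what makes it blindingly fast.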