this post was submitted on 25 Nov 2023

Machine Learning
I’m training an object detection model with YOLOv8, but my training data is a little biased because it doesn’t represent the real-life distribution. (I want to count objects of one class but different shapes in a video, and I need them to be detected with near-equal probability.) How can I make sure the model generalises enough that the bias doesn’t have too much of an effect? I know it will come with more false positives, but that’s not a problem.

top 5 comments
[–] KyxeMusic@alien.top 2 years ago

No magic hyperparameter is going to do the trick here.

Your only real option is to try to find a dataset that approaches the real distribution, or to try to somehow augment your data to make it look more like it (although this is very challenging to do without introducing even more bias).
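One way to "augment toward the real distribution" without collecting new data is plain stratified resampling: weight each training sample by its target probability over its empirical probability, and draw epochs from those weights. A minimal sketch in plain Python (the function name, stratum labels, and target weights below are my own placeholders, not anything YOLOv8 provides):

```python
import random

def resample_to_target(samples, strata, target_dist, k=None, seed=0):
    """Draw a training epoch whose strata approximately follow target_dist.

    samples:     list of training items
    strata:      parallel list of stratum labels (e.g. growth stage)
    target_dist: dict mapping stratum label -> desired probability
    """
    rng = random.Random(seed)
    n = len(samples)
    # Empirical count per stratum in the existing dataset
    counts = {s: strata.count(s) for s in set(strata)}
    # Importance weight = target probability / empirical probability
    weights = [target_dist[s] / (counts[s] / n) for s in strata]
    k = k or n
    return rng.choices(samples, weights=weights, k=k)
```

For example, `resample_to_target(images, stages, {"small": 0.2, "large": 0.8})` would oversample the under-represented large fish. Note this only rebalances what you already have; it cannot invent appearance variation you never captured.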

[–] Saffie91@alien.top 2 years ago

Hyperparameter tuning is out. It's much harder to overfit anything nowadays, especially using something like YOLOv8, unless you're actively trying to.

It's always been about the dataset.

[–] DisWastingMyTime@alien.top 2 years ago

What are the differences between the training objects and the real targets?

Generally speaking you're not going to bridge the gap with hyperparameters; you may be able to do so with augmentations or synthetic data. Share more details of the issue.

[–] Banntu@alien.top 2 years ago

Yeah, I thought that might be the case.
The project's goal is the following:

  1. a 3D camera capturing live images of fish
  2. a YOLOv8 pose model detecting fish that are completely visible and not facing the camera, also detecting the nose and tail of each fish
  3. using the distance values from the 3D camera to calculate the length of each fish in mm
  4. running the model on a live video stream to estimate the distribution of fish lengths in a tank
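For step 3, a common way to turn the two keypoints plus per-pixel depth into a metric length is to back-project both points through the pinhole camera model and measure the 3D distance between them. A sketch under assumed intrinsics (the function name and parameters are mine; your 3D camera's SDK likely ships a deprojection helper that does this for you):

```python
import math

def fish_length_mm(nose_px, tail_px, nose_depth_mm, tail_depth_mm, fx, fy, cx, cy):
    """Back-project two keypoints with per-pixel depth and return their 3D distance.

    nose_px, tail_px: (u, v) pixel coordinates of the keypoints.
    *_depth_mm:       depth reported by the 3D camera at each keypoint.
    fx, fy, cx, cy:   camera intrinsics in pixels.
    """
    def backproject(u, v, z):
        # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
        return ((u - cx) * z / fx, (v - cy) * z / fy, z)

    p1 = backproject(*nose_px, nose_depth_mm)
    p2 = backproject(*tail_px, tail_depth_mm)
    return math.dist(p1, p2)
```

Two caveats: this measures the nose-to-tail chord, which underestimates a curved fish, and it assumes the depth values at both keypoints are valid (depth maps are often noisy at object edges).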

The problem is the following:
I have annotated images of multiple growth stages of fish, but the average growth stage in the training data will almost always be either smaller or bigger than the fish I'm measuring.
So when I train a model on all the data I have and then run it on a tank of fish at the upper end of growth, the model will detect the smaller fish in that tank more often, because most fish in the training data are smaller than the fish in the tank.
Does that make sense?

These values are just to show what I mean (expecting that the model is always trained on all 5k samples):

| growth stage | 1 month | 2 months | 3 months | 4 months | 5 months |
|---|---|---|---|---|---|
| training samples | 1000 | 1000 | 1000 | 1000 | 1000 |
| length error on real live data | +10% | +5% | 0% | -5% | -10% |
[–] Toilet2000@alien.top 2 years ago

If that’s not something that you already do, use data augmentation. Especially scaling.

It might not help for features related to age (such as colour changes and the like), but it can definitely help remove the relative size bias, especially if your dataset was created in a single scenario with a fixed camera distance.
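If you're training with the ultralytics tooling, scale augmentation should already be exposed as a train-time hyperparameter you can widen, if I remember the args correctly. If you roll your own, the geometric part is simple; here's a minimal pure-Python sketch of scaling boxes about the image centre on a fixed canvas, so objects genuinely change apparent size (function name and clipping policy are my own, and resampling the pixels themselves is left to your image library):

```python
def scale_boxes_about_center(image_wh, boxes, s):
    """Scale bounding boxes about the image centre on a fixed-size canvas.

    image_wh: (width, height) of the canvas in pixels.
    boxes:    list of (x1, y1, x2, y2) pixel boxes.
    s:        scale factor (>1 zooms in, <1 zooms out).
    Boxes are clipped to the canvas; fully off-canvas boxes are dropped.
    """
    w, h = image_wh
    cx, cy = w / 2, h / 2
    out = []
    for x1, y1, x2, y2 in boxes:
        # Scale each corner about the centre, then clip to the canvas
        nx1 = max(0.0, min(w, cx + (x1 - cx) * s))
        ny1 = max(0.0, min(h, cy + (y1 - cy) * s))
        nx2 = max(0.0, min(w, cx + (x2 - cx) * s))
        ny2 = max(0.0, min(h, cy + (y2 - cy) * s))
        if nx2 > nx1 and ny2 > ny1:  # drop degenerate boxes
            out.append((nx1, ny1, nx2, ny2))
    return out
```

Drawing `s` from a range wide enough to cover your growth stages is what actually breaks the size bias; scaling image and boxes together by the same factor without a fixed canvas changes nothing relative.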

If you do know some features of "an older fish", you can also apply those transformations to masks of younger fish from your first trained but biased model. Somewhat like "semi-synthetic" data augmentation.

For example, if older fish are browner, you can skew the hue of the masks by a certain amount to get brownish young fish.
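That hue-skew idea can be prototyped per pixel with the standard library's colorsys; a toy sketch (the shift amount is arbitrary, and in practice you'd vectorise this over the masked region rather than loop pixel by pixel):

```python
import colorsys

def skew_hue(rgb, shift=0.05):
    """Shift an RGB pixel's hue by `shift` (fraction of the full hue circle).

    Intended to be applied only to pixels inside the fish mask,
    to fake an 'older' colouring on younger fish.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + shift) % 1.0  # rotate hue, keep saturation and value
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))
```

Keeping saturation and value untouched preserves the lighting in the image, so only the colour cast changes.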