[–] shubham0204_dev@alien.top 1 points 10 months ago

You may use MobileNet models, as they use depthwise separable convolutions, which have fewer parameters and a lower execution time than standard convolutions. Moreover, MobileNets are easy to set up and train (tf.keras.applications.* provides pre-trained models) and can be used as a backbone for fine-tuning on datasets other than ImageNet, as in the sketch below.
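
As a minimal sketch (assuming TensorFlow 2.x and a hypothetical NUM_CLASSES for your dataset), using MobileNetV2 as a frozen backbone with a new classification head could look like this:

```python
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical; replace with your dataset's class count

# Load MobileNetV2 pre-trained on ImageNet, without its classification head
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
backbone.trainable = False  # freeze the backbone for the initial fine-tuning phase

# Attach a small classification head on top of the backbone
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```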

Further, you can also explore quantization and weight pruning. These techniques optimize models to have a smaller memory footprint and faster inference on embedded devices.
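
For instance, a rough sketch of post-training (dynamic-range) quantization with the TFLite converter, assuming `model` is the fine-tuned Keras model from above:

```python
import tensorflow as tf

# Convert the trained Keras model to TensorFlow Lite with default
# optimizations, which quantize the weights to 8-bit integers
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the quantized model to disk for deployment on the device
with open("mobilenet_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```

Weight pruning is available separately through the tensorflow-model-optimization package (e.g. tfmot.sparsity.keras.prune_low_magnitude), which you would apply during training before converting the model.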