[–] Complex-Indication@alien.top 1 points 10 months ago (1 children)

I think you are spot on in your assessment: TFLite Micro was the first production-ready framework for deploying NNs to microcontrollers, and it is still the most popular and streamlined one. Realistically, you should evaluate which path gets you to your goal faster: converting PyTorch to ONNX and then to TFLite Micro, or using a less well-known and less-maintained project to run the ONNX model directly.
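For reference, the PyTorch → ONNX → TFLite path usually looks something like the sketch below (a minimal outline, assuming torch, onnx, onnx-tf, and TensorFlow are installed; MyTinyCNN, the input shape, and the file names are placeholders, not from your project):

```python
# Sketch: PyTorch -> ONNX -> TensorFlow SavedModel -> int8 TFLite for tflite-micro
import torch
import onnx
import numpy as np
import tensorflow as tf
from onnx_tf.backend import prepare

# 1. Export the trained PyTorch model to ONNX (placeholder model and input shape)
model = MyTinyCNN()
model.eval()
dummy_input = torch.randn(1, 1, 96, 96)  # match your real input shape
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)

# 2. Convert ONNX to a TensorFlow SavedModel
onnx_model = onnx.load("model.onnx")
prepare(onnx_model).export_graph("saved_model")

# 3. Full int8 quantization, which tflite-micro expects on most MCU targets
def representative_data():
    for _ in range(100):
        # replace with real calibration samples from your dataset
        yield [np.random.rand(1, 1, 96, 96).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open("model_int8.tflite", "wb").write(converter.convert())
```

From there, the usual tflite-micro step is `xxd -i model_int8.tflite > model_data.cc` to embed the model as a C array in your firmware. Be aware that the NCHW-to-NHWC layout conversion in the ONNX → TF step is where this route most often breaks.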

One question: since you mentioned Edge Impulse, why would you want to go the self-hosted OSS route?

You can deploy a computer vision model on pretty much anything, depending on your requirements (model size/inference time).

I see you listed some wildly different boards there, ranging from the Pico4ML, which is Cortex-M0 based, extremely low power, and really only able to run tiny models, all the way to the Jetson series, which are GPU-enabled, MPU-based boards. That probably means you have no clear idea yet of which models you are going to run, and that really should be your starting point.

But since you mentioned "manufacturing robots, drones, autonomous robots", I can tell you that the Pico4ML, and likely the Raspberry Pi, are out of the question. The Jetson Nano is being EOL'd and Nvidia is moving to the Orin, which is very capable, but also very expensive and power-hungry.

For other options, there is a nice list here: https://docs.edgeimpulse.com/docs/development-platforms/fully-supported-development-boards (Disclaimer: I work for EI, but you don't have to use them.)

Also, I have a YouTube channel on Edge ML and robotics: https://www.youtube.com/c/hardwareai