I am currently researching ways to export models that I trained with PyTorch on a GPU to a microcontroller for inference. Think Cortex-M0 or a simple RISC-V core. The ideal workflow would be to export C source code with as few dependencies as possible, so that it is completely platform agnostic.

What I noticed in general is that most edge inference frameworks are based on TensorFlow Lite. Alternatively, there are some closed workflows like Edge Impulse, but I would prefer locally hosted OSS. Also, many projects seem to be abandoned. What I found so far:

TensorFlow Lite based

PyTorch based

  • PyTorch Edge / ExecuTorch. Sounds like this could be a response to TFLite, but it seems to target intermediate systems. The runtime is ~50 kB... (rough export sketch after this list)
  • microTVM. Targets the Cortex-M4 out of the box, but claims to be platform agnostic.
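
For the ExecuTorch route, the export flow would look roughly like this, as far as I can tell from their getting-started docs (`TinyNet` is just a placeholder model, and the API may still be moving):

```python
import torch
from torch.export import export
from executorch.exir import to_edge

# Placeholder model; substitute your own trained network
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(8, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example_inputs = (torch.randn(1, 8),)

# Capture the graph, lower it to the Edge dialect,
# then serialize an ExecuTorch program
prog = to_edge(export(model, example_inputs)).to_executorch()

# The .pte file is what the on-device ExecuTorch runtime loads
with open("tinynet.pte", "wb") as f:
    f.write(prog.buffer)
```

The .pte file still needs the ExecuTorch runtime on the target though, which is where that ~50 kB figure comes in, so it is not quite the dependency-free C source I am after.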

ONNX

  • DeepC. Open-source version of DeepSea. Very little activity; looks abandoned.
  • onnx2c - an ONNX-to-C source code converter. Looks interesting, but also not very active (export sketch after this list).
  • cONNXr - a framework with a C99 inference engine. Also interesting, and also not very active.
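
All three of those consume a plain .onnx file, so the PyTorch side reduces to a standard torch.onnx.export call. A minimal sketch (`TinyNet` is again just a placeholder, and the opset choice is my assumption, since the minimal C engines are more likely to cover older, smaller opsets):

```python
import torch

# Placeholder model; substitute your own trained network
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(8, 16)
        self.fc2 = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyNet().eval()
dummy_input = torch.randn(1, 8)

# A lower opset keeps the operator set small, which the
# minimal C inference engines are more likely to support
torch.onnx.export(
    model,
    dummy_input,
    "tinynet.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=12,
)
```

If I read its README right, onnx2c would then turn that into standalone C source, e.g. `onnx2c tinynet.onnx > tinynet.c`, which is the closest match to the workflow I described above.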

Are there any recommendations from this list for my use case? Or anything I have missed? It feels like there is no obvious choice for what I am trying to do.

Most solutions that seem to hit the mark look rather abandoned. Is that because I should try a different approach, or is the field of ultra-tiny-ML OSS in general just not very active?

neodsp@alien.top:

Other interesting runtimes:

- ggml: https://github.com/ggerganov/ggml

- burn: https://github.com/burn-rs/burn (runs on micro platforms as long as you provide an allocator, and has ONNX import)