[–] gevorgter@alien.top 1 points 11 months ago

I'm not sure what you mean by "run AI models."

I train on GPU, but then I move to prod and it runs on CPU. Inference is nowhere near as computationally intensive as training, so a CPU might be enough.

I converted my model to ONNX format, wrote the inference code in C#, and packaged it as an AWS Lambda function. My AI model isn't called constantly and is only a small part of my project, so my AWS bill is literally a couple of dollars a month.
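
The inference side really is small. Here's a rough sketch of the shape of it (not my actual code): an ONNX Runtime session loaded once per Lambda container and reused across invocations. The model path, the input name ("input"), and the flat float-vector input are just placeholders you'd swap for whatever your model expects.

```csharp
// Minimal sketch: CPU inference with Microsoft.ML.OnnxRuntime inside an AWS Lambda handler.
// "model.onnx", the "input" tensor name, and the [1, N] shape are illustrative placeholders.
using System.Collections.Generic;
using System.Linq;
using Amazon.Lambda.Core;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace OnnxLambda;

public class Function
{
    // Load the session once per container so warm invocations skip model loading.
    private static readonly InferenceSession Session = new("model.onnx");

    public List<float> FunctionHandler(List<float> features, ILambdaContext context)
    {
        // Wrap the request features in a [1, N] tensor named to match the model's input.
        var input = new DenseTensor<float>(features.ToArray(), new[] { 1, features.Count });
        var inputs = new List<NamedOnnxValue>
        {
            NamedOnnxValue.CreateFromTensor("input", input)
        };

        // Run CPU inference and return the first output as a plain list.
        using var results = Session.Run(inputs);
        return results.First().AsEnumerable<float>().ToList();
    }
}
```

Keeping the session in a static field is the main trick: cold starts pay the model-load cost once, and every warm call after that is just the forward pass.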