dafuqq8

joined 10 months ago
dafuqq8@alien.top 1 point 10 months ago

When considering platforms for machine learning operations, I lean towards GCP or Azure. Both offer straightforward MLOps solutions, and I’m well-versed in their infrastructures.

To answer your questions:

1. Both are efficient in cost and time for complex tasks.

2. Each machine type on these platforms comes with a detailed specification sheet, and I usually run a quick calculation to select the right one. Keep in mind that machine learning often requires substantial VRAM, and it’s usually the main spec to look for. For instance, a 7-billion-parameter model in fp32 takes roughly (7 billion × 4 bytes) / 2^30 ≈ 26 GB of VRAM just for the weights; training adds gradients and optimizer states on top of that. A lower-tier A100 (40 GB) might suffice there, but for larger models you’d need a higher-end A100 (80 GB) to fit everything. Consequently, you need to be aware of your requirements; see the sketch below.

3. Personally, I prefer Google’s Vertex AI, Colab Pro, and local development.
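Here’s a minimal sketch of that back-of-the-envelope estimate (the 4-bytes-per-parameter fp32 default and the model sizes are illustrative assumptions):

```python
def vram_gib(num_params: float, bytes_per_param: int = 4) -> float:
    """Rough VRAM needed to hold the model weights alone
    (no gradients, optimizer states, or activations), in GiB."""
    return num_params * bytes_per_param / 2**30

# fp32 weights: 7B params -> ~26 GiB (fits on a 40 GB A100)
print(f"7B fp32:  {vram_gib(7e9):.1f} GiB")
# fp16/bf16 halves that: ~13 GiB
print(f"7B fp16:  {vram_gib(7e9, 2):.1f} GiB")
# 70B fp16: ~130 GiB, which already needs multiple 80 GB A100s
print(f"70B fp16: {vram_gib(70e9, 2):.1f} GiB")
```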

dafuqq8@alien.top 1 point 10 months ago

Hey, GCP is a great choice for MLOps. It offers Vertex AI, an excellent service for a wide range of ML applications.

The tabs you might be looking for are “Deploy and Use” (specifically, the model registry to import a pre-trained model) and “Model Development” for training a model.

Back to your original question: generally, you just need to create a Vertex AI endpoint. For scaling, you can select from many machine and accelerator types. You can then call the endpoint from Cloud Run, Cloud Functions, or your application’s backend using the ‘google-cloud-aiplatform’ SDK; a sketch follows below. Let me know if you have any trouble with this step.
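To make that concrete, here’s a minimal sketch with the Python SDK (the project, region, model ID, machine/accelerator choices, and the request payload are all placeholder assumptions; the right values depend on your model):

```python
from google.cloud import aiplatform

# Placeholders: use your own project, region, and model resource ID.
aiplatform.init(project="my-project", location="us-central1")

# A model already imported into the Vertex AI Model Registry.
model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/MODEL_ID"
)

# Deploy it behind an endpoint; machine and accelerator types
# plus replica counts are what you tune for scaling.
endpoint = model.deploy(
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
    min_replica_count=1,
    max_replica_count=2,
)

# Call it from Cloud Run, Cloud Functions, or your app's backend.
prediction = endpoint.predict(instances=[{"input": "example payload"}])
print(prediction.predictions)
```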

Also, there is a JavaScript SDK, ‘@google-cloud/aiplatform’, which the Vertex AI team updates less often than the Python SDK.

Here are some useful links:

Deploying HF models on Vertex AI

Deploying Torch models on Vertex AI

Feel free to ask any clarifying questions.