Suppose you want to run some heavy tasks in the cloud using GPUs. What would you do?

  • What features matter most to you in a GPU cloud provider? Price, availability, GPU models, or something else?
  • How do you choose an instance type? Each provider typically offers dozens of them.
  • Do you regularly use one or more providers?
[–] extreme-jannie@alien.top 1 points 10 months ago

I haven't implemented this yet, but plan to test it soon:
https://skypilot.readthedocs.io/en/latest/
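
For anyone curious what that looks like: SkyPilot lets you describe a GPU job once and launch it on whichever configured cloud has capacity. A rough sketch of its Python API, assuming `pip install skypilot` and at least one cloud account set up (the A100 request and train.py script are placeholder choices, not from the comment above):

    # Rough SkyPilot sketch: request one A100 and let SkyPilot pick
    # whichever enabled cloud can actually provide it.
    import sky

    task = sky.Task(
        setup="pip install torch",  # runs once when the VM comes up
        run="python train.py",      # hypothetical training script
    )
    task.set_resources(sky.Resources(accelerators="A100:1"))

    # Provisions a cluster, runs the task, and streams the logs.
    sky.launch(task, cluster_name="gpu-test")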

[–] CrazyCrab@alien.top 1 points 10 months ago

First, I tried vast.ai because I had heard good things about it, and it had instances with good hardware that weren't super expensive. It was a dumpster fire: instances had random technical problems, and support never responded.

Then I tried Lambda Labs. It was super awesome: good prices, basically the best thing ever, except they were often out of capacity and didn't have suitable instances available.

Then I heard somewhere about oblivus.ai. Tried it, and it was also a dumpster fire, basically the same as vast.ai: technical problems with instances (even non-community ones) and support that never responded.

Then I tried Google Cloud Compute, and it's expensive and complicated, but at least it works. That's what I use nowadays.

[–] mrpogiface@alien.top 1 points 10 months ago

Whichever one gives us free credits. So we're on GCP, AWS, and Oracle right now.

[–] Appropriate_Ant_4629@alien.top 1 points 10 months ago (1 children)

For work - easy:

  • Microsoft gave us a lot (>$300,000) of free credits.

For hobby projects - hard:

  • Not happy with anything yet, but I'm using Microsoft out of familiarity.

[–] someone383726@alien.top 1 points 10 months ago (1 children)

Wow, what kind of situation were you in to get that many credits?

[–] Appropriate_Ant_4629@alien.top 1 points 10 months ago

Multiple different initiatives. The exact same program doesn't exist anymore, but there are similar ones.

[–] dafuqq8@alien.top 1 points 10 months ago (1 children)

When considering platforms for machine learning operations, I lean towards GCP or Azure. They both offer straightforward MLOps solutions, and I'm well-versed in their infrastructure.

To answer the questions:

  1. What matters most is cost and time efficiency on complex tasks.
  2. Each machine type on these platforms comes with a detailed specification sheet, and I usually do a quick calculation to select the right one. It's important to understand that machine learning often requires substantial VRAM, and that's usually the main spec to look for. For instance, a 7-billion-parameter model needs approximately (7e9 × 4 bytes) / 2^30 ≈ 26 GiB of VRAM just to hold its fp32 weights (see the sketch below). A lower-end A100 (40 GB) might suffice for that, but for larger models you'd need a higher-end A100 (80 GB) to accommodate all the data. Consequently, you need to know your requirements.
  3. Personally, I prefer Google's Vertex AI, Colab Pro, and local development.
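
To make that back-of-the-envelope number concrete, here's a quick sketch (counting only the weights; training also needs room for gradients, optimizer states, and activations, so treat this as a floor):

    # VRAM needed just to hold the model weights, in GiB.
    # fp32 = 4 bytes per parameter; halve it for fp16/bf16.
    def weight_vram_gib(num_params: int, bytes_per_param: int = 4) -> float:
        return num_params * bytes_per_param / 2**30

    print(f"{weight_vram_gib(7_000_000_000):.1f} GiB")     # fp32: ~26.1 GiB
    print(f"{weight_vram_gib(7_000_000_000, 2):.1f} GiB")  # fp16: ~13.0 GiB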

[–] chief167@alien.top 1 points 10 months ago

I find Azure terrible for ML in general. They basically force you onto Databricks, and Azure ML Studio just sucks compared to GCP's Vertex.

We're now on Teradata for MLOps, and it's surprisingly OK. High entry cost, but overall a lot cheaper than what we used to have on Azure/Databricks, and faster and better.

We're forced onto Azure at work, but I use Vertex for hobby projects.

[–] GinjaTurtles@alien.top 1 points 10 months ago

I recently scoured the internet looking for good options for a custom deep learning API project I'm working on (I wanted to host my API on cloud GPUs and be able to scale it without burning my wallet).

The big guys (Microsoft, AWS, Google) can be pretty pricey for cloud GPUs, especially for side projects. But they do give startup credits, which can be very useful.

TensorDock's marketplace ended up being my top choice. You can get some really cheap 4090s/3090s ($0.30–$0.40 an hour) running in data centers. I haven't had any issues with them yet.

Other options I found:

  • RunPod: $0.44/hr for a 3090
  • Genesis Cloud: $0.30/hr for a 3080
  • Vast.ai: also really cheap GPUs, but I've heard mixed things about them
  • Google Cloud: a T4 for, I believe, $0.35/hr

I’m not paid by any of these companies to promote their stuff. Hopefully this comment helps someone

[–] edsgoode@alien.top 1 points 10 months ago

We're currently building the "Kayak for GPU clouds" at https://shadeform.ai

You can currently check out every provider's prices and availability, as well as launch instances into clouds or schedule them ahead of time if they aren't available.

We've noticed that customers these days use multiple providers due to low availability and the massive price discrepancies we see.

[–] demauri_warren@alien.top 1 points 10 months ago

I usually go for the cloud provider with the best price and availability. When choosing an instance type, I check the specs and calculate the VRAM needed for my tasks. I stick to Google's Vertex AI, Colab Pro, and local development for my work.

[–] General_Service_8209@alien.top 1 points 10 months ago

I'm using Lambda Labs, mostly because I'm working on my own and don't have a big budget, and they're the cheapest option I've found for what they offer. Additionally, you can create and terminate instances as you like and are billed by the minute, which also helps me.

I've looked at services like AWS and Azure as well, but those seem designed to make you do everything within their ecosystem, so I'd probably need to spend a week figuring out their tools before I could do anything.

On Lambda Labs, you instead upload your SSH public key and can then access your instances through SSH or JupyterLab, and that's it. Given that all I need to do is set up a venv, clone my repo, and run Ray Tune scripts, this is perfect: there's no unnecessary bloat at all.

As for instance types, 90% of the time I just take whatever is available, since they're usually almost booked out. But if I have the choice, I match the instance to the type of network I'm training. For example, some architectures, like RNNs or anything that needs Fourier transforms, don't benefit as much from recent hardware as, say, attention layers do. In that case you get better value on an older instance; with a transformer, it's the other way around.

[–] Terrible_Button_1763@alien.top 1 points 10 months ago

GCP's and Azure's UI and usability seem better than AWS's.