General_Service_8209

joined 10 months ago
[–] General_Service_8209@alien.top 1 points 10 months ago

I’m using Lambdalabs, mostly because I’m working on my own and don’t have a high budget, and they’re the cheapest option I’ve found for what they offer. You can also create and terminate instances whenever you like and are billed for them by the minute, which helps as well.

I’ve looked at services like AWS and Azure as well, but those seem more like they want you to do everything within their ecosystem. So I’d probably need to spend a week or so figuring out how to use their tools before I could do anything.

On Lambdalabs, you instead upload your SSH public key and can then access your instances either through SSH or Jupyterlab, and that’s it. Given that all I need to do is set up a venv, clone my repo, and run Ray Tune scripts, this is perfect since there’s no unnecessary bloat at all.
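
For reference, the scripts themselves are nothing special. A minimal Ray Tune run looks roughly like this (classic `tune.run` API; the objective and search space here are just placeholders, not my actual training code):

```python
from ray import tune

def objective(config):
    # Placeholder "training" loop -- the real script builds the model,
    # trains it, and reports a validation metric here instead.
    score = 0.0
    for _ in range(10):
        score += config["lr"]        # stand-in for the metric being optimized
        tune.report(score=score)     # let Ray Tune track progress per step

analysis = tune.run(
    objective,
    config={"lr": tune.loguniform(1e-5, 1e-2)},  # placeholder search space
    num_samples=8,                               # number of trials to run
    metric="score",
    mode="max",
)
print(analysis.best_config)
```

Once the venv is set up and the repo is cloned, that’s the only thing that has to run on the instance.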

As for instance types, 90% of the time I just take whatever is available, since they’re usually almost booked out. But if I have the choice, I match it to the type of network I’m training. For example, some architectures like RNNs, or anything that needs Fourier transforms, don’t benefit as much from recent hardware as, say, attention layers. So in that case you get better value on an older instance, but if you have a transformer, it’s the other way around.

[–] General_Service_8209@alien.top 1 points 10 months ago

It sounds like you’ve come across exactly what I meant.

I have a couple of papers on the topic if you’re interested in those. There’s also a PyTorch implementation of a neural state space model by the authors of the original paper: https://github.com/HazyResearch/state-spaces

[–] General_Service_8209@alien.top 1 points 10 months ago (2 children)

State space models and their derivatives.

They have demonstrated better performance than Transformers on very long sequences, with computational cost that is linear rather than quadratic in sequence length, and on paper they also generalize better to non-NLP tasks.

However, training them is more difficult, so they perform worse in practice outside of these few very long sequence tasks. But with a bit more development, they could become the most impactful AI technology in years.
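
To make the linear vs. quadratic point concrete, here is a toy NumPy sketch of the basic state space recurrence (not the S4 code from the repo linked above; real models parameterize the matrices in special ways and compute the whole sequence with an FFT-based convolution rather than a Python loop):

```python
import numpy as np

# Toy discrete state space model: x_{k+1} = A x_k + B u_k,  y_k = C x_k + D u_k.
# Each output only needs the previous state, so the cost is linear in sequence
# length, unlike attention, which compares every position with every other one.
def ssm_scan(A, B, C, D, u):
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k
        ys.append(C @ x + D * u_k)
    return np.array(ys)

rng = np.random.default_rng(0)
n = 16                                  # state dimension (arbitrary for the toy)
A = 0.1 * rng.normal(size=(n, n))       # real models constrain A (e.g. HiPPO init)
B = rng.normal(size=n)
C = rng.normal(size=n)
u = rng.normal(size=1024)               # input sequence of length 1024
print(ssm_scan(A, B, C, 0.0, u).shape)  # (1024,)
```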