this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


I know the typical answer is "no, because all the libs are in Python"... but I'm kind of baffled why more porting isn't going on, especially to Go, given that Go, like Python, is stupid easy to learn, yet much faster to run. Truly not trying to start a flame war or anything. I'm just a bigger fan of Go than Python, and I was thinking that coming into 2024, especially with all the huge money in AI now, we'd see a LOT more movement toward the much faster runtime of Go, which is largely as easy, if not easier, to write and maintain code in. Not sure about Rust... it may run a little faster than Go, but the language is much more difficult to learn and use. It has been growing in popularity, though, so I was curious whether it's a potential option.

There are some Go libs I've found, but the few I have seen seem to be three, four, or more years old. I was hoping there would be things like PyTorch and the like converted to Go.

I was even curious, with the power of GPT-4 or DeepSeek Coder or similar, how hard it would be to run conversions of Python libraries to Go. Is anyone working on that, or is it pretty much impossible to do?

[–] squareOfTwo@alien.top 1 points 11 months ago (2 children)

Python is a degenerate language without strong typing etc., which will die out at some point, just like Perl or COBOL. Don't listen to shortcut answers like "Python is only glue!!!!".

Not all ML workloads are best written in Python.

Use your tools wisely.

[–] Dry-Vermicelli-682@alien.top 1 points 11 months ago (1 children)

I agree with you here. I suspect, from most answers I've read here and elsewhere, that Python was chosen way back when for its dynamic and creative capabilities, that for most cases it was fast enough, and it just stuck. It's also "easier" for the less code-capable folks to learn and dabble with because it's dynamic: it's easier to grasp assigning any value to a variable than declaring a specific type for each variable and being stuck using it for just that type.

But I would think (and thus my reason for asking) that today, nearing 2024, with all our experience in languages, threading, ever-growing CPU core counts, etc., we'd want a faster native/binary runtime to really take advantage of today's hardware, rather than an interpreted, effectively single-threaded, resource-heavy language like Python or Node.js.
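To make the multi-core point concrete, here is a minimal Go sketch (names like `parallelSum` are my own, just for illustration) of spreading a reduction across goroutines, the kind of in-process parallelism that Python's GIL makes awkward:

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum splits the slice across nWorkers goroutines and
// adds up the partial sums. Each worker writes only to its own
// slot in partial, so no locking is needed beyond the WaitGroup.
func parallelSum(xs []float64, nWorkers int) float64 {
	var wg sync.WaitGroup
	partial := make([]float64, nWorkers)
	chunk := (len(xs) + nWorkers - 1) / nWorkers
	for w := 0; w < nWorkers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if hi > len(xs) {
			hi = len(xs)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range xs[lo:hi] {
				partial[w] += v
			}
		}(w, lo, hi)
	}
	wg.Wait()
	total := 0.0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	xs := make([]float64, 1_000_000)
	for i := range xs {
		xs[i] = 1.0
	}
	fmt.Println(parallelSum(xs, 4)) // prints 1e+06
}
```

Whether this actually beats NumPy-style vectorized Python for real ML workloads is a separate question, since those hand off to native code anyway; this only shows how cheap the parallelism is to express in Go.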

But some have said that the underlying "guts" are compiled C/C++ binaries, and Python (or any language) more or less just calls into those bits; hence the "glue" code. In those cases, I can agree that it may not matter as much.

I was thinking about (and am still trying to learn a ton more about) the training aspect: if that is done on the GPU, and the code driving it can run as fast as possible, it would reduce the time to train, making it possible to train much more specialized models much faster. What do I know, though. I started my AI journey literally a week ago and am trying to cram what I imagine has taken years of development into my old brain.

[–] squareOfTwo@alien.top 1 points 11 months ago

Julia as a language is a good tool for some ML tasks, and some of its libraries are usable for ML, e.g. https://juliapackages.com/p/autograd . Most of it targets the CPU, but there are some GPU libs too.