this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I know the typical answer is "no, because all the libs are in Python," but I'm kind of baffled why more porting isn't going on, especially to Go. Like Python, Go is stupid easy to learn, yet it's much faster to run. Truly not trying to start a flame war or anything. I'm just a bigger fan of Go than Python, and I was thinking that coming into 2024, especially with all the huge money in AI now, we'd see a LOT more movement toward the much faster runtime of Go, which is largely as easy, if not easier, to write and maintain code in. Not sure about Rust; it may run a little faster than Go, but the language is much more difficult to learn and use. It has been growing in popularity, though, so I was curious whether it's a potential option.

There are some Go libs I've found, but the few I have seen seem to be three, four, or more years old. I was hoping there would be things like PyTorch and the like converted to Go.

I was even curious, with the power of GPT-4 or DeepSeek Coder or similar, how hard it would be to run conversions from Python libraries to Go. Is anyone working on that, or is it pretty much impossible to do?

[–] Dry-Vermicelli-682@alien.top 1 points 11 months ago (5 children)

But what about the actual training code? Isn't there a crap ton of code that trains the model so that it can respond with NLP and other capabilities? There has to be code behind all that somewhere. The "logic" of the AI, the code that lets it do what it does, is Python as well, yeah? I would assume that in Go or Rust or C it would execute much faster, and the AI could be much faster (and use less memory, with no Python runtime, etc.). Or is there already some backend C/C++ code that does ALL the AI logic/guts, and Python, even for training models, is still just glue that calls into the C/C++ layer?

[–] m98789@alien.top 1 points 11 months ago (4 children)

Correct. Even for training the models, all the Python code you see is really just a friendly interface over highly optimized C/CUDA code.

There are no "loops" or matrix multiplications being done in Python. All the heavy lifting is done in lower-level, highly optimized code.
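You can see this division of labor for yourself. A minimal sketch (using NumPy as the stand-in for a compiled backend, since the same pattern applies to PyTorch): a matrix multiply written in plain Python loops runs in the interpreter, while `a @ b` dispatches straight to compiled BLAS code.

```python
# Why the "heavy lifting" can't live in Python: a naive pure-Python matrix
# multiply vs. NumPy's `@`, which dispatches to optimized compiled code.
import time
import numpy as np

def matmul_pure_python(a, b):
    """Triple-loop matrix multiply, executed entirely by the Python interpreter."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
slow = matmul_pure_python(a.tolist(), b.tolist())
t_python = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b  # same math, but executed in compiled BLAS, not Python bytecode
t_numpy = time.perf_counter() - t0

print(f"pure Python: {t_python:.3f}s  NumPy: {t_numpy:.5f}s")
```

The gap is typically several orders of magnitude, which is exactly why rewriting the Python layer in Go wouldn't speed up training much: the interpreter is only orchestrating, not computing.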

[–] Dry-Vermicelli-682@alien.top 1 points 11 months ago (3 children)

So most of the Python AI coding folks aren't writing CUDA or high-end math/algorithm code; they're just using the library, similar to any other SDK?

[–] m98789@alien.top 1 points 11 months ago (1 children)

Yes. Even the authors of AI frameworks like PyTorch aren't usually writing the low-level CUDA code for NNs. They are wrapping the cuDNN library from NVIDIA, which contains highly optimized CUDA kernels for NN operations.
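The wrapping pattern itself is easy to demonstrate. This toy sketch uses `ctypes` to call `cos` from the C math library; it is not how PyTorch binds cuDNN (that uses C++ extension modules), but the shape is the same: a Python function signature in front, compiled code doing the actual work.

```python
# Toy illustration of Python as a thin interface over a compiled C library:
# calling libm's cos() through ctypes. The Python side is just glue.
import ctypes
import ctypes.util
import math

# Load the C math library (name lookup varies by platform; fall back on glibc's).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

def c_cos(x: float) -> float:
    """Looks like an ordinary Python function; the work happens in compiled C."""
    return libm.cos(x)

print(c_cos(0.0), c_cos(math.pi))
```

PyTorch, TensorFlow, and the rest scale this idea up: thousands of Python-facing functions, each forwarding to C, C++, or CUDA kernels. A Go port would end up wrapping the same compiled libraries via cgo.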

[–] East_Layer6606@alien.top 1 points 11 months ago

The deeper you go the less abstraction you get and the more stupid you feel
