this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

 

I know the typical answer is "no, because all the libs are in Python"... but I'm kind of baffled why more porting isn't going on, especially to Go, given that Go, like Python, is stupid easy to learn, yet much faster to run. Truly not trying to start a flame war or anything. I'm just a bigger fan of Go than Python, and coming into 2024, especially with all the huge money in AI now, I was thinking we'd see a LOT more movement toward the much faster runtime of Go, while keeping code largely as easy (if not easier) to write and maintain. Not sure about Rust: it may run a little faster than Go, but the language is much more difficult to learn and use. It has been growing in popularity, though, so I was curious whether it's a potential option too.

There are some Go libs I've found, but the few I have seem to be three, four or more years old. I was hoping there would be things like PyTorch and the likes converted to Go.
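To make the idea concrete, here's a minimal sketch (my own illustration, not code from any existing Go library) of the kind of primitive a PyTorch-style port would have to provide: a fully connected layer forward pass, written in plain Go with no dependencies.

```go
package main

import (
	"fmt"
	"math"
)

// Dense computes y = relu(W*x + b) for one fully connected layer.
// W is a rows x cols matrix stored row-major, x has length cols,
// b has length rows.
func Dense(W []float64, rows, cols int, x, b []float64) []float64 {
	y := make([]float64, rows)
	for i := 0; i < rows; i++ {
		sum := b[i]
		for j := 0; j < cols; j++ {
			sum += W[i*cols+j] * x[j]
		}
		y[i] = math.Max(0, sum) // ReLU activation
	}
	return y
}

func main() {
	W := []float64{1, 2, 3, 4, 5, 6} // 2x3 weight matrix
	x := []float64{1, 0, -1}
	b := []float64{3, 0}
	fmt.Println(Dense(W, 2, 3, x, b)) // [1 0]
}
```

The hard part of a real port isn't code like this; it's matching the optimized BLAS/CUDA kernels that the Python frameworks call under the hood.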

I was even curious, with the power of GPT-4 or DeepSeek Coder or similar, how hard would it be to convert Python libraries to Go? Is anyone working on that, or is it pretty much impossible to do?

[–] hanzabu@alien.top 1 points 11 months ago (2 children)

The slow Python you think you're using is in fact optimized C code behind the scenes. The real next evolution is to decouple performance from GPUs, return to a saner GPU/CPU and RAM combo, and create serious new alternatives to CUDA.

[–] Dry-Vermicelli-682@alien.top 1 points 11 months ago (1 children)

Now that would be great. I was in the process of pricing out a 32- and 64-core Threadripper, thinking it would work well for my next desktop and could also run AI stuff locally very nicely, until all this "must use GPU" stuff hit me. So it would be fantastic to be able to take advantage of CPU cores that aren't otherwise in use for things like this.
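Go actually makes the many-core CPU angle easy to try. Here's a small sketch (again my own illustration, not any library's API) of a matrix-vector multiply that splits its rows across all available cores with goroutines: the data-parallel pattern a CPU inference runtime would lean on to keep a Threadripper busy.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// MatVec computes y = W*x, with W a rows x cols matrix stored row-major.
// The rows are divided into contiguous chunks, one per available CPU core,
// and each goroutine writes only its own disjoint slice of y.
func MatVec(W []float64, rows, cols int, x []float64) []float64 {
	y := make([]float64, rows)
	workers := runtime.NumCPU()
	chunk := (rows + workers - 1) / workers // ceil(rows/workers)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > rows {
			hi = rows
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				sum := 0.0
				for j := 0; j < cols; j++ {
					sum += W[i*cols+j] * x[j]
				}
				y[i] = sum
			}
		}(lo, hi)
	}
	wg.Wait()
	return y
}

func main() {
	W := []float64{1, 2, 3, 4} // 2x2 matrix
	x := []float64{1, 1}
	fmt.Println(MatVec(W, 2, 2, x)) // [3 7]
}
```

Of course, this naive loop won't touch hand-tuned SIMD kernels or a GPU on raw throughput; the point is only that spreading work across cores is idiomatic and cheap to express in Go.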

[–] hanzabu@alien.top 1 points 11 months ago

That's what Apple are doing with their M3 architecture. They're not creating those insane specs for the beauty of it; I think they're trying to offload their data centers onto a worldwide fleet of consumer machines, and that's the smart move.