I mean that there are successive steps that transform the entire source into tokens, the tokens into an AST, and the AST into some intermediate or final form.
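For illustration, here's a minimal sketch of those stages using CPython's own standard-library modules (the one-line program is made up for the example):

```python
import ast
import dis
import io
import tokenize

src = "x = 1 + 2 * 3\n"  # made-up one-line program

# Stage 1: source text -> tokens
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tok.type, repr(tok.string))

# Stage 2: tokens -> AST (ast.parse redoes the tokenization internally)
tree = ast.parse(src)
print(ast.dump(tree))

# Stage 3: AST -> an intermediate form, here CPython bytecode
dis.dis(compile(tree, "<example>", "exec"))
```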
No, it doesn't. Take a look at any of the many projects that have attempted to compile Java to native code over the years. You'd be lucky to see any substantive gain at all. They are sometimes useful for packaging everything up into a single distributable binary, but you don't do it for speed.
Things like C and Rust are fast because their language semantics lend themselves to efficient compilation.
We will have to disagree on that. This is all problem-specific, but I have found that C code integrated via ctypes, cffi, or a C extension runs over 100x faster than Python alone. Interestingly, Python, Numba, and NumPy together, which is a more Pythonic solution, can reach those speeds too.
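As a minimal sketch of the ctypes route (assuming a Unix-like system; the library lookup is platform-specific, and `sqrt` merely stands in for your own compiled C code):

```python
import ctypes
import ctypes.util

# Load the C math library; find_library("m") is Unix-specific.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # runs the compiled C routine, no Python-level work
```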
All of the other approaches I have tried are much slower: Nuitka, Cython, NumPy alone, PyPy, etc.
To get the best speeds, one has to compile for the specific target architecture and enable things like vectorization, auto-parallelization, and fast math. Most default builds, including libraries, do not do that.
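As a sketch of what that can look like when building a C extension (the package and source names here are hypothetical, and the flags are GCC spellings; `-ffast-math` relaxes IEEE floating-point rules, so use it with care):

```python
from setuptools import Extension, setup

setup(
    name="fastext",  # hypothetical package name
    ext_modules=[
        Extension(
            "fastext",
            sources=["fastext.c"],  # hypothetical C source file
            extra_compile_args=[
                "-O3",                         # enables auto-vectorization on GCC
                "-march=native",               # tune for the build machine's CPU
                "-ffast-math",                 # "fast math": relaxed FP semantics
                "-ftree-parallelize-loops=4",  # GCC's loop auto-parallelizer
            ],
            extra_link_args=["-fopenmp"],      # parallelized loops link against libgomp
        )
    ],
)
```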
Of course you did. Those are changing the semantics of the language. For example, libraries like NumPy store arrays more the way C does than the way Python does. That makes all the difference, not merely compiling to native code.
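You can see that storage difference from Python itself (the sizes shown are typical of 64-bit CPython and NumPy):

```python
import sys
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)  # one contiguous buffer of C doubles
lst = list(range(1_000_000))                # an array of pointers to boxed objects

print(a.flags["C_CONTIGUOUS"])  # True: flat C-style memory layout
print(a.itemsize)               # 8 bytes per element, no per-element header
print(sys.getsizeof(lst[0]))    # ~28 bytes for a single boxed Python int
```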
You can get about 10x by compiling Python using PyPy, so compiling is not nothing. Using NumPy alone is about 5x, which surprised me. There is a lot of misleading stuff out there about how to make Python fast. A lot of people say CPython is pretty fast, or that using a binary library like NumPy is fast. No: CPython is very slow, and libraries are not always that fast.
Edit: Another compiler is Numba, which is more specialized. It can get 30x on some code without NumPy. Again, compiling can help.
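A minimal Numba sketch (the loop itself is invented for the example; the point is that `@njit` compiles plain-Python numeric code to machine code, no NumPy required):

```python
from numba import njit

@njit
def pairwise_sum(n):
    # Plain Python loops: Numba compiles this function to machine code.
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += (i - j) * (i - j)
    return total

pairwise_sum(10)           # first call triggers JIT compilation
print(pairwise_sum(2000))  # later calls run the compiled version
```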