this post was submitted on 02 Oct 2025

Programming

[–] frezik@lemmy.blahaj.zone 0 points 5 days ago* (last edited 5 days ago) (1 children)

This is all problem specific, but I have found that C code integrated via ctypes, cffi, or a C extension is over 100x faster than Python alone. Interestingly, Python, Numba, and NumPy together, which is a more Pythonic solution, can get to those speeds too.

Of course you did. Those are changing the semantics of the language. For example, libraries like NumPy store arrays the way C does rather than the way Python does. That makes all the difference, not merely compiling to native code.
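To make the storage point concrete, here is a minimal sketch (illustrative only, not from either comment; it assumes NumPy is installed) comparing a pure-Python loop over a list of boxed floats with a NumPy reduction over a contiguous C-style array of doubles:

```python
import timeit

import numpy as np

N = 1_000_000
py_list = [float(i) for i in range(N)]     # list of pointers to separate float objects
np_array = np.arange(N, dtype=np.float64)  # one contiguous block of C doubles

def python_sum(xs):
    total = 0.0
    for x in xs:                           # interpreter loop, unboxing every element
        total += x
    return total

py_time = timeit.timeit(lambda: python_sum(py_list), number=10)
np_time = timeit.timeit(lambda: np_array.sum(), number=10)
print(f"pure Python: {py_time:.3f}s  NumPy: {np_time:.3f}s  ratio: {py_time / np_time:.0f}x")
```

The exact ratio depends on the machine and Python version, but the gap comes from the memory layout and the native loop, not just from compiling to native code.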

[–] furrowsofar@beehaw.org 3 points 5 days ago* (last edited 5 days ago)

You can get about 10x by running Python under PyPy, which JIT-compiles it, so compilation is not nothing. Using NumPy alone is about 5x, which surprised me. There is a lot of misleading stuff out there about how to make Python fast. A lot of people say CPython is pretty fast, or that using a binary library like NumPy is fast. No, CPython is very slow, and libraries are not always that fast.
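For anyone who wants to check that kind of gap themselves, here is a minimal, loop-heavy pure-Python sketch (illustrative only; the ~10x figure above is the commenter's own measurement) that can be run unchanged under CPython and PyPy:

```python
import time

def mandelbrot_point(cr, ci, max_iter=200):
    # Plain-Python escape-time loop: no NumPy, just floats and a hot loop.
    zr = zi = 0.0
    for i in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return i
    return max_iter

start = time.perf_counter()
total = 0
for x in range(300):
    for y in range(300):
        total += mandelbrot_point(-2.0 + 3.0 * x / 300, -1.5 + 3.0 * y / 300)
print(f"checksum={total}  elapsed={time.perf_counter() - start:.2f}s")
```

Running the same file with `python3` and with `pypy3` shows how much of the cost is the interpreter loop itself.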

Edit: Another compiler is Numba, which is more specialized. It can get 30x on some code without NumPy. Again, compiling can help.
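A minimal sketch of what that looks like (illustrative only; it assumes Numba is installed, and the 30x figure is the commenter's own measurement):

```python
from numba import njit

@njit
def pairwise_sum(n):
    # Plain-Python loops over floats, no NumPy arrays; Numba compiles this to native code.
    total = 0.0
    for i in range(n):
        for j in range(i):
            total += (i - j) * 0.5
    return total

pairwise_sum(10)          # first call triggers JIT compilation
print(pairwise_sum(5_000))
```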