• roadrunner_ex@lemmy.ca · 4 months ago

      Yes, testing infrastructure is being put in place and some low-hanging-fruit bugs have already been squashed. That bodes well, but it’s still early days, and I imagine there aren’t many GIL-less production deployments out there yet, which is where the real showstoppers will potentially live.

      I’m tentatively optimistic, but threading bugs are sometimes hard to catch.

      • FizzyOrange@programming.dev · 4 months ago

        threading bugs are sometimes hard to catch

        Putting it mildly! Threading bugs are probably the worst class of bugs to debug.

        It’s definitely debatable whether this is worth the risk of those near-impossible bugs. Python is very slow, and multithreading isn’t going to change that: 4x extremely slow is still extremely slow. If you care remotely about performance you need to use a different language anyway.

        • Womble@lemmy.world · 4 months ago

          Python can be extremely slow, but it doesn’t have to be. I recently rewrote a stats program at work and got a ~500x speedup over the original Python and a 10x speedup over the C++ rewrite of that. If you know how Python works and avoid the performance foot-guns, like nested loops over large data, you can often (though not always) get good performance.
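
          For example (a toy sketch, not the actual program; the function names are made up for illustration): computing pairwise squared distances with nested Python loops versus letting numpy do it in one vectorised expression:

          ```python
          import numpy as np

          # Pure-Python nested loops: every element goes through the interpreter.
          def pairwise_sq_dists_loops(points):
              n = len(points)
              out = [[0.0] * n for _ in range(n)]
              for i in range(n):
                  for j in range(n):
                      out[i][j] = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
              return out

          # Vectorised numpy version: the work happens in optimised native code.
          def pairwise_sq_dists_numpy(points):
              pts = np.asarray(points)                  # shape (n, d)
              diff = pts[:, None, :] - pts[None, :, :]  # shape (n, n, d)
              return (diff ** 2).sum(axis=-1)           # shape (n, n)
          ```

          For a few thousand points the loop version is typically orders of magnitude slower, purely from per-element interpreter overhead.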

          • FizzyOrange@programming.dev · 4 months ago

            Unless the C++ code was doing something wrong, there’s literally no way you can write pure Python that’s 10x faster than it. Something else is going on there. Maybe the C++ code was accidentally O(N²) or something.

            In general, Python will be 10-200 times slower than C++; 50x slower is typical.

            • Womble@lemmy.world · 4 months ago

              Nope. If you’re working on large arrays of data you can get significant speed-ups using well-optimised, vectorised BLAS functions (via numpy), which beats simply-written C++ that operates on each array element in turn. There’s also Numba, which uses LLVM to JIT-compile a subset of Python for compiled performance, though I didn’t go that far in this case.

              You could link the BLAS libraries from C++ too, but it’s significantly more work than just importing numpy from Python.
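
              Roughly what that looks like (illustrative only, not the stats program in question): a single numpy expression dispatches to the BLAS/vectorised routines numpy was built against, so the heavy lifting runs in native code.

              ```python
              import numpy as np

              # Made-up data sizes for illustration.
              a = np.random.rand(2000, 2000)
              b = np.random.rand(2000, 2000)

              # One call; numpy hands this to the BLAS matrix-multiply routine it links against.
              c = a @ b

              # Element-wise maths and reductions are likewise vectorised:
              normalised = (a - a.mean(axis=0)) / a.std(axis=0)
              ```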

              • FizzyOrange@programming.dev · 4 months ago

                numpy

                Numpy is written in C.

                Numba

                Numba is interesting… But a) it can already do multithreading, so this change makes little difference to it, and b) it’s still not going to be as fast as C++ (obviously we don’t count the GPU backend).
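
                For reference, that existing Numba parallelism looks something like this (toy example, not anyone’s real workload):

                ```python
                import numpy as np
                from numba import njit, prange

                # Numba JIT-compiles this and distributes the prange loop across threads,
                # independently of CPython's GIL-removal work.
                @njit(parallel=True)
                def row_sums(x):
                    out = np.empty(x.shape[0])
                    for i in prange(x.shape[0]):
                        out[i] = x[i, :].sum()
                    return out

                data = np.random.rand(10_000, 1_000)
                print(row_sums(data)[:5])
                ```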

                • Womble@lemmy.world · edited · 4 months ago

                  Numpy is written in C.

                  Python is written in C too; what’s your point? I’ve seen this argument a few times, and I find it bizarre that “easily able to incorporate highly optimised Fortran and C numerical routines” is somehow portrayed as a point against Python.

                  Numpy is a de facto extension to the Python standard that adds first-class support for single-type multi-dimensional arrays and functions for working on them. It is implemented in a mixture of Python and C (about 60% Python according to GitHub), interfaces with Python’s C API, and links in specialist libraries for its operations. You could write the same statement about parts of the Python standard library; is that also not Python?

                  It’s hard to overstate how much simpler development is in numpy than in C++. In this example, the new Python version was less than 50 lines and was developed in an afternoon; the C++ version was closing in on 1000 lines across 6 files.

                  • FizzyOrange@programming.dev · 4 months ago

                    Python is written in C too, what’s your point?

                    The point is that eliminating the GIL mainly benefits pure Python code. Numpy is already multithreaded.
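
                    I.e. the kind of code that stands to gain is plain CPU-bound Python like this (hypothetical workload): with the GIL the threads take turns; on a free-threaded build they can actually run on separate cores.

                    ```python
                    import threading

                    # CPU-bound pure-Python work: count primes in a range (illustrative only).
                    def count_primes(lo, hi, results, idx):
                        count = 0
                        for n in range(lo, hi):
                            if n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1)):
                                count += 1
                        results[idx] = count

                    results = [0] * 4
                    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]
                    threads = [threading.Thread(target=count_primes, args=(lo, hi, results, i))
                               for i, (lo, hi) in enumerate(chunks)]
                    for t in threads:
                        t.start()
                    for t in threads:
                        t.join()
                    print(sum(results))
                    # With the GIL, these threads serialise; without it, they can use multiple cores.
                    # Numpy already does its heavy work in native code, so it gains far less.
                    ```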

                    I think you may have forgotten what we’re talking about.

                    the new python version was less than 50 lines and was developed in an afternoon, the c++ version was closing in on 1000 lines over 6 files.

                    That’s a bit suss too, tbh. Did the C++ version also use an existing library like Eigen, or did they implement everything from scratch?

                • HyperCube@kbin.run · 4 months ago

                  Numpy is written in C.

                  So you get the best of both worlds then: the speed of C and the ease of use of Python.

                  • FizzyOrange@programming.dev · 4 months ago

                    Sure, but that’s not relevant to the current discussion. The point is that removing the GIL doesn’t affect Numpy, because Numpy is written in C.