SymPy 0.7.0 released

June 29, 2011

Cross-posted on the official SymPy blog

SymPy 0.7.0 was released on June 28, 2011. It is available at

The source distribution can be downloaded from:

You can get the Windows installer here:

And the html documentation here:

About SymPy

SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.

Changes since last stable release


Backwards compatibility breaks

  • This will be the last release of SymPy to support Python 2.4. Dropping support for Python 2.4 will let us move forward with things like supporting Python 3, and will let us use features that were introduced in Python 2.5, like with-statement context managers.
  • Creating matrices without brackets is no longer supported (see: issue 930)
  • Renamed sum() to summation() (see: 3e763a8, issues 1376, 1727). This was changed so that it no longer overrides the built-in sum(). The unevaluated summation is still called Sum().
  • Renamed abs() to Abs() (see: 64a12a4, issue 1727). This was also changed so that it no longer overrides the built-in abs(). Note that because of __abs__ magic, you can still do abs(expr) with the built-in abs(), and it will return Abs(expr).
  • Renamed max_() and min_() to Max() and Min() (see: 99a271e, issue 2153)
  • Changed behaviour of symbols(). symbols('xyz') now gives a single symbol ('xyz'), not three ('x', 'y' and 'z') (see: f6452a8). Use symbols('x,y,z') or symbols('x y z') to get three symbols. The ‘each_char’ option will still work but is being deprecated.
  • Split class Basic into new classes Expr, Boolean (see: a0ab479, 635d89c). Classes that are designed to be part of standard symbolic expressions (like x**2*sin(x)) should subclass from Expr. More generic objects that do not work in symbolic expressions but still want the basic SymPy structure like .args and basic methods like .subs() should only subclass from Basic.
  • as_basic() method was renamed to as_expr() to reflect changes in the core (see: e61819d, 80dfe91)
  • Methods as_coeff_terms and as_coeff_factors were renamed to as_coeff_mul and as_coeff_add, respectively.
  • Removed the trim() function. The function is redundant with the new polys (see below). Use the cancel() function instead.
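A quick sketch of the renamed interfaces described above (checked against a later SymPy release):

```python
from sympy import Abs, Sum, Symbol, summation, symbols

n = Symbol('n')
x = Symbol('x')

# sum() is the Python built-in again; the evaluating function is summation()
assert summation(n, (n, 1, 10)) == 55
# ...while the unevaluated object is still Sum()
assert Sum(n, (n, 1, 10)).doit() == 55

# abs() still works on expressions via __abs__, returning Abs
assert abs(x) == Abs(x)

# symbols('xyz') is now a single symbol; use commas or spaces for several
assert symbols('xyz') == Symbol('xyz')
a, b, c = symbols('a b c')
assert (a.name, b.name, c.name) == ('a', 'b', 'c')
```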

Major Changes

Polys

  • New internal representations of dense and sparse polynomials (see: 6aecdb7, 31c9aa4)
  • Implemented algorithms for real and complex root isolation and counting (see: 3acac67, 4b75dae, fa1206e, 103b928, 45c9b22, 8870c8b, b348b30)
  • Improved Gröbner bases algorithm (see: ff65e9f, 891e4de, 310a585)
  • Field isomorphism algorithm (see: b097b01, 08482bf)
  • Implemented efficient orthogonal polynomials (see: b8fbd59)
  • Added configuration framework for polys (see: 33d8cdb, 7eb81c9)
  • Function for computing minimal polynomials (see: 88bf187, f800f95)
  • Function for generating Viete’s formulas (see: 1027408)
  • roots() supports more classes of polynomials (e.g. cyclotomic) (see: d8c8768, 75c8d2d)
  • Added a function for recognizing cyclotomic polynomials (see: b9c2a9a)
  • Added a function for computing Horner form of polynomials (see: 8d235c7)
  • Added a function for computing symmetric reductions of polynomials (see: 6d560f3)
  • Added generators of Swinnerton-Dyer, cyclotomic, symmetric, random and interpolating polynomials (see: dad03dd, 6ccf20c, dc728d6, 2f17684, 3004db8)
  • Added a function computing isolation intervals of algebraic numbers (see: 37a58f1)
  • Polynomial division (div(), rem(), quo()) now defaults to a field (see: a72d188)
  • Added wrappers for numerical root finding algorithms (see: 0d98945, f638fcf)
  • Added symbolic capabilities to factor(), sqf() and related functions (see: d521c7f, 548120b, f6f74e6, b1c49cd, 3527b64)
  • together() was significantly improved (see: dc327fe)
  • Added support for iterable containers to gcd() and lcm() (see: e920870)
  • Added a function for constructing domains from coefficient containers (see: a8f20e6)
  • Implemented greatest factorial factorization (see: d4dbbb5)
  • Added partial fraction decomposition algorithm based on undetermined coefficient approach (see: 9769d49, 496f08f)
  • RootOf and RootSum were significantly improved (see: f3e432, 4c88be6, 41502d7)
  • Added support for gmpy (GNU Multiple Precision Arithmetic Library) (see: 38e1683)
  • Allow compiling sympy.polys with Cython (see: afb3886)
  • Improved configuration of variables in Poly (see: 22c4061)
  • Added documentation based on Wester’s examples (see: 1c23792)
  • Irreducibility testing over finite fields (see: 17e8f1f)
  • Allow symmetric and non-symmetric representations over finite fields (see: 60fbff4)
  • More consistent factorization forms from factor() and sqf() (see: 5df77f5)
  • Added support for automatic recognition of algebraic extensions (see: 7de602c)
  • Implemented Collins’ modular algorithm for computing resultants (see: 950969b)
  • Implemented Berlekamp’s algorithm for factorization over finite fields (see: 70353e9)
  • Implemented Trager’s algorithm for factorization over algebraic number fields (see: bd0be06)
  • Improved Wang’s algorithm for efficient factorization of multivariate polynomials (see: 425e225)
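A few of the polys entry points named above, sketched with the function names as they exist in current SymPy:

```python
from sympy import Symbol, cancel, cyclotomic_poly, horner, minimal_polynomial, sqrt

x = Symbol('x')

# cancel() subsumes the removed trim()
assert cancel((x**2 - 1)/(x - 1)) == x + 1

# generator for cyclotomic polynomials: Phi_6(x) = x**2 - x + 1
assert cyclotomic_poly(6, x) == x**2 - x + 1

# Horner form of a dense polynomial
assert horner(x**3 + 2*x**2 + 3*x + 4) == x*(x*(x + 2) + 3) + 4

# minimal polynomial of an algebraic number over the rationals
assert minimal_polynomial(sqrt(2), x) == x**2 - 2
```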

Quantum

  • Symbolic, abstract Dirac notation in sympy.physics.quantum. This includes operators, states (bras and kets), commutators, anticommutators, dagger, inner products, outer products, tensor products and Hilbert spaces
  • Symbolic quantum computing framework that is based on the general capabilities in sympy.physics.quantum. This includes qubits (sympy.physics.quantum.qubit), gates (sympy.physics.quantum.gate), Grover’s algorithm (sympy.physics.quantum.grover), the quantum Fourier transform (sympy.physics.quantum.qft), Shor’s algorithm (sympy.physics.quantum.shor) and circuit plotting (sympy.physics.quantum.circuitplot)
  • Second quantization framework that includes creation/annihilation operators for both Fermions and Bosons and Wick’s theorem for Fermions (sympy.physics.secondquant).
  • Symbolic quantum angular momentum (spin) algebra (sympy.physics.quantum.spin)
  • Hydrogen wave functions (Schroedinger) and energies (both Schroedinger and Dirac)
  • Wave functions and energies for 1D harmonic oscillator
  • Wave functions and energies for 3D spherically symmetric harmonic oscillator
  • Wigner and Clebsch Gordan coefficients
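A small taste of the new quantum machinery (import paths as in current SymPy):

```python
from sympy import sqrt
from sympy.physics.quantum.dagger import Dagger
from sympy.physics.quantum.gate import H
from sympy.physics.quantum.qapply import qapply
from sympy.physics.quantum.qubit import Qubit
from sympy.physics.quantum.state import Bra, Ket

# Dagger turns a ket into the corresponding bra
assert Dagger(Ket('a')) == Bra('a')

# a Hadamard gate applied to |0> gives an equal superposition
state = qapply(H(0)*Qubit('0'))
assert state == Qubit('0')/sqrt(2) + Qubit('1')/sqrt(2)
```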

Everything else

  • Implement symarray, providing numpy nd-arrays of symbols.
  • update mpmath to 0.16
  • Added a tensor module
  • A lot of stuff was being imported with from sympy import * that shouldn’t have been (like sys). This has been fixed.


Concrete

  • Finalized implementation of Gosper’s algorithm (see: 0f187e5, 5888024)
  • Removed redundant Sum2 and related classes (see: ef1f6a7)

Core

  • Split Atom into Atom and AtomicExpr (see: 965aa91)
  • Various sympify() improvements
  • Added functionality for action verbs (many functions can be called both as global functions and as methods e.g. a.simplify() == simplify(a))
  • Improve handling of rational strings (see: 053a045, issue 1778)
  • Major changes to factoring of integers (see: 273f450, issue 2003)
  • Optimized .has() (see: c83c9b0, issue 1980; d86d08f)
  • Improvements to power (see: c8661ef, issue 1963)
  • Added range and lexicographic syntax to symbols() and var() (see: f6452a8, 9aeb220, 957745a)
  • Added modulus argument to expand() (see: 1ea5be8)
  • Allow converting Interval to relational form (see: 4c269fe)
  • SymPy won’t manipulate minus sign of expressions any more (see: 6a26941, 9c6bf0f, e9f4a0a)
  • Real and .is_Real were renamed to Float and .is_Float. Real and .is_Real still remain as deprecated shortcuts to Float and .is_Float for backwards compatibility. (see: abe1c49)
  • Methods coeff and as_coefficient are now non-commutative aware. (see a4ea170)
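The “action verbs” item above in one line (method and function forms agree):

```python
from sympy import Symbol, simplify

x = Symbol('x')
e = (x**2 + 2*x + 1)/(x + 1)

# most simplification functions can also be called as methods
assert e.simplify() == simplify(e) == x + 1
```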

Geometry

  • Various improvements to Ellipse
  • Updated documentation to numpy standard
  • Polygon and Line improvements
  • Allow all geometry objects to accept a tuple as Point args
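A sketch of the tuples-as-Points convenience:

```python
from sympy.geometry import Line, Point

# plain tuples are accepted wherever a Point is expected
l = Line((0, 0), (1, 1))
assert l.contains(Point(2, 2))
assert not l.contains(Point(0, 1))
```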

Integrals

  • Various improvements (see e.g. issues 1772, 1999, 1992, 1987, etc.)

isympy

  • Fixed the -p switch (see: e8cb04a)
  • Caching can be disabled using -C switch (see: 0d8d748)
  • Ground types can be set using -t switch (see: 75734f8)
  • Printing ordering can be set using -o switch (see: fcc6b13, 4ec9dc5)

Logic

  • The Implies object adheres to negation normal form
  • Create new boolean class, logic.boolalg.Boolean
  • Added XOR operator (^) support
  • Added If-then-else (ITE) support
  • Added the dpll algorithm
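The XOR and DPLL items sketched (satisfiable() lives in sympy.logic.inference in current releases):

```python
from sympy.abc import A, B
from sympy.logic.boolalg import Xor
from sympy.logic.inference import satisfiable

# XOR via the ^ operator builds the same object as Xor()
assert (A ^ B) == Xor(A, B)

# satisfiable() is backed by DPLL: a model for satisfiable input, False otherwise
assert satisfiable(A & ~A) is False
model = satisfiable(A & ~B)
assert model[A] and not model[B]
```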

Functions

  • Added Piecewise, B-splines
  • Spherical Bessel function of the second kind implemented
  • Add series expansions of multivariate functions (see: d4d351d)
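A minimal Piecewise example:

```python
from sympy import Piecewise, Symbol

x = Symbol('x')
# a ramp: 0 for negative x, x otherwise
p = Piecewise((0, x < 0), (x, True))
assert p.subs(x, -3) == 0
assert p.subs(x, 2) == 2
```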

Matrices

  • Add elementwise product (Hadamard product)
  • Extended QR factorization to general full-rank m×n matrices
  • Remove deprecated functions zero(), zeronm(), one() (see: 5da0884)
  • Added cholesky and LDL factorizations, and respective solves.
  • Added functions for efficient triangular and diagonal solves.
  • SMatrix was renamed to SparseMatrix (see: acd1685)
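The Cholesky and Hadamard items above, sketched:

```python
from sympy import Matrix

# Cholesky factorization of a symmetric positive-definite matrix: A == L*L.T
A = Matrix([[25, 15, -5], [15, 18, 0], [-5, 0, 11]])
L = A.cholesky()
assert L*L.T == A

# elementwise (Hadamard) product
B = Matrix([[1, 2], [3, 4]])
C = Matrix([[5, 6], [7, 8]])
assert B.multiply_elementwise(C) == Matrix([[5, 12], [21, 32]])
```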

Physics

  • See the Quantum section

Printing

  • Implemented pretty printing of binomials (see: 58c1dad)
  • Implemented pretty printing of Sum() (see: 84f2c22, 95b4321)
  • sympy.printing now supports ordering of terms and factors (see: 859bb33)
  • Lexicographic order is now the default. Now finally things will print as x**2 + x + 1 instead of 1 + x + x**2, however series still print using reversed ordering, e.g. x - x**3/6 + O(x**5). You can get the old order (and other orderings) by setting the -o option to isympy (see: 08b4932, a30c5a3)
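The new default ordering in one line:

```python
from sympy import Symbol

x = Symbol('x')
# highest-degree terms now print first
assert str(x**2 + x + 1) == 'x**2 + x + 1'
```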

Series

  • Implement a function to calculate residues, residue()
  • Implement nseries and lseries to handle x0 != 0, series should be more robust now (see: 2c99999, issues 2122-2124)
  • Improvements to Gruntz algorithm
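The residue() item sketched:

```python
from sympy import Symbol, residue

x = Symbol('x')
# residue of f at x0: the coefficient of 1/(x - x0) in the Laurent series
assert residue(1/x, x, 0) == 1
assert residue(1/(x*(x - 1)), x, 1) == 1
```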

Simplify

  • Added use() (see: 147c142)
  • ratsimp() now uses cancel() and reduced() (see: 108fb41)
  • Implemented EPath (see: 696139d, bf90689)
  • a new keyword rational was added to nsimplify which will replace Floats with Rational approximations. (see: 053a045)
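The rational keyword sketched:

```python
from sympy import Rational, nsimplify

# rational=True replaces Floats with exact Rational equivalents
assert nsimplify(0.25, rational=True) == Rational(1, 4)
```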

Solvers

  • ODE improvements (see: d12a2aa, 3542041; 73fb9ac)
  • Added support for solving inequalities (see: 328eaba, 8455147, f8fcaa7)

Utilities

  • Improve cartes, for generating the Cartesian product (see: b1b10ed)
  • Added a function computing topological sort of graphs (see: b2ce27b)
  • Allow setting up a customized printer in lambdify() (see: c1ad905)
  • flatten() was significantly improved (see: 31ed8d7)
  • Major improvements to the Fortran code generator (see: 3383aa3, 7ab2da2, etc.)
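The topological sort utility sketched (the (vertices, edges) graph format follows the current docstring):

```python
from sympy.utilities.iterables import topological_sort

V = [2, 3, 5, 7, 8, 9, 10, 11]
E = [(7, 11), (7, 8), (5, 11), (3, 8), (3, 10),
     (11, 2), (11, 9), (11, 10), (8, 9)]
order = topological_sort((V, E))

# every edge (u, v) must place u before v in the resulting ordering
assert all(order.index(u) < order.index(v) for u, v in E)
assert sorted(order) == sorted(V)
```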

In addition to the more noticeable changes listed above, there have been numerous other smaller additions, improvements and bug fixes in the ~2000 commits in this release. See the git log for a full list of all changes. The command git log sympy-0.6.7..sympy-0.7.0 will show all commits made between this release and the last. You can also see the issues closed since the last release here.


The following people contributed at least one patch to this release (names are given in alphabetical order by last name). A total of 64 people contributed to this release. People with a * by their names contributed a patch for the first time for this release. Thirty-seven people contributed for the first time for this release. Over half of the people who contributed to this release contributed for the first time!

Thanks to everyone who contributed to this release!

  • Tom Bachmann*
  • Tomas Bambas*
  • Matthew Brett*
  • Ondřej Čertík
  • Renato Coutinho
  • Addison Cugini*
  • Matt Curry*
  • Raffaele De Feo*
  • Mark Dewing
  • Thomas Dixon*
  • Harold Erbin
  • Pavel Fedotov*
  • Gilbert Gede*
  • Oleksandr Gituliar*
  • Brian Granger
  • Alexey U. Gudchenko*
  • Øyvind Jensen
  • Fredrik Johansson
  • Felix Kaiser
  • Yuri Karadzhov*
  • Gary Kerr*
  • Kibeom Kim*
  • Nicholas J.S. Kinar*
  • Anatolii Koval*
  • Sebastian Krämer
  • Ryan Krauss
  • Gregory Ksionda*
  • Priit Laes
  • Vladimir Lagunov
  • Ronan Lamy
  • Tomo Lazovich*
  • Saptarshi Mandal*
  • David Marek
  • Jack McCaffery*
  • Benjamin McDonald*
  • Aaron Meurer
  • Christian Muise*
  • Óscar Nájera*
  • Jezreel Ng*
  • Sherjil Ozair*
  • Mateusz Paprocki
  • James Pearson
  • Fernando Perez
  • Vladimir Perić*
  • Mario Pernici*
  • Nicolas Pourcelot
  • rayman*
  • Matthew Rocklin*
  • Christian Schubert
  • Andre de Fortier Smit*
  • Chris Smith
  • Cristóvão Sousa*
  • Akshay Srinivasan
  • Vinzent Steinberg
  • Prafullkumar P. Tale*
  • Andy R. Terrel
  • Kazuo Thow*
  • Toon Verstraelen
  • Sean Vig*
  • Luca Weihs*
  • Thomas Wiecki
  • Shai ‘Deshe’ Wyborski*
  • Jeremias Yehdegho*

import_module (the (hopefully) last fix for 0.7.0)

June 24, 2011

So everything seemed to be ready to go for the 0.7.0 release, when someone pointed out a test failure in Python 2.5.

It turned out that there is a bug in numpy (see the numpy issue page for more information), that was causing the quantum module to fail entirely when imported in Python 2.5 with numpy installed.

Because there was no easy way around the bug, the solution was to disable numpy completely in Python 2.5 in the quantum module. But this entailed writing the following code idiom

import sys
import warnings

numpy_supported = True
if sys.version_info < (2, 6):
    warnings.warn("Cannot use numpy in Python 2.4/2.5.")
    numpy_supported = False
else:
    try:
        import numpy as np
    except ImportError:
        numpy_supported = False

in all the half dozen files that import numpy in the quantum module.

So clearly, SymPy needed a more centralized way to handle importing optional external modules. Hence, I wrote the import_module() function. The function attempts to import a module given its name. It returns the module if it can be imported and None if it cannot. It supports checking the version of Python or the version of the library and not importing it if either is too old. It also supports emitting warnings when the module is not available or the version is too old. Thus, the above idiom reduces to

from sympy.external import import_module

np = import_module('numpy', min_python_version=(2, 6))

And that’s it. The function will automatically warn if numpy cannot be imported because the Python version is too old.
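The fallback behavior can be checked directly (the module names here are just examples):

```python
from sympy.external import import_module

# an installed module comes back as the module object...
math_mod = import_module('math')
assert math_mod is not None and math_mod.sqrt(4) == 2.0

# ...a missing one comes back as None instead of raising ImportError
assert import_module('no_such_module_anywhere') is None
```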

Any kind of numpy_supported variable in the code can be replaced by testing the np variable itself, like

if np:
    # Do some numpy stuff
    pass
else:
    # np is None
    # Do whatever you do when it is not available
    pass

This method has an additional advantage, which is that the warnings can be customized by setting variable hooks. So, for example, the test runner can disable all warnings by doing

import sympy.external
sympy.external.importtools.WARN_OLD_VERSION = False
sympy.external.importtools.WARN_NOT_INSTALLED = False

I actually did make the test runner do this, and also set both to True when the SYMPY_DEBUG environment variable is set to True (by default, WARN_NOT_INSTALLED is False and WARN_OLD_VERSION is True).

There are some caveats. First, note that the function does its magic using the built-in __import__() function. To import a submodule (like sympy.quantum), you need to pass some stuff to the fromlist argument of __import__(). It’s also a good idea to pass names to this if you plan to replicate from module import stuff (instead of just import module), because some modules use lazy importing and other magic that prevent you from accessing module.stuff directly without importing stuff first.

To do this, just pass the arguments to __import__() to import_module() using the __import__kwargs keyword argument, like

from sympy.external import import_module
# Do this instead of "from matplotlib import pyplot" or "import matplotlib.pyplot as pyplot"
matplotlib = import_module('matplotlib', __import__kwargs={'fromlist':['pyplot']})
pyplot = matplotlib.pyplot

Second, for module version checking it looks at module.__version__. Some modules use a different method (for example, gmpy). You can use the other method by passing the proper arguments to import_module(). For example, versions of gmpy lower than 1.03 have a bug that prevents its use in SymPy (basically, int(large mpz) did not automatically convert the number to a long). So to import gmpy, but only if it’s version 1.03 or newer, you would use

from sympy.external import import_module
gmpy = import_module('gmpy', min_module_version='1.03',
    module_version_attr='version', module_version_attr_call_args=())

This tells it to check the version of gmpy using gmpy.version(), and to import it only if it’s at least 1.03 (note that this works by the fact that Python lexicographically compares strings and tuples, so '1.02' < '1.03' returns True).
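A note of caution on that trick (my aside, not from the post): lexicographic string comparison only works while the components have the same number of digits; tuples of integers are the robust form.

```python
# fixed-width components compare correctly as strings...
assert '1.02' < '1.03'

# ...but the comparison breaks once digit counts differ
assert '1.9' > '1.10'      # character-wise: '9' > '1'

# tuples of integers compare the way version numbers should
assert (1, 9) < (1, 10)
```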

The sympy.external module is completely independent of the rest of SymPy (it does not call sympy/), so you can use it even outside of SymPy without the performance penalty that importing all of SymPy might bring.

So, hopefully this is the last issue to fix for the 0.7.0 release. You can still test it. If people want, I will create another release candidate. Otherwise, I will release 0.7.0 final on Monday (barring any further problems).

Fixing bugs in the release candidate

June 17, 2011

This week, I mostly worked on fixing bugs that people found in the release candidate I released last week. I discovered right after releasing it that it did not work in Python 2.4 or 2.5 because of a syntax error. So I fixed those and created SymPy 0.7.0.rc2 (source; windows installer).

Things were going pretty smoothly with that, until Renato Coutinho discovered that there were test failures in Python 2.4 on Windows, and that the tests failed if run twice in the same session (i.e., by using test() in isympy). I was able to fix the failures from running the tests twice in a session, which were almost all caused by a test modifying some global value or being written in a way that did not allow it to pass when run twice. But I do not have a Windows machine, so I couldn’t reproduce the Windows failures.

Fortunately, yesterday, Renato, Chris Smith, and I were able to debug the problems over IRC. The main problem was that sympy/core/ had code

INF = 1e1000
MINUS_INF = -1e1000

These were intended to give float('inf') and float('-inf'), respectively (floating point infinity and negative infinity). And they did do this… on all platforms except for Python 2.4 on Windows. From what I can tell from what Chris told me, on that platform it instead gives 1.0. Strangely, float('inf') did not give floating point infinity either. We discovered that float(mpmath_inf) did give floating point infinity, where mpmath_inf is mpmath’s infinity. This of course also works in all other platforms, so changing it made the code work everywhere.

After that, there was only one test failure left in Windows (originally there were dozens of errors, but all but one were caused by the above problem). It turns out that the subprocess module from the codegen module was causing the test runner to fail entirely. Our solution was to skip this test completely in Python 2.4 on Windows.

So now we had all tests passing on all platforms with all ground types, even if run twice from within the same session.

But it turned out there was still one more error lurking, found by Renato. A bunch of mpmath tests failed in Python 2.4 when gmpy was installed. I had never gotten gmpy to compile, so I have only had it installed on my machine for those Python executables installed by fink (2.5-2.7, 64-bit).

It turns out that mpmath uses gmpy if it is installed. There were a few places in the code where it was doing things like line 1947 of mpmath/libmp/, shown below:

return mpf_pos(small_factorial_cache[n-1], prec, rnd)

The problem was that n was an mpz, or gmpy integer. But Python 2.4 and earlier do not support non-int types as list indices. It wasn’t until Python 2.5’s __index__ method that non-int/long types could be used as indices.
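The __index__ protocol can be sketched with a stand-in for gmpy’s mpz (the class here is hypothetical, for illustration only):

```python
class Mpzish(object):
    """A stand-in for a third-party integer type such as gmpy's mpz."""
    def __init__(self, n):
        self.n = n

    def __index__(self):
        # new in Python 2.5: lets any type act as a sequence index
        return self.n

cache = ['0!', '1!', '2!', '3!']
assert cache[Mpzish(2)] == '2!'
```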

This idiom was being used in several places throughout the code, so rather than try to patch them all, we decided to just disable the gmpy backend in mpmath for Python 2.4. This issue had never come up before to my knowledge, so it seems that Renato was the first person to run the mpmath tests in Python 2.4 with gmpy installed.

So now tests should be passing everywhere in the 0.7.0 branch. Please continue to test it, though. If it will make it easier to test, I will create a 0.7.0.rc3 with all the latest fixes. Otherwise, barring any further major fixes, I will release 0.7.0 final in about a week.

And a big thanks to Renato Coutinho and Chris Smith for helping me debug and fix these bugs.

SymPy 0.7.0.rc1 is out!

June 13, 2011

My blog post is a little late this week because I wanted to be able to announce this. I have released the first release candidate for SymPy 0.7.0 (see this previous blog post for a little bit of what’s new since the last release). You can download source (Linux/Mac OS X version) here and the Windows installer here. Please download and test it.

Unfortunately, I discovered soon after releasing it that the code does not work in Python 2.4 or 2.5. I will soon release SymPy 0.7.0.rc2 to fix this. Please test it in Python 2.6 and 2.7 until then, or test the 0.7.0 branch of the official SymPy repository.

If everything goes smoothly, then the full release should be out in a week or so. Watch this blog or the official SymPy blog for the announcement.


June 5, 2011

So from Saturday to Wednesday of this week, I was on vacation to the Grand Canyon without my computer. Therefore, I did not do a whole lot with respect to SymPy this week. The vacation was very fun, though. My family and I hiked to the bottom of the Grand Canyon and stayed a day at the bottom in a lodge at Phantom Ranch, then hiked back up. I would highly recommend it to anyone who does not mind doing a little hiking.

Regarding what I did do, other than catching up on the email from when I was gone, I did some more work finishing patches for the release. We are now very close to having a release. All the remaining blocking issues either have patches that need to be reviewed, or decisions that need to be made.

I also did some work on the Risch Algorithm, though it wasn’t very much. One of my favorite ways to “do work” on the code is to stress test risch_integrate() and if I find a bug or find that it runs too slow, see what needs to be done to fix it. This week, I discovered that risch_integrate() has a bit of nondeterminism built into it. Actually, I already knew this, but I recently found an example that demonstrates it very nicely. The problem is that when it builds the extension to integrate the function, risch_integrate() uses .atoms() to get the parts of the expression (for example, expr.atoms(log) gets all the logarithms in expr). But .atoms() returns a set (I believe this is for performance reasons, though I’m not certain). So we get things like


In [1]: a = Add(*(log(x**i) for i in range(10)))

In [2]: a
            ⎛ 2⎞      ⎛ 3⎞      ⎛ 4⎞      ⎛ 5⎞      ⎛ 6⎞      ⎛ 7⎞      ⎛ 8⎞      ⎛ 9⎞
log(x) + log⎝x ⎠ + log⎝x ⎠ + log⎝x ⎠ + log⎝x ⎠ + log⎝x ⎠ + log⎝x ⎠ + log⎝x ⎠ + log⎝x ⎠

In [3]: b = risch_integrate(a, x)

In [4]: b
                ⎛ 4⎞
        45⋅x⋅log⎝x ⎠
-45⋅x + ────────────
             4

This is correct, since we have

In [5]: expand(b.diff(x) - a)
Out[5]: 0

(remember that \log{(x^n)}=n\log{(x)}). The integral can be expressed in terms of any of the logarithms in the expression. It happens to be expressed in terms of \log{(x^4)} because that happened to be the first one that came out of a.atoms(log) during iteration. This is problematic. First, it’s not exactly what is expected. The ideal solution would be if the answer was written in terms of \log{(x)}.

But it’s actually worse than that. Like I mentioned, this is nondeterministic. It depends on the order of iteration through a set, which is not guaranteed to be in any particular order. Indeed, if I run the following in 32-bit Python 2.7 and again in 64-bit Python 2.7, the output is exactly the same except for i = 64 to i = 77.

for i in range(100):
    print risch_integrate(Add(*(log(x**j) for j in range(i))), x)

The actual output seems to follow a pattern, though it’s hard to discern exactly what it is. The output for 32-bit is and the output for 64-bit is (sorry, I forgot to print i; just subtract 4 from the line number).

So this has gotten me thinking about how to reduce nondeterminism. Clearly, I need to sort the result of .atoms(), or else risch_integrate() might return a different (though equivalent) result on different platforms. Actually, I’ve seen list(set) return a different result in the same Python session. That means that you could potentially get something like risch_integrate(expr, x) == risch_integrate(expr, x) => False!
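The deterministic fix can be sketched with the canonical sort key that current SymPy exposes as default_sort_key (the name is assumed from the present API):

```python
from sympy import Symbol, default_sort_key, log

x = Symbol('x')
expr = sum(log(x**i) for i in range(1, 10))
atoms = expr.atoms(log)   # a set: iteration order is arbitrary

# sorting by a canonical, platform-independent key removes the nondeterminism
ordered = sorted(atoms, key=default_sort_key)
assert len(ordered) == 9
assert ordered == sorted(atoms, key=default_sort_key)
```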

The problem is how to sort the atoms. We recently added a sort_key() function that can be passed as a key to sorted(), which is completely deterministic and platform independent. That would solve the determinism problem, but actually, I think this requires more thought. The order in which the differential extension is built can affect not only the form of the resulting antiderivative (though it will always be equivalent, up to a constant), but also the speed with which it is computed. To take an example from issue 2010, the issue about risch_integrate() (you may also recognize this example if you are a regular reader of this blog), the handle_first keyword argument to risch_integrate() affects whether it builds the extension tower looking for logarithms first or exponentials first. Whichever comes last is what is integrated first (the tower is integrated from the top to the bottom). If the last extension was an exponential, then it uses the exponential algorithm. If it was a logarithm, then it uses the logarithm algorithm. These are completely different algorithms, and indeed the results can appear in different forms (and sometimes, one will raise NotImplementedError while the other will work, because I have implemented the exponential algorithm more completely than the logarithmic one). It also affects the speed because the integrand might be of a different “type” in the different extensions. In the example below, the answers are different because it tries to make the argument of the logarithmic part monic with respect to the exponential or the logarithm, respectively. Also notice the speed difference. This can be exacerbated further for integrands of different forms than this one.

In [1]: f = (x*(x + 1)*((x**2*exp(2*x**2) - log(x + 1)**2)**2 +
   ...: 2*x*exp(3*x**2)*(x - (2*x**3 + 2*x**2 + x + 1)*log(x + 1))))/((x +
   ...: 1)*log(x + 1)**2 - (x**3 + x**2)*exp(2*x**2))**2

In [2]: f
          ⎛                          2                                                   ⎞
          ⎜⎛                       2⎞                                                   2⎟
          ⎜⎜     2           2  2⋅x ⎟        ⎛    ⎛           2      3⎞           ⎞  3⋅x ⎟
x⋅(1 + x)⋅⎝⎝- log (1 + x) + x ⋅ℯ    ⎠  + 2⋅x⋅⎝x - ⎝1 + x + 2⋅x  + 2⋅x ⎠⋅log(1 + x)⎠⋅ℯ    ⎠
──────────────────────────────────────────────────────────────────────────────────────────
                                                                2
                         ⎛                                    2⎞
                         ⎜   2                  ⎛ 2    3⎞  2⋅x ⎟
                         ⎝log (1 + x)⋅(1 + x) - ⎝x  + x ⎠⋅ℯ    ⎠

In [3]: risch_integrate(f, x, handle_first='log')
       ⎛              ⎛ 2⎞⎞                   ⎛                ⎛ 2⎞⎞                             
       ⎜log(1 + x)    ⎝x ⎠⎟                   ⎜  log(1 + x)    ⎝x ⎠⎟          ⎛ 2⎞               
    log⎜────────── + ℯ    ⎟                log⎜- ────────── + ℯ    ⎟       2  ⎝x ⎠               
       ⎝    x             ⎠                   ⎝      x             ⎠      x ⋅ℯ    ⋅log(1 + x)    
x + ─────────────────────── - log(1 + x) - ───────────────────────── + ──────────────────────────
               2                                       2                                        2
                                                                              2           3  2⋅x 
                                                                       - x⋅log (1 + x) + x ⋅ℯ    

In [4]: risch_integrate(f, x, handle_first='exp')
       ⎛                ⎛ 2⎞⎞                   ⎛                ⎛ 2⎞⎞        ⎛ 2⎞             
       ⎜                ⎝x ⎠⎟                   ⎜                ⎝x ⎠⎟        ⎝x ⎠             
    log⎝log(1 + x) + x⋅ℯ    ⎠                log⎝log(1 + x) - x⋅ℯ    ⎠     x⋅ℯ    ⋅log(1 + x)  
x + ───────────────────────── - log(1 + x) - ───────────────────────── - ──────────────────────
                2                                        2                                    2
                                                                            2           2  2⋅x 
                                                                         log (1 + x) - x ⋅ℯ    

In [5]: %timeit risch_integrate(f, x, handle_first='log')
1 loops, best of 3: 1.49 s per loop

In [6]: %timeit risch_integrate(f, x, handle_first='exp')
1 loops, best of 3: 1.21 s per loop

In [7]: cancel(risch_integrate(f, x, handle_first='log').diff(x) - f)
Out[7]: 0

In [8]: cancel(risch_integrate(f, x, handle_first='exp').diff(x) - f)
Out[8]: 0

So what I think I really need to do is to do some research on what order of building the tower makes it the most efficient. Also, handle_first needs to be modified to be more dynamic than just looking at exponentials or logarithms first, but also considering which exponentials or logarithms to look at first, and the others might be rewritten in terms of those (this needed to be done anyway to make it work for three types of extensions: exponentials, logarithms, and tangents).

There can also be more heuristics for this. Currently, there are heuristics for exponentials to prefer rewriting e^{2x} as \left({e^{x}}\right)^2 instead of rewriting e^{x} as \sqrt{e^{2x}} (this is necessary not only for keeping things in terms of the nicer looking gcds but also because risch_integrate() doesn’t know how to handle algebraic extensions like square roots). I didn’t realize it at the time, but the corollary heuristic for logarithms should try to rewrite \log{(x^2)} in terms of \log{(x)} and not the other way around. We can use the exact same gcd algorithm (called integer_powers() in, and I now realize that it should actually be called integer_multiples()) as we do for the exponential, only use the powers of the arguments instead of coefficients. This might require some factorization to do completely correctly, so it certainly requires some thought.
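A toy sketch of that gcd grouping idea for logarithms (purely illustrative; not SymPy code):

```python
from functools import reduce
from math import gcd

# exponents appearing as log(x**k); group them by their gcd g,
# so each log(x**k) can be rewritten as (k // g) * log(x**g)
powers = [2, 4, 6]
g = reduce(gcd, powers)
assert g == 2
assert [k // g for k in powers] == [1, 2, 3]
```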

I discovered that there’s an easier way to show the nondeterminism of the above than running it on different architectures. You just have to change the variable of integration:

In [1]: a = Add(*(log(x**i) for i in range(10)))

In [2]: risch_integrate(a, x)
                ⎛ 4⎞
        45⋅x⋅log⎝x ⎠
-45⋅x + ────────────
             4

In [3]: b = Add(*(log(y**i) for i in range(10)))

In [4]: risch_integrate(b, y)
Out[4]: -45⋅y + 45⋅y⋅log(y)

In [5]: c = Add(*(log(z**i) for i in range(10)))

In [6]: risch_integrate(c, z)
                ⎛ 2⎞
        45⋅z⋅log⎝z ⎠
-45⋅z + ────────────
             2

Clearly the code for this needs to be doing some canonicalization.
