Integration of primitive functions

July 31, 2010

Integration of Primitive Functions

So this past week, I had another breakthrough in my project. The first breakthrough, as you may recall, was the completion of the integrate_hyperexponential() function, which allowed for integration in hyperexponential extensions, including proving the nonexistence of elementary integrals. Now I have worked my way up to this level on the other major half of the integration algorithm (actually, major third; more on that later): integration of primitive elements.

This time, I can refer you to my previous blog post for definitions. The chief thing here is that there is now a function in my integration3 branch called integrate_primitive(), and it is used primarily for integrating functions with logarithms.

So, how about some examples? The first one comes from Algorithms for Computer Algebra by Keith O. Geddes, Stephen R. Czapor, and George Labahn (Example 12.8). I like it because it contains both exponentials and logarithms, in a way that they do not depend on each other, so it can be integrated with either integrate_primitive() or integrate_hyperexponential(). In either case, the polynomial part is \frac{x}{x + 1}, so recursively calling the other function is not required. (For those of you who have been following my integration3 branch, you may notice that this is blatantly taken from the commit history.)


In [1]: from sympy.integrals.risch import integrate_primitive

In [2]: f = (x*(x + 1)*((x**2*exp(2*x**2) - log(x + 1)**2)**2 +
2*x*exp(3*x**2)*(x - (2*x**3 + 2*x**2 + x + 1)*log(x + 1))))/((x +
1)*log(x + 1)**2 - (x**3 + x**2)*exp(2*x**2))**2

In [3]: f
          ⎛                          2                                                   ⎞
          ⎜⎛                       2⎞                                                   2⎟
          ⎜⎜     2           2  2⋅x ⎟        ⎛    ⎛           2      3⎞           ⎞  3⋅x ⎟
x⋅(1 + x)⋅⎝⎝- log (1 + x) + x ⋅ℯ    ⎠  + 2⋅x⋅⎝x - ⎝1 + x + 2⋅x  + 2⋅x ⎠⋅log(1 + x)⎠⋅ℯ    ⎠
                         ⎛                                    2⎞
                         ⎜   2                  ⎛ 2    3⎞  2⋅x ⎟
                         ⎝log (1 + x)⋅(1 + x) - ⎝x  + x ⎠⋅ℯ    ⎠

In [4]: var('t0, t1')
Out[4]: (t₀, t₁)

In [5]: a, d = map(lambda i: Poly(i, t1), f.subs(exp(x**2),
t0).subs(log(x + 1), t1).as_numer_denom())

In [6]: a
Poly((x + x**2)*t1**4 + (-2*t0**2*x**3 - 2*t0**2*x**4)*t1**2 +
(-2*t0**3*x**2 - 4*t0**3*x**3 - 6*t0**3*x**4 - 8*t0**3*x**5 -
4*t0**3*x**6)*t1 + 2*t0**3*x**3 + 2*t0**3*x**4 + t0**4*x**5 +
t0**4*x**6, t1, domain='ZZ[x,t0]')

In [7]: d
Out[7]: Poly((1 + 2*x + x**2)*t1**4 + (-2*t0**2*x**2 - 4*t0**2*x**3 -
2*t0**2*x**4)*t1**2 + t0**4*x**4 + 2*t0**4*x**5 + t0**4*x**6, t1,
domain='ZZ[x,t0]')

In [8]: D = [Poly(1, x), Poly(2*x*t0, t0), Poly(1/(x + 1), t1)]

In [9]: r = integrate_primitive(a, d, D, [x, t0, t1], [lambda x: log(x +
1), lambda x: exp(x**2)])

In [10]: r
⎛   ⎛                ⎛ 2⎞⎞      ⎛                ⎛ 2⎞⎞        ⎛ 2⎞                                ⎞
⎜   ⎜                ⎝x ⎠⎟      ⎜                ⎝x ⎠⎟        ⎝x ⎠                ⌠               ⎟
⎜log⎝log(1 + x) + x⋅ℯ    ⎠   log⎝log(1 + x) - x⋅ℯ    ⎠     x⋅ℯ    ⋅log(1 + x)     ⎮   x           ⎟
⎜───────────────────────── - ───────────────────────── - ────────────────────── + ⎮ ───── dx, True⎟
⎜            2                           2                                    2   ⎮ 1 + x         ⎟
⎜                                                           2           2  2⋅x    ⌡               ⎟
⎝                                                        log (1 + x) - x ⋅ℯ                       ⎠

An explanation: f is the function we are integrating. Preparsing is not implemented yet, so we have to do it manually in [5]. D in [8] is the list of derivations of the monomials we are working with, [x, t0, t1], which represent x, e^{x^2}, and \log{(x + 1)}, respectively. Because the outermost monomial is a logarithm (primitive), we call integrate_primitive() on it. The last argument of the function is the back substitution list, in reverse order, because that is the order in which we have to back substitute. We can see that the result contains an unevaluated Integral. This is because the recursive calls to integrate over the smaller extensions have not yet been implemented. In the final version, integrate() will automatically call ratint() on it in this case to give the complete answer. The second argument of the result, True, indicates that the integral was elementary and that this is the complete integral.
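For illustration, the leftover piece really is just a rational function integral, so the existing rational function integrator can already finish it off. A quick check with the released ratint() (this is ordinary SymPy, not my branch):

```python
from sympy import symbols, log, simplify
from sympy.integrals.rationaltools import ratint

x = symbols('x')

# The unevaluated Integral in the result above lives entirely in Q(x),
# so the ordinary rational-function integrator evaluates it
result = ratint(x/(x + 1), x)
assert simplify(result - (x - log(x + 1))) == 0
```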

Because the extensions did not depend on each other, we could have also integrated in \mathbb{Q}(x, \log{(x + 1)}, e^{x^2}) instead of \mathbb{Q}(x, e^{x^2}, \log{(x + 1)}):

In [11]: a1, d1 = map(lambda i: Poly(i, t0), f.subs(exp(x**2), t0).subs(log(x + 1), t1).as_numer_denom())

In [12]: D1 = [Poly(1, x), Poly(1/(x + 1), t1), Poly(2*x*t0, t0)]

In [13]: r1 = integrate_hyperexponential(a1, d1, D1, [x, t1, t0], [lambda x: exp(x**2), lambda x: log(x + 1)])

In [14]: r1
⎛   ⎛              ⎛ 2⎞⎞      ⎛                ⎛ 2⎞⎞                                                ⎞
⎜   ⎜log(1 + x)    ⎝x ⎠⎟      ⎜  log(1 + x)    ⎝x ⎠⎟          ⎛ 2⎞                                  ⎟
⎜log⎜────────── + ℯ    ⎟   log⎜- ────────── + ℯ    ⎟       2  ⎝x ⎠                  ⌠               ⎟
⎜   ⎝    x             ⎠      ⎝      x             ⎠      x ⋅ℯ    ⋅log(1 + x)       ⎮   x           ⎟
⎜─────────────────────── - ───────────────────────── + ────────────────────────── + ⎮ ───── dx, True⎟
⎜           2                          2                                        2   ⎮ 1 + x         ⎟
⎜                                                             2           3  2⋅x    ⌡               ⎟
⎝                                                      - x⋅log (1 + x) + x ⋅ℯ                       ⎠

We can verify by taking the derivative that the results in each case are antiderivatives of the original function, f, even though they appear different.

In [15]: cancel(r[0].diff(x) - f)
Out[15]: 0

In [16]: cancel(r1[0].diff(x) - f)
Out[16]: 0

We can see that in each case, the remaining unevaluated Integral was in \mathbb{Q}(x) only, meaning that the recursive call to integrate_hyperexponential() or integrate_primitive(), respectively, would not have been necessary. Finally, we can see that choosing the correct extension to integrate over can make a difference, time-wise:

In [17]: %timeit integrate_primitive(a, d, D, [x, t0, t1], [lambda x: log(x + 1), lambda x: exp(x**2)])
1 loops, best of 3: 1.91 s per loop

In [18]: %timeit integrate_hyperexponential(a1, d1, D1, [x, t1, t0], [lambda x: exp(x**2), lambda x: log(x + 1)])
1 loops, best of 3: 2.63 s per loop

Just as with the exponential case, the function can prove the integrals are non-elementary. This is the so-called logarithmic integral:

In [19]: f1 = 1/log(x)

In [20]: a, d = map(lambda i: Poly(i, t1), f1.subs(log(x), t1).as_numer_denom())

In [21]: a
Out[21]: Poly(1, t1, domain='ZZ')

In [22]: d
Out[22]: Poly(t1, t1, domain='ZZ')

In [23]: integrate_primitive(a, d, [Poly(1, x), Poly(1/x, t1)], [x, t1], [log])
Out[23]: (0, False)

The second argument, False, indicates that the integral was non-elementary. Namely, the function has proven that the function f - D(0) = \frac{1}{\log{(x)}} does not have an elementary antiderivative over \mathbb{Q}(x, \log{(x)}) (see the previous post for more information).
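In fact, this antiderivative is exactly what the special function li denotes, and we can check its defining property with SymPy's li (this is an illustration with released SymPy, not code from my branch):

```python
from sympy import li, log, diff, symbols

x = symbols('x')

# D(li(x)) = 1/log(x), which lies in Q(x, log(x)); so while 1/log(x)
# has no elementary antiderivative, li(x) is primitive over that field
assert diff(li(x), x) == 1/log(x)
```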

Finally, be aware that, just as with integrate_hyperexponential(), many integrals will raise NotImplementedError, because the subroutines necessary to solve them have not yet been finished.

In [25]: f = log(log(x))**2

In [26]: f.diff(x)

In [27]: a, d = map(lambda i: Poly(i, t1),
cancel(f.diff(x)).subs(log(x), t0).subs(log(t0), t1).as_numer_denom())

In [28]: a
Out[28]: Poly(2*t1, t1, domain='ZZ')

In [29]: d
Out[29]: Poly(t0*x, t1, domain='ZZ[x,t0]')

In [30]: D = [Poly(1, x), Poly(1/x, t0), Poly(1/(x*t0), t1)]

In [31]: integrate_primitive(a, d, D, [x, t0, t1], [lambda x: log(log(x)), log])
NotImplementedError: Remaining cases for Poly RDE not yet implemented.

Now one thing that I want to add to the above examples taken from the commit message is that logarithms are not the only functions that are primitive. The Li function (the logarithmic integral, as above), considered as an elementary extension of \mathbb{Q}(x, \log{(x)}), is also primitive. But even among the commonly defined elementary functions, there is one other: arctangents.

In [32]: diff(atan(x)**2, x)
2⋅atan(x)
─────────
       2 
  1 + x  

In [33]: integrate_primitive(Poly(2*t, t), Poly(1 + x**2, t), [Poly(1, x), Poly(1/(1 + x**2), t)], [x, t], [atan])

⎛    2         ⎞
⎝atan (x), True⎠

In [34]: integrate_primitive(Poly(t, t), Poly(x, t), [Poly(1, x), Poly(1/(1 + x**2), t)], [x, t], [atan])

⎛⌠                  ⎞
⎜⎮ atan(x)          ⎟
⎜⎮ ─────── dx, False⎟
⎜⎮    x             ⎟
⎝⌡                  ⎠

Due to a bug in the code right now, this isn't quite the form the final version will take; the final version will return the non-elementary integral in the final result. Suffice it to say that it has proven that \int {\frac{\arctan{(x)}}{x} dx} is non-elementary. As far as I know, this isn’t any special function. Actually, it’s just a random function containing arctan that looked non-elementary to me; I plugged it in and found out that I was correct. It’s very similar in form to the exponential integral (Ei) or the Sine/Cosine Integral (Si/Ci), which is how I guessed that it would be non-elementary. Maybe it should be called ATi().

Status Update

So it has come to my attention that the suggested “pencils down” date is one week from Monday, and the hard “pencils down” date is two weeks from Monday (see the Google Summer of Code Timeline). Now, no matter how fast I work, my work cannot be pushed in until Mateusz’s latest polys branch gets pushed in, because my work is based on top of it. I plan on continuing work on the integration algorithm beyond the summer until I finish the transcendental part of the algorithm, and even after that, I want to look into implementing other integration related things, like definite integration using Meijer G-functions, and the algebraic part of the algorithm. But for now, these are the things that I need to do for the transcendental part, which is this summer’s work:

1. Implement the preparsing algorithms. This part is two-fold. First, I need to implement algorithms based on the Risch Structure Theorems, which allow me to determine if an extension is algebraic or not (if it is algebraic, we cannot integrate it, because only the transcendental part is implemented). The other part will be the function that actually goes through an expression and tries to build up a differential extension from it so it can be integrated. This can be a tricky part. For example, if we want to integrate f = e^x + e^{\frac{x}{2}}, we want to first choose t_1=e^{\frac{x}{2}} so that f = t_1^2 + t_1, because if we choose t_1=e^x, then t_2=e^{\frac{x}{2}}=\sqrt{t_1} will be algebraic over \mathbb{Q}(x, t_1). This is one case where we might try adding an algebraic extension but where it can be avoided. The solution will have to be to go through and find the common denominators of the exponents of the exponentials. I’m also considering that this might happen in more advanced ways, so it could be necessary for the function to backtrack in the extension tree to see if it can do it in an entirely transcendental way. Fortunately, the Risch Structure Theorems give us a decision procedure for determining if an extension can be written in terms of the previous extensions (is algebraic over them), but this will still be a very hard function to get right.

2. Finish the remaining cases for integrate_hyperexponential() and integrate_primitive(). As you could see in this post, as well as in the previous one, there are many integrals that cannot yet be integrated because the special cases for them have not been implemented yet. Most of these actually rely on implementing the structure theorem algorithms from 1, and implementing them once that is finished will not take long, because they will just be straightforward copying of the pseudocode from Bronstein’s book. But some of them, particularly ones from the primitive case, are not spelled out as well in Bronstein’s book, and will require more thinking (and thus time) on my part. I should note that the Structure Theorem algorithms are also this way.

3. Implement the hypertangent case. The ability to integrate in tangent extensions is the other third I mentioned above. Since tangents require more special casing, I plan on doing this only after I have finished 1 and 2. This is actually not much work, because most of the algorithms for solving the particular subproblem for tangents (called the Coupled Risch Differential Equation) are exactly the same as those for solving the subproblem for hyperexponentials (the Risch Differential Equation), which are already (mostly) implemented in the hyperexponential part. There are only a few extra functions that need to be written for it. Also, you will still be able to integrate functions that contain tangents, such as e^{\tan{(x)}} (recall last time that we showed that integrate_hyperexponential() can prove that this does not have an elementary integral). It just won’t be able to integrate when the top-most extension is a tangent.
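The exponent common-denominator idea from point 1 can be sketched in a few lines. This is just a toy illustration of the idea, under the assumption that every exponential has an exponent that is a rational multiple of x; rewrite_exponentials is a hypothetical helper of my own, not the planned implementation:

```python
from functools import reduce

from sympy import Rational, exp, gcd, symbols

x, t1 = symbols('x t1')

def rewrite_exponentials(expr, x, t1):
    # Toy sketch (assumes every exponential is exp(c*x) with rational c):
    # take the gcd g of the coefficients c, then rewrite each exponential
    # as a power of t1 = exp(g*x), so no algebraic extension is needed
    coeffs = [e.args[0].coeff(x) for e in expr.atoms(exp)]
    g = reduce(gcd, coeffs)
    return expr.replace(exp, lambda arg: t1**(arg.coeff(x)/g)), g

f = exp(x) + exp(x/2)
poly_form, g = rewrite_exponentials(f, x, t1)
assert poly_form == t1**2 + t1  # here t1 stands for exp(x/2)
assert g == Rational(1, 2)
```

With t_1 = e^{x/2}, the integrand becomes the polynomial t_1^2 + t_1, exactly as described above.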

So here is what I plan on doing. Right now, I am going to focus my work on 1, since most of 2 can’t be done until it is anyway. But more importantly, I want to have a prototype user-level function for the Risch Algorithm. The reason I want this is so that people can try it out, without having to do the preparsing like I did above, but rather they can just call risch_integrate(f, x), and it will return the integral of f, prove that it is non-elementary and reduce it into the elementary and non-elementary parts, or explain why it cannot do it (either because the function is not transcendental or because something is not implemented yet). My chief desire for doing this is so that people can try out my code and find the bugs in it for me. I have already found many critical errors in the code (returns a wrong result), and I want to iron these out before anything goes in. The best way to do this will be to release a working user-level function and hope that people try it out for me.

Also, even if 2 and 3 are not finished, if I have 1, I can integrate it with integrate() (no pun intended) and just have it bail if it raises NotImplementedError. I will need to come up with a way to differentiate between this case and the case where it returns an unevaluated Integral because it has proven that an elementary antiderivative does not exist. Any suggestions?

I plan on continuing work after the summer until I finish 1 through 3, though I won’t pretend that my work won’t slow down considerably when I start classes in August. I also plan to finish the Risch Algorithm posts that I promised.

And for what it’s worth, I plan on working my ass off this next two weeks.

The Risch Algorithm: Part 2, Elementary Functions

July 24, 2010

In Part 1 of this series of blog posts, I gave what I believed to be the prerequisites to understanding the mathematics behind the Risch Algorithm (aside from a basic understanding of derivatives and integrals from calculus). In this post, I will elaborate on what is meant by “elementary function,” a term that is thrown around a lot when talking about Risch integration.

The usual definition of elementary function given in calculus is any function that is a constant, a polynomial, an exponential (e^x, 2^x), a logarithm (\ln({x}), \log_{10}({x})), one of the standard trig functions or their inverses (sin, cos, tan, arcsin, arccos, arctan, etc.), and any combination of these functions via addition, subtraction, multiplication, division, taking powers, and composition. Thus, even very crazy-looking nested combinations of these functions are elementary, by this definition.

But for the rigorous definition of an elementary function, we must take into consideration what field we are working over. Before I get into that, I need some definitions. Suppose that k is the field we are working over. You can imagine that k=\mathbb{Q}(x), the field of rational functions in x with rational number coefficients. As with the previous post, imagine t as a function, for example, t = f(x). Let K be a differential extension of k. We have not defined this, but it basically means that our derivation D works the same in K as it does in k. You can imagine here that K=k[t].

We say that t \in K is a primitive over k if Dt \in k. In other words, the derivative of t does not contain t, only elements of k. Obviously, by the definition of a derivation (see the last post in the series), any element of k is a primitive over k, because the derivative of any element of a field is again an element of that field. But also, if t=\log(a) for some a \in k, then t is a primitive over k, because Dt=\frac{Da}{a}\in k.

We say that t \in K^* is a hyperexponential over k if \frac{Dt}{t}\in k. Written another way, Dt=at for some a\in k. We know from calculus that the functions that satisfy differential equations of the type \frac{dy}{dx}=ay are exactly the exponential functions, i.e., y=e^{\int{a\ dx}}.
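These two definitions are easy to check concretely with SymPy, taking D to be d/dx (an illustration only):

```python
from sympy import symbols, exp, log, diff, cancel

x = symbols('x')

# t = exp(x**2) is hyperexponential over Q(x): Dt/t = 2*x lies in Q(x)
t = exp(x**2)
assert cancel(diff(t, x)/t) == 2*x

# t = log(x + 1) is primitive over Q(x): Dt = 1/(x + 1) lies in Q(x)
t = log(x + 1)
assert diff(t, x) == 1/(x + 1)
```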

The last class of functions that needs to be considered is algebraic functions. I will not go into depth on algebraic functions, because my work this summer is only on integrating purely transcendental functions. Therefore, the only concern we shall have with algebraic functions in relation to the integration algorithm is to make sure that whatever function we are integrating is not algebraic, because the transcendental algorithms will not be valid if they are. Hopefully in a future post I will be able to discuss the Risch Structure Theorems, which give necessary and sufficient conditions for determining if a Liouvillian function (see next paragraph) is algebraic.

Now, we say that a function t \in K is Liouvillian over k if t is algebraic, a primitive, or a hyperexponential over k. For t\in K to be a Liouvillian monomial over k, we have the additional condition that \mathrm{Const}(k) = \mathrm{Const}(k(t)). This just means that we cannot consider something like \log({2}) over \mathbb{Q} as a Liouvillian monomial. Otherwise (I believe) we could run into undecidability problems.

We call t \in K a logarithm over k if Dt=\frac{Db}{b} for some b \in k^*, i.e., t=\log({b}). We call t \in K^* an exponential over k if \frac{Dt}{t}=Db (or Dt=tDb) for some b \in k, i.e., t=e^b. Note the difference between an exponential monomial and a hyperexponential monomial.

We can finally give the rigorous definition of an elementary extension. K is an elementary extension of k if there are t_1, \dots, t_n \in K such that K=k(t_1,\dots,t_n) and t_i is elementary over k(t_1, \dots, t_{i-1}) for all i \in \{1,\dots,n\}. An elementary function is any element of an elementary extension of \mathbb{C}(x) with the derivation D=\frac{d}{dx}. A function f\in k has an elementary integral over k if there exists an elementary extension K of k and g\in K such that Dg=f, i.e., f=\int{g}.

Usually, we start with \mathbb{Q}(x), the field of rational functions in x with rational number coefficients. We then build up an elementary extension one function at a time, with each function either being a logarithm or exponential of what we have already built up, or algebraic over it. As I noted above, we will ignore algebraic functions here. We generally start with \mathbb{Q} because it is computable (important problems such as the zero equivalence problem or the problem of determining certain field isomorphisms are decidable), but the above definition lets us start with any subfield of \mathbb{C}.

Now you may be wondering: we’ve covered algebraic functions, exponentials and logarithms, and obviously rational functions are elements of \mathbb{Q}(x), but what about trigonometric functions? Well, from a theoretical standpoint, we can make our lives easier by noticing that all the common trigonometric functions can be represented as exponentials and logarithms over \mathbb{Q}(i). For example, \cos{x} = \frac{e^{ix} + e^{-ix}}{2}. You can see here that all the common trig functions can be represented as complex exponentials or logarithms like this. However, from an algorithmic standpoint, we don’t want to convert all trig expressions into complex exponentials and logarithms in order to integrate them. For one thing, our final result will be in terms of complex exponentials and logarithms, not the original functions we started with, and converting them back may or may not be an easy thing to do. Also, aside from the fact that we have different functions than we were expecting, we will also end up with an answer containing \sqrt{-1}, even if our original integrand did not.
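The cosine identity is easy to confirm; SymPy's rewrite(exp) produces exactly this form (an illustration only):

```python
from sympy import symbols, cos, exp, I

x = symbols('x')

# cos(x) in terms of complex exponentials, as in the text
assert (cos(x).rewrite(exp) - (exp(I*x) + exp(-I*x))/2).expand() == 0
```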

Fortunately, integrating tangents directly is a solved problem, just like integrating algebraic, exponential, or logarithmic functions is solved. We can’t integrate functions like \sin{x} or \cos{x} directly as monomials like we can with \tan{x} or e^x, because the derivatives of sin and cos are not polynomials in their respective selves with coefficients in \mathbb{C}(x). However, we can use a trick or two to integrate them. One way is to rewrite \cos{x}=\frac{1 - \tan^2{\frac{x}{2}}}{1 + \tan^2{\frac{x}{2}}} and proceed to integrate it as a tangent. Another alternative is to write \cos{x}=\frac{1}{\sec{x}}=\sqrt{\frac{1}{\sec^2{x}}}=\sqrt{\frac{1}{\tan^2{x} + 1}}. This function is algebraic over \mathbb{Q}(x, \tan{(x)}), but if we do not already have \tan{x} in our differential extension, it is transcendental, and we can rewrite it as e^{-\frac{\log{(1 + \tan^2{x})}}{2}} (this is used in Bronstein’s text, so I believe what I just said is correct, though I haven’t verified it with the structure theorems just yet). Both of these approaches work for sin too, using the relevant identities. Of course, there is still the problem of rewriting the final result back in terms of sin or cos. Otherwise, you will get something like \frac{2e^x\tan({\frac{x}{2}}) - \tan^2({\frac{x}{2}})e^x + e^x}{2 + 2\tan^2({\frac{x}{2}})} instead of \frac{e^x(\sin{(x)} + \cos{(x)})}{2} for \int{\cos{(x)}e^xdx}. Bronstein doesn’t elaborate on this too much in his book, so it is something that I will have to figure out on my own.
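The half-angle rewrite above is what SymPy's own rewrite(tan) produces, so the first trick can be checked directly (an illustration only):

```python
from sympy import symbols, cos, tan, cancel

x = symbols('x')

# cos(x) as a rational function of tan(x/2), so it can be handled
# inside a tangent extension
half_angle = (1 - tan(x/2)**2)/(1 + tan(x/2)**2)
assert cancel(cos(x).rewrite(tan) - half_angle) == 0
```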

The second option I gave above leads nicely into the main point I wanted to make here about elementary functions. Notice that everywhere in the definitions above, things depend on the field we are working in. Therefore, e^{\tan{x}} cannot be an elementary extension over \mathbb{Q}(x), but it can be over \mathbb{Q}(x, \tan{x}). Also, the error function, defined as \mathrm{erf}{(x)} = \frac{2}{\sqrt{\pi}}\int{e^{-x^2}dx} cannot be an elementary extension over \mathbb{Q}(x), but it can over \mathbb{Q}(x, e^{-x^2}). In fact, this is how we can integrate in terms of some special functions, including the error function: by manually adding e^{-x^2} (or whatever) to our differential extension. Therefore, the usual definition of an elementary antiderivative and the above Risch Algorithm definition of an elementary integral coincide only when the extension consists only of elementary functions of the form of the usual definition (note that above, our final fields are \mathbb{Q}(x, \tan{x}, e^{\tan{x}}) and \mathbb{Q}(x, e^{-x^2}, \mathrm{erf}{(x)}), respectively).
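As an aside, this is exactly how released SymPy handles the error function case today: the antiderivative of e^{-x^2} is not elementary, but it is expressible in terms of erf (current integrate(), not my branch):

```python
from sympy import symbols, exp, integrate, sqrt, pi, erf

x = symbols('x')

# Non-elementary over Q(x), but expressible with the special function
# erf; the sqrt(pi)/2 factor comes from erf's 2/sqrt(pi) normalization
assert integrate(exp(-x**2), x) == sqrt(pi)*erf(x)/2
```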

Originally, I was also going to talk about Liouville’s Theorem in this blog post, but I think it has already gotten long enough (read “I’m getting tired”), so I’ll put that off until next time.

A hard week

July 17, 2010

After last week’s breakthrough, work this week has been very slow. I started working on the Parametric Risch Differential Equation Problem, which is almost identical to the Risch Differential Equation Problem in how it is solved, except there are a few extra steps. Unfortunately, because it is so similar, Bronstein breezes through the description. This is fine for the parts that are the same, but he is a little unclear on how some of the new parts fit in. Also, his pseudocode has a line more or less saying

if r1 = … = rn = 0 then
    N = -1
else
    N = max(deg(r1), …, deg(rn))

for i from 0 to N
    for j from 1 to m
        Mij = coefficient(rj, t^i)

where M is a matrix. It is not very clear what this is supposed to mean in the case where N = -1. Obviously, you can’t have a matrix with negative dimensions. Clearly, this means that this particular function doesn’t apply somehow in this case, but I am not really even sure where it fits into the whole algorithm at this point in my reading. Reading a few more pages in gives a few hints here and there on how it is to be used, but never is it explicitly shown, in pseudocode or otherwise. So for now, I think my best bet is to read ahead and get a fuller understanding of the complete function before I try implementing anything (this is what I had been doing before, but I caught up to myself).
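Here is how I currently read that pseudocode, as a hedged sketch: when every r_i is zero, N = -1 and the matrix is simply empty. build_M is my own hypothetical helper, not Bronstein's code:

```python
from sympy import Matrix, Poly, symbols, zeros

t = symbols('t')

def build_M(rs):
    # My reading of the pseudocode: if all ri are zero, N = -1 and the
    # matrix M is empty (0 by 0); otherwise M[i, j] is the coefficient
    # of t**i in rj, for i = 0..N
    if all(r.is_zero for r in rs):
        return zeros(0, 0)
    N = max(r.degree() for r in rs)
    M = zeros(N + 1, len(rs))
    for i in range(N + 1):
        for j, r in enumerate(rs):
            M[i, j] = r.nth(i)
    return M

rs = [Poly(t**2 + 1, t), Poly(2*t, t)]
assert build_M(rs) == Matrix([[1, 0], [0, 2], [1, 0]])
assert build_M([Poly(0, t), Poly(0, t)]).shape == (0, 0)
```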

Also, on an unrelated note, I just found out today that I passed my Google Summer of Code midterm evaluation. This means that I will receive half of my stipend for the program (the other half comes after passing the final evaluation at the end of the summer), and that I can continue working on my project in the program.


Later in the text, it runs through an example and says “… dc = -1, hence M and A are 0 by 0 matrices.” So obviously, that is what was meant.

Integration of exponential functions

July 12, 2010

So for the first time this summer, I missed my blogging deadline. I have been on vacation for the past few weeks, and have spent a good bit of the last week in the car, driving home. But that’s not my excuse. I was on vacation the week before, when I wrote up my lengthy blog post on the Risch Algorithm. My excuse is that I wanted to finish up my integrate_hyperexponential() function before I posted, so I could write about it. Well, I finished it on Thursday (today is Sunday, the post was due Friday), but I ran into unexpected bugs (imagine that) that postponed it actually working until now. I also ended up doing API changes 3 different times (they are basically all increments of one change, from supporting only one extension to properly supporting multiple extensions; look for long commits in my recent commit history in my branch if you are interested).

So here is the function. It integrates exponential functions. You still have to manually create the differential extension, as before. Here are some examples. You can try them in my integration2 branch (I have rebased over Mateusz’s latest polys9 update. The latest branch is always integrationn, where n is the largest integer available).


In [1]: from sympy.integrals.risch import *

In [2]: var('t1, t')
Out[2]: (t₁, t)

In [3]: r = exp(2*tan(x))*tan(x) + tan(x) + exp(tan(x))

In [4]: r
 2⋅tan(x)                    tan(x)
ℯ        ⋅tan(x) + tan(x) + ℯ      

In [5]: rd = r.diff(x)

In [6]: rd
    ⎛         2   ⎞  2⋅tan(x)             2      ⎛       2   ⎞  2⋅tan(x)   ⎛       2   ⎞  tan(x)
1 + ⎝2 + 2⋅tan (x)⎠⋅ℯ        ⋅tan(x) + tan (x) + ⎝1 + tan (x)⎠⋅ℯ         + ⎝1 + tan (x)⎠⋅ℯ      

In [7]: a, d = map(lambda i: Poly(i, t), rd.subs(tan(x), t1).subs(exp(t1), t).as_numer_denom()) # Manually create the extension

In [8]: a
Out[8]: Poly((1 + 2*t1 + t1**2 + 2*t1**3)*t**2 + (1 + t1**2)*t + 1 + t1**2, t, domain='ZZ[t1]')

In [9]: d
Out[9]: Poly(1, t, domain='ZZ')

In [10]: integrate_hyperexponential(a, d, [Poly(1, x), Poly(1 + t1**2, t1), Poly((1 + t1**2)*t, t)], [x, t1, t], [lambda x: exp(tan(x)), tan])

⎛                   ⌠                                 ⎞
⎜ 2⋅tan(x)          ⎮ ⎛       2   ⎞       tan(x)      ⎟
⎜ℯ        ⋅tan(x) + ⎮ ⎝1 + tan (x)⎠ dx + ℯ      , True⎟
⎝                   ⌡                                 ⎠

We have to manually build up the differential extension ([7]). The first element is x, which is already there. Next, we add t_1 = \tan{x}, and finally t = e^{\tan{x}} = e^{t_1}. The third argument of integrate_hyperexponential() is what gives these variables their identities: their derivatives. The fourth argument is the list of the extension symbols, and the last argument is a list of the functions that the symbols stand for, in reverse order (because we have to back substitute in the solution in reverse order).

The unevaluated Integral in the solution is due to the recursive nature of the Risch algorithm. Eventually, an outer function in the algorithm will recursively integrate until it reaches the ground field, \mathbb{Q}. It will also do the proper preparsing automatically as well. The second element of the solution, True, indicates that the integral is elementary, and thus the given solution is the complete integral of the original integrand, which we can see (\int (1 + \tan^2{x})dx=\tan{x}).
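The role of the derivation list can be sketched as a chain rule over the generators of the extension. This is my own illustration of the idea, not code from the branch:

```python
from sympy import symbols, exp, tan, diff

x, t1, t = symbols('x t1 t')

# Derivatives of the generators: Dx = 1, Dt1 = 1 + t1**2 (t1 = tan(x)),
# Dt = (1 + t1**2)*t (t = exp(tan(x)))
gens = [x, t1, t]
D = [1, 1 + t1**2, (1 + t1**2)*t]

def derivation(p):
    # Chain rule in the extension tower: Dp = sum of dp/dgi * Dgi
    return sum(p.diff(g)*Dg for g, Dg in zip(gens, D))

# Compare against the honest derivative after back substitution,
# using the integrand r from the first example above
p = t**2*t1 + t1 + t
back = {t1: tan(x), t: exp(tan(x))}
assert (derivation(p).subs(back) - diff(p.subs(back), x)).expand() == 0
```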

Another example:

In [1]: from sympy.integrals.risch import *

In [2]: var('t')
Out[2]: (t,)

In [3]: rd = exp(-x**2)

In [4]: rd

In [5]: a, d = map(lambda i: Poly(i, t), rd.subs(exp(x**2), t).as_numer_denom())

In [6]: a
Out[6]: Poly(1, t, domain='ZZ')

In [7]: d
Out[7]: Poly(t, t, domain='ZZ')

In [8]: integrate_hyperexponential(a, d, [Poly(1, x), Poly(2*x*t, t)], [x, t], [lambda x: exp(x**2)])

Out[8]: (0, False)

Here the second argument of the solution is False, which indicates that the algorithm has proven that the integral of e^{-x^2} is not elementary! The first argument, 0, indicates that it is actually the integral of e^{-x^2} - \frac{d}{dx}(0) that is not elementary, i.e., the Risch algorithm will reduce an integrand into an integrated elementary part and a non-elementary part. For example:

In [1]: from sympy.integrals.risch import *

In [2]: var('t1, t')
Out[2]: (t₁, t)

In [3]: rd = exp(x)/tan(x) + exp(x)/(1 + exp(x))

In [4]: rd
   x        x  
  ℯ        ℯ   
────── + ──────
     x   tan(x)
1 + ℯ          

In [5]: a, d = map(lambda i: Poly(i, t), rd.subs(exp(x), t).subs(tan(x), t1).as_numer_denom())

In [6]: a
Out[6]: Poly(t**2 + (1 + t1)*t, t, domain='ZZ[t1]')

In [7]: d
Out[7]: Poly(t1*t + t1, t, domain='ZZ[t1]')

In [8]: integrate_hyperexponential(a, d, [Poly(1, x), Poly(1 + t1**2, t1), Poly(t, t)], [x, t1, t], [exp, tan])
⎛   ⎛     x⎞       ⎞
⎝log⎝1 + ℯ ⎠, False⎠

This indicates that the integral of (\frac{e^x}{\tan{x}} + \frac{e^x}{1 + e^x}) - \frac{d}{dx}(\log{(1 + e^x)}) = \frac{e^x}{\tan{x}} is not elementary. That is one advantage that the new algorithm will have over the present one. Currently, the present algorithm just returns an unevaluated Integral for the above rd, but the new one will be able to return \log{(1 + e^x)} + \int{\frac{e^x}{\tan{x}}dx}. It will be able to do this even if rd were rewritten as \frac{e^x \tan{x} + e^x + e^{2x}}{e^x \tan{x} + \tan{x}} (notice that this is exactly what .as_numer_denom() is doing anyway in [5], as you can see in [6] and [7]). Furthermore, it will have actually proven that the remaining \int{\frac{e^x}{\tan{x}}dx} is non-elementary. I plan on having some kind of marker in the pretty printed unevaluated Integral to indicate this. Suggestions on what this should be are welcome.
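The claimed split can be checked by hand with released SymPy (an illustration only):

```python
from sympy import symbols, exp, tan, log, diff, simplify

x = symbols('x')

rd = exp(x)/tan(x) + exp(x)/(1 + exp(x))
elementary_part = log(1 + exp(x))

# Subtracting the derivative of the elementary part leaves exactly the
# provably non-elementary remainder exp(x)/tan(x)
assert simplify(rd - diff(elementary_part, x) - exp(x)/tan(x)) == 0
```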

Finally, the full algorithm appears to be faster (probably asymptotically faster) than the current implementation:

In [1]: from sympy.integrals.risch import *

In [2]: var('t1, t')
Out[2]: (t₁, t)

In [3]: rd = exp(x)*x**4

In [4]: a, d = map(lambda i: Poly(i, t), rd.subs(exp(x), t).as_numer_denom())

In [5]: integrate_hyperexponential(a, d, [Poly(1, x), Poly(t, t)], [x, t], [lambda x: exp(x)])
⎛    x    4  x         x      3  x       2  x      ⎞
⎝24⋅ℯ  + x ⋅ℯ  - 24⋅x⋅ℯ  - 4⋅x ⋅ℯ  + 12⋅x ⋅ℯ , True⎠

In [6]: %timeit integrate_hyperexponential(a, d, [Poly(1, x), Poly(t, t)], [x, t], [exp])
10 loops, best of 3: 28 ms per loop

In [7]: integrate(rd, x)
    x    4  x         x      3  x       2  x
24⋅ℯ  + x ⋅ℯ  - 24⋅x⋅ℯ  - 4⋅x ⋅ℯ  + 12⋅x ⋅ℯ 

In [8]: %timeit integrate(rd, x)
1 loops, best of 3: 218 ms per loop

Of course, keep in mind that I am timing what will be an internal function against a full public function. But if you increase the exponent on x, you will find that there is no way the added preparsing time (which shouldn’t be affected by such a change) could make the new algorithm as slow as the current integrate(). As I said, I am pretty sure the speedup is asymptotic. For example:

In [1]: from sympy.integrals.risch import *

In [2]: var('t1, t')
Out[2]: (t₁, t)

In [3]: rd = exp(x)*x**10

In [4]: a, d = map(lambda i: Poly(i, t), rd.subs(exp(x), t).as_numer_denom())

In [5]: integrate_hyperexponential(a, d, [Poly(1, x), Poly(t, t)], [x, t], [lambda x: exp(x)])
⎛         x    10  x              x           3  x          5  x        7  x       9  x       8  x         6  x           4  x            2  x      ⎞
⎝3628800⋅ℯ  + x  ⋅ℯ  - 3628800⋅x⋅ℯ  - 604800⋅x ⋅ℯ  - 30240⋅x ⋅ℯ  - 720⋅x ⋅ℯ  - 10⋅x ⋅ℯ  + 90⋅x ⋅ℯ  + 5040⋅x ⋅ℯ  + 151200⋅x ⋅ℯ  + 1814400⋅x ⋅ℯ , True⎠

In [6]: %timeit integrate_hyperexponential(a, d, [Poly(1, x), Poly(t, t)], [x, t], [exp])
10 loops, best of 3: 42 ms per loop

In [7]: integrate(rd, x)
         x    10  x              x           3  x          5  x        7  x       9  x       8  x         6  x           4  x            2  x
3628800⋅ℯ  + x  ⋅ℯ  - 3628800⋅x⋅ℯ  - 604800⋅x ⋅ℯ  - 30240⋅x ⋅ℯ  - 720⋅x ⋅ℯ  - 10⋅x ⋅ℯ  + 90⋅x ⋅ℯ  + 5040⋅x ⋅ℯ  + 151200⋅x ⋅ℯ  + 1814400⋅x ⋅ℯ 

In [8]: %timeit integrate(rd, x)
1 loops, best of 3: 2.78 s per loop
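Incidentally, both of the antiderivatives above follow a simple pattern that you can derive by repeated integration by parts: \int{x^n e^x dx} = p_n(x)e^x, where p_n(x) = x^n - n p_{n-1}(x) and p_0 = 1. A minimal pure-Python sketch (not part of SymPy or my branch) that reproduces the coefficients in the outputs above:

```python
def exp_poly_coeffs(n):
    # Coefficients {degree: coefficient} of p_n, where
    # the antiderivative of x**n * exp(x) is p_n(x) * exp(x).
    # Integration by parts gives p_m = x**m - m * p_{m-1}, p_0 = 1.
    p = {0: 1}
    for m in range(1, n + 1):
        q = {m: 1}
        for k, c in p.items():
            q[k] = q.get(k, 0) - m * c
        p = q
    return p

# Matches Out[5] above: x**4 - 4*x**3 + 12*x**2 - 24*x + 24 (times exp(x))
assert exp_poly_coeffs(4) == {4: 1, 3: -4, 2: 12, 1: -24, 0: 24}
```

For n = 10, the constant term is 10! = 3628800, just as in the second session.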

There is one thing I should mention. I haven’t implemented all the cases in rischDE(), which is the subproblem for exponential functions (more on this in a future “The Risch Algorithm” post). So some integrals will fail with a NotImplementedError, indicating that there is a function that I still need to implement to solve the integral:

In [1]: from sympy.integrals.risch import *

In [2]: var('t1, t')
Out[2]: (t₁, t)

In [3]: rd = (exp(x) - x*exp(2*x)*tan(x))/tan(x)

In [4]: a, d = map(lambda i: Poly(i, t), rd.subs(exp(x), t).subs(tan(x), t1).as_numer_denom())

In [5]: a
Out[5]: Poly(-t1*x*t**2 + t, t, domain='ZZ[x,t1]')

In [6]: d
Out[6]: Poly(t1, t, domain='ZZ[t1]')

In [7]: integrate_hyperexponential(a, d, [Poly(1, x), Poly(1 + t1**2, t1), Poly(t, t)], [x, t1, t], [exp, tan])
NotImplementedError: The ability to solve the parametric logarithmic derivative problem is required to solve this RDE

So feel free to give this a try and let me know what you think. You will have to do the preparsing as I have done above, which means that you also have to be careful that any extension you make is not the derivative or logarithmic derivative of an element of the field you have already built up. You also cannot use algebraic functions, as I mentioned before, including things like e^\frac{\log{x}}{2} (functions like these are called logarithmic derivatives of k(t)-radicals, which I will also discuss in a future “The Risch Algorithm” post). If you just use simple extensions like t1 = tan(x); t = exp(x) as I have above, you won’t need to worry about this. Each derivative Poly should be in the variable that it is the derivative of (e.g., start with Poly(1, x), then add Poly(1 + t1**2, t1), Poly(t2*(1 + t1**2), t2), etc.). Everything else should be a Poly in t, the last element of the extension. And in case you didn’t catch it, the last extension must be an exponential function.
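If you want a quick sanity check that a derivative Poly matches its extension, you can compare it numerically against a finite difference. For the tower above, [Poly(1, x), Poly(1 + t1**2, t1), Poly(t, t)] encodes Dx = 1, D\tan{x} = 1 + \tan^2{x}, and De^x = e^x (again, plain Python for illustration, not part of the branch):

```python
import math

# Plain-Python check of the derivative rules that the derivative Polys
# encode for the tower t1 = tan(x), t = exp(x):
#   D(tan(x)) = 1 + tan(x)**2  <->  Poly(1 + t1**2, t1)
#   D(exp(x)) = exp(x)         <->  Poly(t, t)
def deriv(f, x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.4
assert abs(deriv(math.tan, x0) - (1 + math.tan(x0)**2)) < 1e-4
assert abs(deriv(math.exp, x0) - math.exp(x0)) < 1e-4
```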

Also, I didn’t have to do it in any of the above examples, but the first and second arguments to integrate_hyperexponential() must have their common factors canceled (a, d = a.cancel(d, include=True) will do this for you), or you will get a wrong result! I spent a good day of debugging before I figured this out. The existence of other bugs didn’t help.
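For intuition about what that cancellation does, here is a minimal pure-Python sketch of dividing the gcd out of a numerator and denominator, representing polynomials as coefficient lists with the highest degree first (a stand-in for Poly.cancel(), not how SymPy actually implements it):

```python
from fractions import Fraction

def polydiv(a, b):
    # Polynomial long division; coefficient lists, highest degree first.
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = []
    while len(a) >= len(b):
        c = a[0] / b[0]
        q.append(c)
        padded = b + [Fraction(0)] * (len(a) - len(b))
        a = [ai - c * bi for ai, bi in zip(a, padded)][1:]
    return q, a  # quotient, remainder

def trim(p):
    # Drop leading zero coefficients, keeping at least one entry.
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def polygcd(a, b):
    # Euclidean algorithm; the result is made monic.
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while any(b):
        _, r = polydiv(a, b)
        a, b = b, (trim(r) if r else [Fraction(0)])
    return [c / a[0] for c in a]

def cancel(a, d):
    # Divide the gcd out of both, like a.cancel(d, include=True) on Polys.
    g = polygcd(a, d)
    return polydiv(a, g)[0], polydiv(d, g)[0]

# (t**2 + t) / (t + 1) cancels to t / 1:
assert cancel([1, 1, 0], [1, 1]) == ([1, 0], [1])
```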
