## The Risch Algorithm: Part 1

My work this week isn’t very interesting, even insofar as my work any week is interesting, so this week I have elected to start a series of blog posts about the Risch Algorithm. I will start with the basics in this post.

Anyone who has taken calculus knows a handful of heuristics for computing integrals: u-substitution, partial fractions, integration by parts, trigonometric substitution, and table lookup are a few of the more popular ones. These are general enough to handle most integrals encountered in physics, engineering, and so on, as well as most of those that arise from solving differential equations in the same fields. But they fall short in two ways. First, they are just heuristics: if they fail, it does not mean that no integral exists. This makes them useless for proving that certain functions, such as $e^{-x^2}$, have no elementary antiderivative, no matter how hard you try to find one. Second, they work only for relatively simple functions. For example, suppose you have a rational function in $\log{x}$ and $x$, such as $\frac{(\log{x})^2 + 2\log{x} + x^2 + 1}{x\log{x} + 2x^3}$. We are not interested in integrating this function, but rather in recovering it from its derivative, $- \frac{1 + 7 x^{2} \log{x} + \log{x} + (\log{x})^2 + 3 x^{2} + 6 x^{2} (\log{x})^2 + (\log{x})^3 + 2 x^{4}}{4 x^{4} \log{x} + x^{2} (\log{x})^2 + 4 x^{6}}$. The only method named above that comes even close to being applicable to this integrand is partial fractions. That requires a multivariate partial fraction decomposition (with respect to $x$ and $\log{x}$), and it gives $-{\frac {2\,{x}^{2}+1+\log{x} }{{x}^{2}}}+{\frac {-1+8\,{x}^{4}-16\,{x}^{6}-{x}^{2}}{ \left( \log{x} +2\,{x}^{2} \right) ^{2}{x}^{2}}}+{\frac {-3\,{x}^{2}+12\,{x}^{4}-1}{ \left( \log{x} +2\,{x}^{2} \right) {x}^{2}}}$, which brings us no closer to a solution.
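As a sanity check, the "easy" direction of this example can be verified in a few lines of SymPy (using today's SymPy, which of course postdates this post):

```python
from sympy import symbols, log, diff, cancel

x = symbols('x', positive=True)

# The function with the elementary antiderivative...
f = (log(x)**2 + 2*log(x) + x**2 + 1) / (x*log(x) + 2*x**3)

# ...and the complicated derivative quoted in the text.
g = -(1 + 7*x**2*log(x) + log(x) + log(x)**2 + 3*x**2
      + 6*x**2*log(x)**2 + log(x)**3 + 2*x**4) / \
     (4*x**4*log(x) + x**2*log(x)**2 + 4*x**6)

# Both are rational functions in x and log(x), so cancel() can
# verify that they agree exactly.
assert cancel(diff(f, x) - g) == 0
```

Differentiating is mechanical; the hard direction, going from $g$ back to $f$, is the subject of this series.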

The reason I started with a function and then computed its derivative was to show how easy it is to come up with a very complicated function that nonetheless has an elementary antiderivative. Therefore, the methods from calculus are not the ones to use if we want an integration algorithm that is complete. The Risch Integration Algorithm is based on a completely different approach. At its core lies Liouville’s Theorem, which gives us the form of any elementary antiderivative. (I should point out that heuristics like these are still useful in a computer algebra system such as SymPy, as fast preprocessors to the full integration algorithm.)

The Risch Algorithm works by doing polynomial manipulations on the integrand. This is entirely deterministic (non-heuristic), and it gives us the power of all the theorems of algebra, allowing us to actually prove that an antiderivative does not exist when that is the case. To start off, we have to look at derivations. As I said, everything in the Risch Algorithm is treated algebraically (as opposed to analytically), so the first thing to look at is the derivative itself. We define a derivation as any function $D$ on a ring $R$ that satisfies two properties:

1. $D(a + b) = Da + Db$ (Sum Rule),
2. $D(ab) = aDb + bDa$ (Product Rule)

for any $a, b \in R$. Furthermore, define the set of constant elements as $Const_D(R) = \{a \in R\textrm{ such that }Da = 0\}$. From just these two rules, you can prove all the rules from calculus, such as the power rule and the quotient rule. Defining things algebraically lets us avoid analytic problems, such as discontinuities and the need to prove convergence all the time. Another problem from analysis is the multivalued nature of certain functions, namely the natural logarithm. We get around this by defining $\log{a}$ (up to an additive constant) by $D\log{a} = \frac{Da}{a}$, for $a \neq 0$. From this definition we can prove that the famous logarithmic identities $\log{ab} = \log{a} + \log{b}$ and $\log{a^n} = n\log{a}$ hold at the level of derivatives, again using only the two rules for a derivation given above. For example, $D\log{ab}=\frac{Dab}{ab}=\frac{aDb + bDa}{ab} = \frac{bDa}{ab} + \frac{aDb}{ab} =$ $\frac{Da}{a} + \frac{Db}{b}=D\log{a} + D\log{b}=D(\log{a} + \log{b})$.
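That derivation-level identity is easy to check concretely in SymPy for a couple of arbitrarily chosen (positive) elements playing the roles of $a$ and $b$:

```python
from sympy import symbols, log, diff, cancel

x = symbols('x', positive=True)

# Two arbitrary elements to play the roles of a and b.
a = x**2 + 1
b = x**3 + x + 2

# D log(ab) versus D log(a) + D log(b): both are the logarithmic
# derivative identity at work.
lhs = diff(log(a*b), x)
rhs = diff(log(a), x) + diff(log(b), x)

# The difference is a rational function in x, so cancel() settles it.
assert cancel(lhs - rhs) == 0
```

Note that SymPy never needs to decide which branch of the logarithm is meant; only the derivatives are compared, which is exactly the algebraic point of view.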

The above definition of the natural logarithm gives the first insight into how the integration algorithm works: we define transcendental functions in terms of their derivatives. So if $t = e^x$, then $Dt/t = 1$. We could define all of the trigonometric functions in terms of $e^x$ and $\log{x}$ if we used $\sqrt{-1}$, but we can also avoid this. For example, if $t = \tan{x}$, then $Dt = 1 + t^2$, because $\frac{d}{dx}\tan{x} = \sec^2{x} = 1 + \tan^2{x}$.
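SymPy's ordinary differentiator already exhibits this pattern; each derivative below is a polynomial in the function itself (here renamed $t$), or involves it not at all:

```python
from sympy import symbols, tan, exp, log, diff

x, t = symbols('x t')

# D tan(x) is a polynomial in tan(x): sec^2(x) = 1 + tan^2(x).
assert diff(tan(x), x).subs(tan(x), t) == 1 + t**2

# D e^x is a polynomial in e^x (namely t itself).
assert diff(exp(x), x).subs(exp(x), t) == t

# D log(x) involves no log(x) at all: it is just 1/x.
assert diff(log(x), x) == 1/x
```

Once the substitution is made, nothing trigonometric or exponential remains; everything is polynomial arithmetic in $x$ and $t$.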

We say that $t\in K$ is a monomial over the field $k$ with respect to a derivation $D$ if it satisfies

1. $t$ is transcendental over $k$,
2. $Dt\in k[t]$.

The first condition is necessary because we are only going to deal with the transcendental version of the Risch Algorithm (the algebraic case is solved too, but the solution method is quite different, and I am not implementing it this summer). The second condition just says that the derivative of $t$ is a polynomial in $t$ with coefficients in $k$ (for us, rational functions in $x$). The functions I mentioned above all satisfy these properties over $k = \mathbb{Q}(x)$. Theorems in analysis show that $\log{x}$, $e^x$, and $\tan{x}$ are all transcendental over $\mathbb{Q}(x)$; this is actually the only use of analysis that we make in the integration algorithm. Also, we see that if $t_1=\log{x}$, $t_2=e^x$, and $t_3=\tan{x}$, then $Dt_1=\frac{1}{x}$, $Dt_2=t_2$, and $Dt_3=1 + t_3^2$, which are all polynomials in the respective $t_i$ with rational function coefficients in $x$. In the algorithm, $K$ is actually a tower of monomial extensions of $\mathbb{Q}$, so $t_n$ is a monomial over $\mathbb{Q}(x, t_1, \dots, t_{n-1})$. This allows us to work with functions like $e^{\tan{x}}$. We can’t make $t=e^{\tan{x}}$ directly, because $\frac{d}{dx}e^{\tan{x}} = (1 + \tan^2{x})e^{\tan{x}}$ is not a polynomial in $t$ alone (it also contains $\tan{x}$). But if we first let $t_1$ be such that $Dt_1=1 + t_1^2$, i.e., $t_1=\tan{x}$, then we can let $t_2$ be such that $Dt_2=(1 + t_1^2)t_2$, i.e., $t_2=e^{\tan{x}}$. Remember that the $t_i$ are all “functions” of $x$, but there is no need to write $t=t(x)$ as long as we remember that $Dt\neq 0$, i.e., $t\not \in Const_D(K)$. This is another advantage of algebraic over analytic methods: it allows us to reduce an integrand to a rational function in the “symbols” $x$ and $t_1, t_2, \dots, t_n$. By convention, we make the first extension $t_0$ such that $Dt_0=1$, i.e., $t_0=x$. I will just call it $x$ here instead of $t_0$, to avoid confusion.

This is the preparsing that I alluded to in an earlier post and have not implemented yet. The reason is not just that I haven’t gotten around to it. We have to be careful when building up the extension that each element is indeed transcendental over the already built-up field $k$. For example, although it appears transcendental, the function $e^{\frac{1}{2}\log{(1 + x^2)}}$ is really algebraic, because it equals $\sqrt{1 + x^2}$. There are additional requirements, such as that each new extension is not the derivative or logarithmic derivative of an element of $k$ (see also the example I gave in the previous post). This is the part I mentioned in my previous post that is not written out as fully as the other algorithms in Bronstein’s book. So it is algorithmically solved, just like the rest of the algorithm, but it is non-trivial and may end up being the hardest part of the algorithm for me to implement, just because it will probably require the most figuring out on my part.

So we can see that we can convert a transcendental integrand, such as the one above, into a rational function in $x$ and monomial extensions $t_1, t_2, \dots, t_n$. For example, with $t = \log{x}$, the above integrand becomes $- \frac{1 + t + t^{2} + 3 x^{2} + 6 t^{2} x^{2} + 7 t x^{2} + t^{3} + 2 x^{4}}{t^{2} x^{2} + 4 t x^{4} + 4 x^{6}}$. We then perform certain polynomial manipulations on this expression, using the fact that $Dx=1$ and $Dt=\frac{1}{x}$. For the transcendental case of the Risch Algorithm, this is similar to the rational function integration that I outlined in an earlier post, and it has Liouville’s Theorem at its core. This is where I will start off next time.
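The renaming step itself is just a substitution, which SymPy can confirm against the rational function displayed above:

```python
from sympy import symbols, log, cancel

x, t = symbols('x t', positive=True)

# The complicated derivative from the start of the post...
g = -(1 + 7*x**2*log(x) + log(x) + log(x)**2 + 3*x**2
      + 6*x**2*log(x)**2 + log(x)**3 + 2*x**4) / \
     (4*x**4*log(x) + x**2*log(x)**2 + 4*x**6)

# ...becomes a plain rational function in x and t once log(x)
# is renamed t.
g_t = g.subs(log(x), t)
expected = -(1 + t + t**2 + 3*x**2 + 6*t**2*x**2 + 7*t*x**2
             + t**3 + 2*x**4) / (t**2*x**2 + 4*t*x**4 + 4*x**6)
assert cancel(g_t - expected) == 0
```

(In present-day SymPy, the finished version of all this machinery lives in `sympy.integrals.risch`, exposed as `risch_integrate`.)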
