In Part 1 of this series of blog posts, I gave what I believed to be the prerequisites to understanding the mathematics behind the Risch Algorithm (aside from a basic understanding of derivatives and integrals from calculus). In this post, I will elaborate on what is meant by “elementary function,” a term that is thrown around a lot when talking about Risch integration.

The usual definition of elementary function given in calculus is any function that is a constant, a polynomial, an exponential ($e^x$, $a^x$), a logarithm ($\ln{x}$, $\log_a{x}$), one of the standard trig functions or their inverses (sin, cos, tan, arcsin, arccos, arctan, etc.), or any combination of these functions via addition, subtraction, multiplication, division, taking powers, and composition. Thus, even a function as crazy as $\sin{\left(e^{\sqrt{x}} + \arctan{\left(\log{x}\right)}\right)}$ is elementary, by this definition.

But for the rigorous definition of an elementary function, we must take into consideration what field we are working over. Before I get into that, I need some definitions. Suppose that $K$ is the field we are working over. You can imagine that $K = \mathbb{Q}(x)$, the field of rational functions in $x$ with rational number coefficients. As with the previous post, imagine $x$ as a function, for example, the identity function $x \mapsto x$. Let $L$ be a differential extension of $K$. We have not defined this, but it basically means that $L$ is a field containing $K$ and our derivation works the same in $L$ as it does in $K$. You can imagine here that $L = \mathbb{Q}(x, e^x)$.

We say that $t \in L$ is a **primitive** over $K$ if $t' \in K$. In other words, the derivative of $t$ does not contain $t$, only elements of $K$. Obviously, any element of $K$ is a primitive over $K$, because the derivative of any element of a field is again an element of that field (you can see this from the definition of a derivation, given in the last post in the series). But also, if $t = \log{b}$ for some $b \in K$, then $t$ is a primitive over $K$, because $t' = \frac{b'}{b} \in K$.
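As a quick sanity check of the primitive definition (the example function here is my own choice, not one from the post), SymPy confirms that $t = \log{\left(x^2 + 1\right)}$ is a primitive over $\mathbb{Q}(x)$:

```python
from sympy import symbols, log, diff

x = symbols('x')

# t = log(b) with b = x**2 + 1, an element of Q(x)
b = x**2 + 1
t = log(b)

# Dt = b'/b is again a rational function, i.e., an element of Q(x),
# so t is a primitive over Q(x) even though t itself is not in Q(x).
print(diff(t, x))                  # 2*x/(x**2 + 1)
print(diff(t, x) - diff(b, x)/b)   # 0, confirming t' = b'/b
```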

We say that $t \in L$ is a **hyperexponential** over $K$ if $\frac{t'}{t} \in K$. Written another way, $t' = at$ for some $a \in K$. We know from calculus that the functions that satisfy differential equations of the type $y' = ay$ are exactly the exponential functions, i.e., $y = c e^{\int a\, dx}$.
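Similarly (again my own illustrative example), $t = e^{x^2}$ is hyperexponential over $\mathbb{Q}(x)$, since $\frac{t'}{t} = 2x \in \mathbb{Q}(x)$:

```python
from sympy import symbols, exp, diff, cancel

x = symbols('x')
t = exp(x**2)

# t' = 2*x*exp(x**2), so t'/t = 2*x, an element of Q(x):
# t satisfies the first-order equation t' = a*t with a = 2*x.
print(cancel(diff(t, x)/t))   # 2*x
```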

The last class of functions that needs to be considered is **algebraic functions**. I will not go into depth on algebraic functions, because my work this summer is only on integrating purely transcendental functions. Therefore, the only concern we shall have with algebraic functions in relation to the integration algorithm is to make sure that whatever function we are integrating is *not* algebraic, because the transcendental algorithms will not be valid if it is. Hopefully in a future post I will be able to discuss the Risch Structure Theorems, which give necessary and sufficient conditions for determining if a Liouvillian function (see next paragraph) is algebraic.

Now, we say that a function $t$ is **Liouvillian** over $K$ if $t$ is algebraic, a primitive, or a hyperexponential over $K$. For $t$ to be a **Liouvillian monomial** over $K$, we have the additional condition that $t$ is transcendental over $K$. This just means that we cannot consider something like $e^{x/2}$ over $\mathbb{Q}(x, e^x)$ as a Liouvillian monomial, since it satisfies $t^2 = e^x$ and hence is algebraic over that field. Otherwise (I believe) we could run into undecidability problems.
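To see the algebraic obstruction concretely (an illustration of my own), $e^{x/2}$ squares to $e^x$, and SymPy even performs that squaring automatically:

```python
from sympy import symbols, exp

x = symbols('x')

# exp(x/2) satisfies t**2 = exp(x), so it is algebraic (of degree 2)
# over any field that already contains exp(x).
t = exp(x/2)
print(t**2)               # exp(x)
print(t**2 == exp(x))     # True
```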

We call $t$ a **logarithm** over $K$ if $t' = \frac{b'}{b}$ for some $b \in K$, i.e., $t = \log{b}$. We call $t$ an **exponential** over $K$ if $\frac{t'}{t} = b'$ (or $t' = b' t$) for some $b \in K$, i.e., $t = e^{b}$. Note the difference between an *exponential* monomial and a *hyperexponential* monomial: for a hyperexponential, $\frac{t'}{t}$ merely has to be an element of $K$, while for an exponential it must be the derivative of an element of $K$.
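A quick illustration of the distinction (my own example): $\sqrt{x}$ is hyperexponential over $\mathbb{Q}(x)$ because $\frac{t'}{t} = \frac{1}{2x} \in \mathbb{Q}(x)$, but it is not an exponential, because $\frac{1}{2x}$ is not the derivative of any rational function (its integral is $\frac{\log{x}}{2}$):

```python
from sympy import symbols, sqrt, diff, cancel, integrate, log

x = symbols('x')
t = sqrt(x)

# t'/t lands in Q(x) ...
ratio = cancel(diff(t, x)/t)
print(ratio)                 # 1/(2*x)

# ... but integrating 1/(2*x) gives log(x)/2, which is not in Q(x),
# so 1/(2*x) = b' has no solution b in Q(x): sqrt(x) is hyperexponential
# but not an exponential over Q(x).
print(integrate(ratio, x))   # log(x)/2
```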

We can finally give the rigorous definition of an elementary extension. $L$ is an **elementary extension** of $K$ if there are $t_1, \ldots, t_n$ such that $L = K(t_1, \ldots, t_n)$ and $t_i$ is elementary (a logarithm, an exponential, or algebraic) over $K(t_1, \ldots, t_{i-1})$ for all $i$. An **elementary function** is any element of an elementary extension of $\mathbb{Q}(x)$ with the derivation $\frac{d}{dx}$. A function $f$ has an **elementary integral** over $K$ if there exists an elementary extension $L$ of $K$ and $g \in L$ such that $g' = f$, i.e., $\int f\, dx = g$.
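SymPy's `integrate()` shows both sides of this definition (my examples, not from the post): $\frac{1}{x}$ has an elementary integral living in the extension $\mathbb{Q}(x, \log{x})$, while $\frac{e^x}{x}$ famously has none, so SymPy answers with the non-elementary special function $\operatorname{Ei}$:

```python
from sympy import symbols, integrate, exp, log, Ei

x = symbols('x')

# The integral of 1/x is not in Q(x), but it is an element of the
# elementary extension Q(x, log(x)), so 1/x has an elementary integral.
print(integrate(1/x, x))        # log(x)

# exp(x)/x has no elementary integral over Q(x); SymPy expresses the
# answer with the exponential integral Ei, which is not elementary.
print(integrate(exp(x)/x, x))   # Ei(x)
```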

Usually, we start with $\mathbb{Q}(x)$, the field of rational functions in $x$ with rational number coefficients. We then build up an elementary extension one function at a time, with each function either being a logarithm or an exponential of what we have already built up, or algebraic over it. As I noted above, we will ignore algebraic functions here. We generally start with $\mathbb{Q}(x)$ because it is computable (important problems such as the zero equivalence problem or the problem of determining certain field isomorphisms are decidable), but the above definition lets us start with other base fields as well.

Now you may be wondering: we’ve covered algebraic functions, exponentials, and logarithms, and obviously rational functions are elements of $\mathbb{Q}(x)$, but what about trigonometric functions? Well, from a theoretical standpoint, we can make our lives easier by noticing that all the common trigonometric functions can be represented as exponentials and logarithms over $\mathbb{Q}(i)(x)$. For example, $\sin{x} = \frac{e^{ix} - e^{-ix}}{2i}$, and all the other common trig functions and their inverses have similar representations as complex exponentials or logarithms. However, from an algorithmic standpoint, we don’t want to convert all trig expressions into complex exponentials and logarithms in order to integrate them. For one thing, our final result will be in terms of complex exponentials and logarithms, not the original functions we started with, and converting them back may or may not be an easy thing to do. Also, aside from the fact that we have different functions than we were expecting, we will also end up with an answer containing $i$, even if our original integrand did not.
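In SymPy, this rewriting is available as `rewrite(exp)` (a real SymPy method; the numeric spot-check is mine):

```python
from sympy import symbols, sin, exp, I, Abs

x = symbols('x')

# sin(x) rewritten as a combination of the complex exponentials
# exp(I*x) and exp(-I*x); note that I appears in the result.
form = sin(x).rewrite(exp)
print(form)

# the two forms agree numerically (the difference at x = 0.7 is negligible)
err = Abs(form - sin(x)).subs(x, 0.7).evalf()
print(err)
```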

Fortunately, integrating tangents directly is a solved problem, just like integrating algebraic, exponential, or logarithmic functions is solved. We can’t integrate functions like $\sin{x}$ or $\cos{x}$ directly as monomials like we can with $e^x$ or $\tan{x}$, because the derivatives of sin and cos are not polynomials in their respective selves with coefficients in $\mathbb{Q}(x)$ (compare $\frac{d}{dx}\tan{x} = 1 + \tan^2{x}$ with $\frac{d}{dx}\sin{x} = \cos{x}$). However, we can use a trick or two to integrate them. One way is to rewrite $\cos{x} = \frac{1 - \tan^2{\left(\frac{x}{2}\right)}}{1 + \tan^2{\left(\frac{x}{2}\right)}}$ and proceed to integrate it as a tangent. Another alternative is to write $\cos{x} = \sqrt{1 - \sin^2{x}}$. This function is algebraic over $\mathbb{Q}(x, \sin{x})$, but if we do not already have $\sin{x}$ in our differential extension, it is transcendental, and we can rewrite it in terms of $\tan{\left(\frac{x}{2}\right)}$ (this is used in Bronstein’s text, so I believe what I just said is correct, though I haven’t verified it with the structure theorems just yet). These both work using the relevant identities for sin too. Of course, there is still the problem of rewriting the final integral back in terms of sin or cos. Otherwise, you will get something like $\frac{2\tan{\left(\frac{x}{2}\right)}}{1 + \tan^2{\left(\frac{x}{2}\right)}}$ instead of $\sin{x}$ for $\int \cos{x}\, dx$. Bronstein doesn’t elaborate on this too much in his book, so it is something that I will have to figure out on my own.
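Both ingredients of the $\tan{\left(\frac{x}{2}\right)}$ trick can be checked in SymPy (my own sanity check): the half-angle identity for cosine holds, and the derivative of $\tan{\left(\frac{x}{2}\right)}$ is a polynomial in $\tan{\left(\frac{x}{2}\right)}$ itself, which is what makes it usable as a monomial:

```python
from sympy import symbols, tan, cos, diff, Abs

x = symbols('x')
u = tan(x/2)

# the half-angle rewrite: cos(x) == (1 - tan(x/2)**2)/(1 + tan(x/2)**2);
# spot-checked numerically at x = 0.7
half_angle = (1 - u**2)/(1 + u**2)
err = Abs(cos(x) - half_angle).subs(x, 0.7).evalf()
print(err)   # negligibly small

# tan(x/2) is a legitimate monomial: its derivative (1 + tan(x/2)**2)/2
# is a polynomial in tan(x/2) itself
print(diff(u, x) - (1 + u**2)/2)   # 0
```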

The second option I gave above leads nicely into the main point I wanted to make here about elementary functions. Notice that everywhere in the definitions above, things depend on the field we are working in. Therefore, $\cos{x}$ cannot be a (transcendental) monomial over $\mathbb{Q}(x, \sin{x})$, but it can be over $\mathbb{Q}(x)$. Also, the error function, defined as $\operatorname{erf}{(x)} = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt$, cannot be an element of an elementary extension of $\mathbb{Q}(x)$, but it can be of $\mathbb{Q}(x, \operatorname{erf}{(x)})$. In fact, this is how we can integrate functions like $e^{-x^2}$ in terms of some special functions, including the error function: by manually adding $\operatorname{erf}{(x)}$ (or whatever) to our differential extension. Therefore, the usual definition of an elementary anti-derivative and the above Risch Algorithm definition of an elementary integral coincide only when the extension consists only of elementary functions in the sense of the usual definition (note that above, our final fields are $\mathbb{Q}(x, \sin{x}, \cos{x})$ and $\mathbb{Q}(x, e^{-x^2}, \operatorname{erf}{(x)})$, respectively).
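SymPy’s `integrate()` already returns exactly this kind of result for $e^{-x^2}$ (the calls below are real SymPy; the verification is mine):

```python
from sympy import symbols, exp, integrate, erf, sqrt, pi, diff

x = symbols('x')

# exp(-x**2) has no elementary integral over Q(x), but adjoining erf(x)
# to the differential extension yields one.
anti = integrate(exp(-x**2), x)
print(anti)                         # sqrt(pi)*erf(x)/2
print(diff(anti, x) - exp(-x**2))   # 0, so it really is an anti-derivative
```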

Originally, I was also going to talk about Liouville’s Theorem in this blog post, but I think it has already gotten long enough (read “I’m getting tired”), so I’ll put that off until next time.



Hey, awesome series on integration. It’s amazing how few people know about how CAS’s implement their algorithms, and you’re amazing for implementing them.

Anyway, the tan(t/2) trick is well known (Weierstrass substitution) and can convert any nasty combination of trig functions into a rational function. Converting back is a pattern matching exercise.

Hey.

I never knew that it was called Weierstrass substitution. Actually, you just have to do

`result.subs(tan(x/2), sin(x)/(1 + cos(x)))`

(or `(1 - cos(x))/sin(x)` if you like), and then it’s just a matter of doing simplification on the result (SymPy’s `trigsimp()` is going to have to get a little smarter whenever I implement this, actually).
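To sketch what that back-substitution looks like in practice (my example, using plain `subs` and `simplify`; the starting expression is the tangent-form equivalent of $\sin{x}$):

```python
from sympy import symbols, sin, cos, tan, simplify, Abs

x = symbols('x')

# a tangent-form result, mathematically equal to sin(x)
result = 2*tan(x/2)/(1 + tan(x/2)**2)

# substitute tan(x/2) -> sin(x)/(1 + cos(x)), then simplify
back = simplify(result.subs(tan(x/2), sin(x)/(1 + cos(x))))
print(back)

# whatever form simplify() settles on, it agrees with sin(x) numerically
err = Abs(back - sin(x)).subs(x, 0.7).evalf()
print(err)
```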