Monday, March 9, 2015

Horner's Rule for Polynomial Computation


Suppose I had a polynomial a_n x^n + a_(n-1) x^(n-1) + … + a_1 x + a_0, and a point x* at which I wanted to evaluate that polynomial.

Now our immediate inclination is to just substitute the point straight into the polynomial to get a_n (x*)^n + a_(n-1) (x*)^(n-1) + … + a_1 (x*) + a_0, and then work it through in the obvious way: we raise x* to the nth power and multiply by a_n, then we raise x* to the (n-1)th power and multiply by a_(n-1), and so on, adding all the terms together at the end.
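In Java, that straightforward substitution might look something like this (just a minimal sketch; the method name naiveEval and the convention that coeffs[i] holds the coefficient a_i are assumptions of mine):

// The "obvious" approach: compute each power of x* from scratch and sum the terms.
// coeffs[i] holds a_i, so coeffs.length - 1 is the degree n.
public static double naiveEval(double[] coeffs, double xStar) {
    double sum = 0.0;
    for (int i = 0; i < coeffs.length; i++) {
        sum += coeffs[i] * Math.pow(xStar, i);  // each Math.pow call redoes its multiplications
    }
    return sum;
}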

That seems like a lot of work though. We could be more efficient by working it the other way around: that is, starting with x* and caching the intermediate powers of x* as we work our way up. That would definitely cut down on some of the multiplications. But is there actually an even better way to do this polynomial evaluation?
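That caching idea might be sketched in Java like this (again, the method name and the coeffs[i]-holds-a_i layout are assumptions of mine):

// Build each power of x* from the previous one instead of starting over,
// so evaluating a degree-n polynomial costs only about 2n multiplications.
public static double incrementalEval(double[] coeffs, double xStar) {
    double sum = coeffs[0];
    double power = 1.0;               // holds (x*)^i as we go
    for (int i = 1; i < coeffs.length; i++) {
        power *= xStar;               // (x*)^i from (x*)^(i-1)
        sum += coeffs[i] * power;
    }
    return sum;
}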

As it turns out, there is – we can use Horner's Rule!

Not only is Horner's Rule more efficient than either of the above approaches, it is actually an optimal algorithm for polynomial evaluation: we can prove mathematically that no method for evaluating an arbitrary polynomial can use fewer additions or fewer multiplications than Horner's Rule does.

So let's investigate how this rule works, and write up some Java code to test it out!
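As a quick preview: the rule rests on rewriting the polynomial in nested form, a_n x^n + a_(n-1) x^(n-1) + … + a_1 x + a_0 = ( … ((a_n x + a_(n-1)) x + a_(n-2)) x + … ) x + a_0, which takes just n multiplications and n additions. A bare-bones Java sketch of that idea might look like the following (the class and method names here are placeholders of my own, not necessarily what the finished code will use):

public class Horner {
    // Horner's Rule: fold the coefficients from the highest degree downward,
    // multiplying the running result by x* and adding the next coefficient.
    // coeffs[i] holds a_i, so coeffs.length - 1 is the degree n.
    public static double hornerEval(double[] coeffs, double xStar) {
        double result = 0.0;
        for (int i = coeffs.length - 1; i >= 0; i--) {
            result = result * xStar + coeffs[i];
        }
        return result;
    }

    public static void main(String[] args) {
        // Example: 2x^3 + 0x^2 + 5x + 1 evaluated at x* = 3 gives 70.
        double[] coeffs = {1, 5, 0, 2};
        System.out.println(hornerEval(coeffs, 3.0));  // prints 70.0
    }
}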

[We're back with the SICP exercise blog posts! This one is inspired by Exercise 2.34.]