# Fast Polynomial Interpolation

## Introduction

Polynomial interpolation is the process of taking a set of known points and deriving a polynomial that passes through those data points exactly. Several methods are commonly used for this purpose. The most familiar one is to write down a general form for the polynomial, substitute the data points into it one at a time, and solve the resulting system of equations for the coefficients. Lagrange interpolation is a cleverer formulation that expresses the polynomial as a sum of other polynomials, each of which is zero at all but one of the data points. Here, I will present another method, mathematically equivalent to the Newton form, but I will present it from the standpoint of writing a computer program to implement it, and I will also extend its scope somewhat.

## The Method

Suppose that we have n points: (x_{1}, y_{1}), ..., (x_{n}, y_{n}). We can express the set of all polynomials that contain the point (x_{n}, y_{n}) as

**(1)** y = y_{n} + (x - x_{n}) f(x),

where f is some other polynomial. Now, if we can determine a relationship between the values of f and the remaining data points, we can repeat this process until there aren't any points left.

We know that our final result must pass through the point (x_{i}, y_{i}) for each i, so we can substitute each remaining point into (1) as follows:

**(2)** y_{i} = y_{n} + (x_{i} - x_{n}) f(x_{i}),

**(3)** f(x_{i}) = (y_{i} - y_{n}) / (x_{i} - x_{n}).
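As a quick sanity check (my own example, not from the original derivation), consider the parabola y = x^2 through (1, 1), (2, 4), (3, 9): peeling off (3, 9) with (3) should leave a polynomial f through (1, 4) and (2, 5), namely the line f(x) = x + 3, since x^2 = 9 + (x - 3)(x + 3).

```python
# Check the reduction (3) on y = x^2 sampled at x = 1, 2, 3,
# peeling off the last point (x_n, y_n) = (3, 9).
points = [(1, 1), (2, 4), (3, 9)]
xn, yn = points[-1]

# f(x_i) = (y_i - y_n) / (x_i - x_n) for the remaining points.
reduced = [(x, (y - yn) / (x - xn)) for x, y in points[:-1]]
print(reduced)  # [(1, 4.0), (2, 5.0)], i.e. the line f(x) = x + 3
```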

Thus, if we apply the transformation described in (3) to all the other data points, we may repeat the process and eventually calculate the desired polynomial. Moreover, this process extends easily to other types of data, such as derivatives at points where we already know the value. Suppose that for each x_{i}, we know y_{i}, y'_{i}, ..., y^{(k_i)}_{i}. Now, take the kth derivative of (1) (the constant y_{n} drops out for k >= 1) to find the corresponding values for f:

**(4)** y^{(k)} = (x - x_{n}) f^{(k)}(x) + k f^{(k-1)}(x).

For the derivatives at x_{n}, the first term vanishes and we have:

**(5)** y^{(k)}_{n} = k f^{(k-1)}(x_{n}),

**(6)** f^{(k-1)}(x_{n}) = y^{(k)}_{n} / k.

For the derivatives at the other data points, we have:

**(7)** y^{(k)}_{i} = (x_{i} - x_{n}) f^{(k)}(x_{i}) + k f^{(k-1)}(x_{i}),

**(8)** f^{(k)}(x_{i}) = (y^{(k)}_{i} - k f^{(k-1)}(x_{i})) / (x_{i} - x_{n}).
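Equations (6) and (8) can be spot-checked on a concrete example of my own: take y = x^3 with x_{n} = 1, so that (1) gives f(x) = (x^3 - 1)/(x - 1) = x^2 + x + 1, whose derivatives are easy to compare directly.

```python
# Spot-check (6) and (8) with y = x^3, x_n = 1, f(x) = x^2 + x + 1.
f  = lambda x: x * x + x + 1   # f
fp = lambda x: 2 * x + 1       # f'
# At x_n = 1 the derivatives of y are y' = 3, y'' = 6, y''' = 6.
# (6): f^(k-1)(x_n) = y^(k)(x_n) / k, for k = 1, 2, 3:
assert f(1)  == 3 / 1
assert fp(1) == 6 / 2
assert 2     == 6 / 3          # f'' is the constant 2
# (8) at x_i = 2, where y' = 12 and y'' = 12:
assert fp(2) == (12 - 1 * f(2))  / (2 - 1)   # k = 1
assert 2     == (12 - 2 * fp(2)) / (2 - 1)   # k = 2
```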

Note that (8) is well defined: if we process the derivatives in order of increasing k, then f^{(k-1)}(x_{i}) has already been computed by the time we need it. Now, using equations (1), (3), (6), and (8), we have ourselves an algorithm.
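Before looking at the full program below, note that the value-only version of the algorithm (equations (1) and (3)) fits in a few lines. The following Python sketch is mine, using exact rational arithmetic; all names are my own.

```python
from fractions import Fraction

def interpolate_values(points):
    """Coefficients, lowest degree first, of the polynomial
    through the given (x, y) points, using (1) and (3)."""
    if not points:
        return []
    xn, yn = points[-1]
    # Reduce the remaining points with (3), then recurse to get f.
    f = interpolate_values([(x, Fraction(y - yn, x - xn)) for x, y in points[:-1]])
    # Rebuild with (1): y = y_n + (x - x_n) * f(x).
    coeffs = [Fraction(yn)] + f      # y_n plus f shifted up one degree
    for i in range(len(f)):
        coeffs[i] -= xn * f[i]       # subtract x_n * f(x)
    return coeffs

print(interpolate_values([(1, 1), (2, 4), (3, 9)]) == [0, 0, 1])  # True: x^2
```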

## The Program

Now, this isn't a course in programming; it's an explanation of the method, so you should be able to write your own program at this point. However, since I had to do testing anyway, I have written one myself. It is written in Perl, and the source is provided below.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each point is [x, y, y', y'', ...].
my @points = ([0, 1, 4], [3, 4, 6, 4]);
my @coeffs = ();   # coefficients of the result, lowest degree first
my @tmp    = (1);  # product of (x - x_j) over the points processed so far

while (@points) {
    my $point = pop @points;
    unshift @tmp, 0;   # multiply the product by x ...
    push @coeffs, 0;
    foreach my $i (0 .. @tmp - 2) {
        $coeffs[$i] += $point->[1] * $tmp[$i + 1];  # add y_n * old product, per (1)
        $tmp[$i] -= $point->[0] * $tmp[$i + 1];     # ... minus x_n times it: (x - x_n) * old
    }
    # Transform the remaining points, per (3) and (8).
    foreach my $p (@points) {
        $p->[1] = ($p->[1] - $point->[1]) / ($p->[0] - $point->[0]);
        $p->[$_] = ($p->[$_] - ($_ - 1) * $p->[$_ - 1]) / ($p->[0] - $point->[0])
            foreach (2 .. @$p - 1);
    }
    # The derivatives at x_n become values for f, per (6).
    my $new_point = [$point->[0]];
    $new_point->[$_] = $point->[$_ + 1] / $_ foreach (1 .. @$point - 2);
    push @points, $new_point if (@$new_point > 1);
}

$coeffs[$_] = "$coeffs[$_]x^$_" foreach (0 .. $#coeffs);
print join(' + ', reverse @coeffs), "\n";
```

Since this was just a proof of concept, I didn't put in any way to enter data points; you will have to edit the @points variable to specify them. The syntax for each point is [x_{i}, y_{i}, y'_{i}, ...].
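For readers who would rather not run Perl, here is my rough Python port of the same program, using exact fractions so the divisions stay clean; it follows the Perl structure closely, and the sample data is the same.

```python
from fractions import Fraction

def interpolate(points):
    """points: a list of [x, y, y', y'', ...] entries, as in the
    Perl version; returns coefficients, lowest degree first."""
    points = [[Fraction(v) for v in p] for p in points]
    coeffs = []    # accumulated result
    tmp = [1]      # product of (x - x_j) over the processed points
    while points:
        point = points.pop()
        tmp.insert(0, 0)                          # multiply the product by x ...
        coeffs.append(0)
        for i in range(len(tmp) - 1):
            coeffs[i] += point[1] * tmp[i + 1]    # add y_n * old product, per (1)
            tmp[i] -= point[0] * tmp[i + 1]       # ... minus x_n times it: (x - x_n) * old
        for p in points:                          # transform the other points, (3) and (8)
            p[1] = (p[1] - point[1]) / (p[0] - point[0])
            for k in range(2, len(p)):
                p[k] = (p[k] - (k - 1) * p[k - 1]) / (p[0] - point[0])
        # Derivatives at x_n become values for f, per (6).
        new_point = [point[0]] + [point[k + 1] / k for k in range(1, len(point) - 1)]
        if len(new_point) > 1:
            points.append(new_point)
    return coeffs

# The sample data from the Perl program: y(0) = 1, y'(0) = 4,
# y(3) = 4, y'(3) = 6, y''(3) = 4.
coeffs = interpolate([[0, 1, 4], [3, 4, 6, 4]])
print(coeffs == [1, 4, -6, Fraction(22, 9), Fraction(-7, 27)])  # True
```

With the sample data this produces the quartic 1 + 4x - 6x^2 + (22/9)x^3 - (7/27)x^4, which matches all five conditions.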