% This file was created by the WP2LaTeX program version: 2.95
\documentclass[11pt]{article}
\usepackage{wp2latex}
\begin{document}
\baselineskip=2.50ex \nwln
\begin{center}
More Accurate Linear Interpolation For Table Lookup Algorithms
\end{center}
\nwln
\begin{center}
John Forkosh\\
285 Stegman Parkway \#309\\
Jersey City, NJ 07305\\
(201) 860$-$9677
\end{center}
\nwln
\begin{center}
Resume\\
$-$$-$$-$$-$$-$$-$$-$$-$
\end{center}
\hspace*{1.28cm}John Forkosh has an M.S.\ in physics, and for the past ten years has been an
independent consultant working primarily under VAX/VMS in ``C''. He developed the
algorithm discussed in this article while at the Goddard Institute for Space
Studies in New York City, working on the radiative transfer code for a general
circulation model of the earth's atmosphere (used to study climatic change,
particularly due to increased carbon dioxide resulting from burning fossil
fuels).
\nwln
\begin{center}
I. Introduction\\
$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$
\end{center}
\hspace*{1.28cm}Scientific applications often require the very frequent evaluation of
complicated functions. To reduce computational overhead, such a function can
be defined by a table of its values at various fixed points. Some form of
interpolation is then required to estimate values of the function at
intermediate points. Linear interpolation is the quickest procedure; however,
it is usually the least accurate.
\hspace*{1.28cm}A lookup table for a function $f(x)$ is a sequence of numbers $y_i=f(x_i)$,
$i=1\ldots n$. Using linear interpolation, $f(x)$ is approximated by straight line
segments connecting the $y_i$'s. Consider the very simple function $f(x)=x^2$.
Because this function is convex (its second derivative $f''(x)=2$ is always
positive) the straight line segments will always lie above the curve, i.e.,
linearly interpolated values will always be too large.
\hspace*{1.28cm}To avoid such systematic errors, we suggest line segments connecting a
sequence of ``pseudo-values'' $y_i^*$ that are determined by minimizing the sum of the
integrated square errors attributable to each segment. A least-squares
technique is used to determine the appropriate $y_i^*$'s. The result is that the
table does not contain exact values for the function $f(x)$ at the points $x_i$.
However, the mean square error due to sampling random points in the function's
domain is minimized. For instance, for $f(x)=x^2$ on $-10\leq x\leq 10$ with table
values at each integer, the mean square error is reduced by a factor of 6 (the
rms error by a factor of 2.45).
\hspace*{1.28cm}Our discussion is pitched at about the level of Numerical Recipes in C,
Press et al., Cambridge University Press, 1988. It illustrates not only
interpolation (Chapter 3), but also least squares (Chapter 14) and the solution
of linear algebraic equations (Chapter 2). The latter arises because, for a
table with $n$ values, our analysis expresses the $y_i^*$'s as $n$ simultaneous
equations in terms of the $y_i$'s. Complete code is provided so that the reader
can apply the algorithm to any function $f(x)$ whatsoever.
\nwln
\begin{center}
II. Functions of a Single Variable\\
$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$$-$
\end{center}
\hspace*{1.28cm}Interpolation techniques represent a function $y=f(x)$ by a table of values
$y_i=f(x_i)$, $i=1\ldots n$. Between $y_i$ and $y_{i+1}$, standard linear interpolation
approximates $f(x)$ by a line segment, which we'll denote by $u_i(x)=a_ix+b_i$,
$x_i\leq x\leq x_{i+1}$, with $a_i$ and $b_i$ chosen so that $u_i(x_i)=y_i$ and
$u_i(x_{i+1})=y_{i+1}$. This provides exact values of $f(x)$ at the $x_i$'s, but does
not constrain the mean square errors
\begin{equation}
\epsilon_{i} ={\frac{1}{x_{i+1} -x_{i}}} \int_{x_{i}}^{x_{i+1} } {(u_{i} (x)-f(x))^{2} dx}.
\end{equation}
In addition, it gives rise to systematic errors when the sign of $f''(x)$ doesn't
change between points. If $f''>0$ (convex function) then $u-f>0$ for all $x$ in the
interval (and similarly, $u-f<0$ for a concave function). This situation is
illustrated in figure 1a.
\hspace*{1.28cm}Figure 1b illustrates the procedure to be developed. In this case the line
segments $u_i(x)$ connect points $u_i(x_i)=y_i^*$ and $u_i(x_{i+1})=y_{i+1}^*$. The $y_i^*$'s are
determined by a least squares technique that minimizes the sum of the error
terms given in Eq.(1), so that
\begin{equation}
\epsilon = \sum_{i=1}^{n-1}\epsilon_{i}\,;\qquad \frac{\partial\epsilon}{\partial y_{i}^{*}} = 0,\quad i=1\ldots n.
\end{equation}
This gives $n$ equations for the $n$ unknown $y_i^*$'s.
\hspace*{1.28cm}The work between here and Eq.(8) contains the algebra necessary to
formulate these $n$ equations so that we can program their solution. The reader
who doesn't care to follow this derivation can skip to Eq.(8), which contains the
required result. If you do skip there, note that $y_{i\pm\frac{1}{2}}=f(x_i\pm\frac{1}{2}\Delta x)$ denotes the
middle of each $\Delta x$ interval.
\hspace*{1.28cm}To keep the analysis tractable we assume that $x_{i+1}-x_i=\Delta x$ is constant. Then
the line segments can be written
\begin{displaymath}
u_{i}(x) = \frac{1}{\Delta x}\left[ (x_{i+1}-x)\,y_{i}^{*} + (x-x_{i})\,y_{i+1}^{*} \right],\quad x_{i}\leq x\leq x_{i+1},
\end{displaymath}
and the sum of the error terms given by Eqs.(1) and (2) is
\begin{equation}
\epsilon = \sum_{i=1}^{n-1}\frac{1}{\Delta x}\int_{x_{i}}^{x_{i+1}} \left\{ \frac{1}{\Delta x}\left[ (x_{i+1}-x)\,y_{i}^{*} + (x-x_{i})\,y_{i+1}^{*} \right] - f(x) \right\}^{2} dx
\end{equation}
\bigskip
\hspace*{1.28cm}At the interior points $1<i<n$ two terms from the sum in Eq.(3) contain
subscript $i$, so that
\begin{equation}
\frac{\partial\epsilon}{\partial y_{i}^{*}} = \frac{\partial\epsilon_{i-1}}{\partial y_{i}^{*}} + \frac{\partial\epsilon_{i}}{\partial y_{i}^{*}},\quad 1