This glossary provides brief definitions of the basic mathematical terms and notation used in this guide to splines. The terms are not in alphabetical order. Terms and definitions are presented in order such that the explanation of each term only uses terms discussed earlier. This ordering means that, on a first reading, you can read the entire glossary from start to finish as a cohesive introduction to these terms.
Because MATLAB® uses the notation [a,b] to indicate a matrix with the two columns a and b, this guide uses the notation [a .. b] to indicate the closed interval with endpoints a and b. This guide does the same for open and half-open intervals. For example, [a .. b) denotes the half-open interval that includes its left endpoint, a, and excludes its right endpoint, b.
A d-vector is a list of d real numbers, i.e., a point in ℜd. In MATLAB, a d-vector is stored as a matrix of size [1,d], i.e., as a row-vector, or as a matrix of size [d,1], i.e., as a column-vector. In the Curve Fitting Toolbox™ spline functions, vectors are column vectors.
In this toolbox, the term function is used in its mathematical sense, and so describes any rule that associates, to each element of a certain set called its domain, some element in a certain set called its target. Common examples in this toolbox are polynomials and splines. But even a point x in ℜd, i.e., a d-vector, may be thought of as a function, namely the function, with domain the set {1,...,d} and target the real numbers ℜ, that, for i = 1,...,d, associates to i the real number x(i).
The range of a function is the set of its values.
There are scalar-valued, vector-valued, matrix-valued, and ND-valued splines. Scalar-valued functions have the real numbers ℜ (or, more generally, the complex numbers) as their target, while d-vector-valued functions have ℜd as their target; if, more generally, d is a vector of positive integers, then d-valued functions have the d-dimensional real arrays as their target. Curve Fitting Toolbox spline functions can deal with univariate and multivariate functions. The former have some real interval, or, perhaps, all of ℜ as their domain, while m-variate functions have some subset, or perhaps all, of ℜm as their domain.
If f is a bivariate function, and y is some specific value of its second variable, then f(·,y) is the univariate function whose value at x is f(x,y).
In this toolbox, the term function usually refers to a scalar-valued function. A vector-valued function is called here a:
curve if its domain is some interval
surface if its domain is some rectangle
To be sure, to a mathematician, a curve is not a vector-valued function on some interval but, rather, the range of such a (continuous) function, with the function itself being just one of infinitely many possible parameterizations of that curve.
A bivariate tensor product is any weighted sum of products of a function in the first variable with a function in the second variable, i.e., any function of the form

f(x,y) = ∑i ∑j aij gi(x)hj(y)
More generally, an m-variate tensor product is any weighted sum of products g1(x1)g2(x2)...gm(xm) of m univariate functions.
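On a finite grid of sites, such a tensor product can be evaluated with two matrix multiplications. The following sketch, with made-up weights and function tables, illustrates the bivariate case:

```python
import numpy as np

# G[i, p] = g_i(x_p) and H[j, q] = h_j(y_q): two made-up univariate
# functions of each variable, tabulated at two sites each.
G = np.array([[1.0, 2.0],
              [0.5, 1.0]])
H = np.array([[1.0, 0.0],
              [2.0, 3.0]])
A = np.array([[1.0, 0.5],      # the weights a_ij
              [0.0, 2.0]])

# f(x_p, y_q) = sum_ij a_ij g_i(x_p) h_j(y_q), i.e., G' * A * H:
F = G.T @ A @ H
print(F)
```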
A univariate scalar-valued polynomial is specified by the list of its polynomial coefficients. The length of that list is the order of that polynomial, and, in this toolbox, the list is always stored as a row vector. Hence an m-list of polynomials of order k is always stored as a matrix of size [m,k].
The coefficients in a list of polynomial coefficients are listed from "highest" to "lowest", to conform to the MATLAB convention, as in the command polyval(a,x). To recall: assuming that x is a scalar and that a has k entries, this command returns the number

a(1)x^(k–1) + a(2)x^(k–2) + ··· + a(k)

In other words, the command treats the list a as the coefficients in a power form. For reasons of numerical stability, such a coefficient list is treated in this toolbox, more generally, as the coefficients in a shifted, or local, power form, for some given center c. This means that the value of the polynomial at some point x is supplied by the command polyval(a,x-c).
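The two forms can be compared directly. NumPy's polyval follows the same highest-to-lowest coefficient convention as MATLAB's, so the following sketch, with a made-up coefficient list and center, illustrates both:

```python
import numpy as np

# p(x) = 2x^2 - 3x + 1, so the list has k = 3 entries, highest first.
a = [2.0, -3.0, 1.0]
c = 5.0   # made-up center for the shifted (local) power form
x = 6.0

plain = np.polyval(a, x)        # a(1)x^2 + a(2)x + a(3) = 55
shifted = np.polyval(a, x - c)  # same list read as coefficients about c: 0
print(plain, shifted)
```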
A vector-valued polynomial is treated in exactly the same way, except that now each polynomial coefficient is a vector, say a d-vector. Correspondingly, the coefficient list now becomes a matrix of size [d,k].
Multivariate polynomials appear in this toolbox mainly as tensor products. Assuming first, for simplicity, that the polynomial in question is scalar-valued but m-variate, this means that its coefficient "list" a is an m-dimensional array, of size [k1,...,km] say, and its value at some m-vector x is, correspondingly, given by

∑i1 = 1:k1 ··· ∑im = 1:km a(i1,...,im) (x(1) – c(1))^(k1 – i1) ··· (x(m) – c(m))^(km – im)

for some "center" c.
A piecewise-polynomial function refers to a function put together from polynomial pieces. If the function is univariate, then, for some strictly increasing sequence ξ1 < ... < ξl+1, and for i = 1:l, it agrees with some polynomial pi on the interval [ξi .. ξi+1). Outside the interval [ξ1 .. ξl+1), its value is given by its first, respectively its last, polynomial piece. The ξi are its breaks. All the multivariate piecewise-polynomials in this toolbox are tensor products of univariate ones.
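SciPy's PPoly class happens to follow the same conventions described here: per-piece coefficients from highest to lowest in the local power form centered at the left break, and extrapolation by the first and last pieces. The breaks and coefficients below are made up:

```python
import numpy as np
from scipy.interpolate import PPoly

# Two quadratic pieces (order 3) with breaks 0, 1, 2.
breaks = np.array([0.0, 1.0, 2.0])
coefs = np.array([[1.0, -1.0],   # coefficient of (x - break)^2
                  [0.0,  2.0],   # coefficient of (x - break)^1
                  [0.0,  1.0]])  # constant term
pp = PPoly(coefs, breaks)

print(pp(0.5))  # first piece:  0.5^2 = 0.25
print(pp(1.5))  # second piece: -(0.5)^2 + 2*0.5 + 1 = 1.75
print(pp(2.5))  # beyond the last break, the last piece is used: 1.75
```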
In this toolbox, the term B-spline is used in its original meaning only, as given to it by its creator, I. J. Schoenberg, and further amplified in his basic 1966 article with Curry, and used in PGS and many other books on splines. According to Schoenberg, the B-spline with knots tj, ..., tj+k is given by the following somewhat obscure formula (see, e.g., IX(1) in PGS):

B(x | tj,...,tj+k) = (tj+k – tj) [tj,...,tj+k] ( · – x )+^(k–1)

i.e., as a scaled divided difference, over the knots, of the truncated (k–1)st power (t – x)+^(k–1), considered as a function of t. To be sure, this is only one of several reasonable normalizations of the B-spline, but it is the one used in this toolbox. It is chosen so that the B-splines of order k for a given knot sequence form a (nonnegative) partition of unity, i.e., sum to 1 between the kth and the (n+1)st knot.
But, instead of trying to understand the above formula for the B-spline, look at the reference pages for the GUI bspligui for some of the basic properties of the B-spline, and use that GUI to gain some firsthand experience with this intriguing function. Its most important property for the purposes of this toolbox is also the reason Schoenberg used the letter B in its name:
Every space of (univariate) piecewise-polynomials of a given order has a Basis consisting of B-splines (hence the “B” in B-spline).
Consider the set S of all (scalar-valued) piecewise-polynomials of order k with breaks ξ1 < ... < ξl+1 that, for i = 2:l, may have a jump across ξi in their µith derivative but have no jump there in any derivative of lower order. This set is a linear space, in the sense that any scalar multiple of a function in S is again in S, as is the sum of any two functions in S.
Accordingly, S contains a basis (in fact, infinitely many bases), that is, a sequence f1,...,fn such that every f in S can be written uniquely in the form

f = a1f1 + ··· + anfn

for suitable coefficients aj. The number n appearing here is the dimension of the linear space S. The coefficients aj are often referred to as the coordinates of f with respect to this basis.
In particular, according to the Curry-Schoenberg Theorem, our space S has a basis consisting of B-splines, namely the sequence of all B-splines of the form B(·|tj,...,tj+k), j = 1:n, with the knot sequence t obtained from the break sequence ξ and the sequence µ by the following conditions:
Have both ξ1 and ξl + 1 occur in t exactly k times
For each i = 2:l, have ξi occur in t exactly k – µi times
Make sure the sequence is nondecreasing and only contains elements from ξ
Note the correspondence between the multiplicity of a knot and the smoothness of the spline across that knot. In particular, at a simple knot, that is a knot that appears exactly once in the knot sequence, only the (k – 1)st derivative may be discontinuous.
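These properties can also be explored outside the GUI. SciPy's BSpline.basis_element constructs a single B-spline in the same partition-of-unity normalization; the knot sequence below is made up:

```python
from scipy.interpolate import BSpline

# A cubic B-spline (order k = 4, degree 3) with the simple knots 0..4.
b = BSpline.basis_element([0, 1, 2, 3, 4])

# The cardinal cubic B-spline takes the values 1/6, 2/3, 1/6 at its
# three interior knots, and is bell-shaped in between.
print(float(b(1.0)), float(b(2.0)), float(b(3.0)))
```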
A rational spline is any function of the form r(x) = s(x)/w(x), with both s and w splines and, in particular, w a scalar-valued spline, while s is often vector-valued. In this toolbox, there is the additional requirement that s and w be of the same form, and even of the same order, and with the same knot or break sequence. This makes it possible to store the rational spline r as the ordinary spline R whose value at x is the vector [s(x);w(x)].
It is easy to obtain r from R. For example, if v is the value of R at x, then v(1:end-1)/v(end) is the value of r at x. As another example, consider getting derivatives of r from those of R.
Because s = wr, Leibniz' rule tells us that

Dms = ∑j = 0:m (m choose j) Djw Dm–jr

Hence, if v(:,j) contains Dj–1R(x), j = 1:m+1, then

Dmr(x) = ( v(1:end–1,m+1) – ∑j = 1:m (m choose j) v(end,j+1) Dm–jr(x) ) / v(end,1)

provides the value of Dmr(x), with the lower-order derivatives Dm–jr(x) obtained by the same formula, using m–j in place of m.
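The recursion implied by Leibniz' rule can be sketched in code. The following is an illustrative NumPy function, not a toolbox command, and the check values are made up:

```python
import numpy as np
from math import comb

def rational_derivs(V, m):
    """Recover D^0 r(x), ..., D^m r(x) from V[:, j] = D^(j-1) R(x),
    j = 1:m+1, where R = [s; w] and r = s/w, via Leibniz' rule s = w*r."""
    s, w = V[:-1, :], V[-1, :]
    r = np.zeros_like(s, dtype=float)
    for k in range(m + 1):
        acc = s[:, k].astype(float).copy()
        for j in range(1, k + 1):
            acc -= comb(k, j) * w[j] * r[:, k - j]
        r[:, k] = acc / w[0]
    return r

# Made-up check: s(x) = x^2, w(x) = x + 1, evaluated at x = 1.
V = np.array([[1.0, 2.0, 2.0],    # s, Ds, D^2 s at x = 1
              [2.0, 1.0, 0.0]])   # w, Dw, D^2 w at x = 1
print(rational_derivs(V, 2))  # r = 1/2, Dr = 3/4, D^2 r = 1/4
```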
A bivariate thin-plate spline is of the form

f(x) = ∑j = 1:n–3 ψ(x – cj) aj + x(1) an–2 + x(2) an–1 + an

with ψ(x) := φ(∥x∥2), φ(t) = t log t, and ∥y∥ denoting the Euclidean length of the vector y. The sites cj are called the centers, and the radially symmetric function ψ is called the basis function, of this particular stform.
Interpolation is the construction of a function f that matches given data values, yi, at given data sites, xi, in the sense that f(xi) = yi, all i.
The interpolant, f, is usually constructed as the unique function of the form

f = a1f1 + ··· + anfn

that matches the given data, with the functions fj chosen "appropriately". Many considerations might enter that choice. One of these considerations is sure to be that one can match in this way arbitrary data. For example, polynomial interpolation is popular because, for arbitrary n data points (xi,yi) with distinct data sites, there is exactly one polynomial of order n that matches these data. Explicitly, choose the fj in the above "model" to be

fj(x) = ∏i ≠ j (x – xi), j = 1:n,

a polynomial of degree n – 1 for each j. Then fj(xi) = 0 for every i ≠ j, while fj(xj) ≠ 0 as long as the xi are all distinct. Set aj = yj/fj(xj), so that f(xj) = fj(xj)aj = yj for all j.
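This Lagrange-style construction can be sketched directly; the data below are made up (drawn from x² + x + 1):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])   # distinct data sites
y = np.array([1.0, 3.0, 7.0])   # data values, here from x^2 + x + 1

def f(t, j):
    """f_j(t) = product over i != j of (t - x_i): zero at every site but x_j."""
    return np.prod([t - x[i] for i in range(len(x)) if i != j])

a = [y[j] / f(x[j], j) for j in range(len(x))]

def interp(t):
    return sum(a[j] * f(t, j) for j in range(len(x)))

print([float(interp(xi)) for xi in x])   # reproduces [1.0, 3.0, 7.0]
print(float(interp(3.0)))                # 13.0, the value of x^2 + x + 1 at 3
```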
In spline interpolation, one chooses the fj to be the n consecutive B-splines Bj(x) = B(x|tj,...,tj+k), j = 1:n, of order k for some knot sequence t1 ≤ t2 ≤ ... ≤ tn + k. For this choice, there is the following important theorem.
Let x1 < x2 < ... < xn. For arbitrary corresponding values yi, i = 1...n, there exists exactly one spline f of order k with knot sequence tj, j = 1...n+k, so that f(xi) = yi, i = 1...n if and only if the sites satisfy the Schoenberg-Whitney conditions of order k with respect to that knot sequence t, namely
ti ≤ xi ≤ ti+k, i = 1...n,
with equality allowed only if the knot in question has multiplicity k, i.e., appears k times in t. In that case, the spline being constructed may have a jump discontinuity across that knot, and it is its limit from the right or left at that knot that matches the value given there.
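For a quick experiment outside the toolbox, SciPy's make_interp_spline builds such a B-spline interpolant, choosing a knot sequence for which the interpolation problem is uniquely solvable; the sites and values below are made up:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 4, 5)              # strictly increasing sites
y = np.sin(x)                         # made-up data values
spl = make_interp_spline(x, y, k=3)   # cubic (order 4) spline interpolant

print(np.allclose(spl(x), y))         # True: f(x_i) = y_i for all i
```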
In least-squares approximation, the data may be matched only approximately. Specifically, the linear system

f(xi) = ∑j fj(xi) aj = yi,  i = 1:n,

is solved in the least-squares sense. In this, some weighting is involved, i.e., the coefficients aj are determined so as to minimize the error measure

E(f) = ∑i wi |yi – f(xi)|2

for certain nonnegative weights wi at the user's disposal, with the default being to have all these weights the same.
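In matrix terms, this is an ordinary weighted least-squares problem. A sketch with made-up data, using the two-function basis f1(x) = x, f2(x) = 1:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])   # made-up sites
y = np.array([1.1, 1.9, 3.2, 3.9])   # made-up values
w = np.ones_like(x)                  # the default: all weights the same

A = np.vander(x, 2)                  # A[i, j] = f_j(x_i): columns x, 1
sw = np.sqrt(w)
# Minimize sum_i w_i |y_i - f(x_i)|^2 by scaling each row with sqrt(w_i):
coefs, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
print(coefs)  # slope and intercept of the weighted least-squares line
```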
In spline smoothing, one also tries to make such an error measure small, but tries, at the same time, to keep the following roughness measure small:

F(Dmf) = ∫ λ(t) |Dmf(t)|2 dt

with λ a nonnegative weight function that is usually just the constant function 1, and Dmf the mth derivative of f. The competing claims of small E(f) and small F(Dmf) are mediated by a smoothing parameter, for example, by minimizing

ρE(f) + F(Dmf)  or  pE(f) + (1 – p)F(Dmf)

for some choice of ρ or of p, and over all f for which this expression makes sense.
Remarkably, if the roughness weight λ is constant, then the unique minimizer f is a spline of order 2m, with knots only at the data sites, all the interior knots simple, and with its derivatives of orders m,...,2m–2 equal to zero at the two extreme data sites, the so-called "natural" end conditions. The larger the smoothing parameter ρ ≥ 0 or p ∊ [0..1] used, the more closely f matches the given data, and the larger is its mth derivative.
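Outside the toolbox, SciPy's UnivariateSpline provides cubic (order 2m = 4) smoothing splines with a differently parameterized trade-off: its smoothing factor s is an upper bound on the error measure rather than a weight on it. A sketch with made-up data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)   # made-up noisy data

# s = 0 forces interpolation (the closest-match end of the trade-off);
# a larger s permits a larger error E(f) in exchange for smoothness.
tight = UnivariateSpline(x, y, s=0.0)
smooth = UnivariateSpline(x, y, s=1.0)

print(np.allclose(tight(x), y))   # True: the data are matched exactly
```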
For data values yj at sites cj in the plane, one uses instead the error measure and roughness measure

E(f) = ∑j |yj – f(cj)|2,  F(D2f) = ∫ ( |D11f|2 + 2|D12f|2 + |D22f|2 )

and, correspondingly, the minimizer of the sum ρE(f) + F(D2f) is not a polynomial spline, but is a thin-plate spline.
Note that the unique minimizer of ρE(f) + F(D2f) for given 0 < ρ < ∞ is also the unique minimizer of pE(f) + (1 – p)F(D2f) for p = ρ/(1 + ρ) ∊ (0 .. 1), and vice versa.
Terms such as “a 2D problem” or “a 3D problem” are not used in this toolbox, because they are not well defined. For example a 2D problem could be any one of the following:
Points on some curve, where you must construct a spline curve, i.e., a vector-valued spline function of one variable.
Points on the graph of some function, where you must construct a scalar-valued spline function of one variable.
Data sites in the plane, where you must construct a bivariate scalar-valued spline function.
A “3D problem” is similarly ambiguous. It could involve a curve, a surface, a function of three variables, and so on. Better to classify problems by the domain and target of the function(s) to be constructed.
Almost all the spline construction commands in this toolbox can deal with ND-valued data, meaning that the data values are ND-arrays. If d is the size of such an array, then the resulting spline is called d-valued.