
An As-Short-As-Possible Introduction to the Least Squares, Weighted Least Squares and Moving Least Squares Methods for Scattered Data Approximation and Interpolation

Andrew Nealen
Discrete Geometric Modeling Group
TU Darmstadt

Abstract

In this introduction to the Least Squares (LS), Weighted Least Squares (WLS) and Moving Least Squares (MLS) methods, we briefly describe and derive the linear systems of equations for the global least squares, and the weighted, local least squares approximation of function values from scattered data. By scattered data we mean an arbitrary set of points in R^d which carry scalar quantities (i.e. a scalar field in d-dimensional parameter space). In contrast to the global nature of the least-squares fit, the weighted, local approximation is computed either at discrete points, or continuously over the parameter domain, resulting in the global WLS or MLS approximation respectively.

Keywords: Data Approximation, Least Squares (LS), Weighted Least Squares (WLS), Moving Least Squares (MLS), Linear System of Equations, Polynomial Basis

1 LS Approximation

Problem Formulation. Given N points located at positions x_i in R^d, where i ∈ [1 ... N], we wish to obtain a globally defined function f(x) that approximates the given scalar values f_i at the points x_i in the least-squares sense, with the error functional J_LS = ∑_i ||f(x_i) − f_i||^2. Thus, we pose the following minimization problem

    min_{f ∈ Π_m^d} ∑_i ||f(x_i) − f_i||^2,    (1)

where f is taken from Π_m^d, the space of polynomials of total degree m in d spatial dimensions, and can be written as

    f(x) = b(x)^T c = b(x) · c,    (2)

where b(x) = [b_1(x), ..., b_k(x)]^T is the polynomial basis vector and c = [c_1, ..., c_k]^T is the vector of unknown coefficients, which we wish to determine by minimizing (1). Here are some examples of polynomial bases: (a) for m = 2 and d = 2, b(x) = [1, x, y, x^2, xy, y^2]^T, (b) for a linear fit in R^3 (m = 1, d = 3), b(x) = [1, x, y, z]^T, and (c) for fitting a constant in arbitrary dimensions, b(x) = [1]. In general, the number k of elements in b(x) (and therefore in c) is given by k = (d + m)! / (m! d!), see [Levin 1998; Fries and Matthies 2003].

Solution. We can minimize (1) by setting the partial derivatives of the error functional J_LS to zero, i.e. ∇J_LS = 0 where ∇ = [∂/∂c_1, ..., ∂/∂c_k]^T, which is a necessary condition for a minimum. By taking partial derivatives with respect to the unknown coefficients c_1, ..., c_k, we obtain a linear system of equations (LSE) from which we can compute c

    ∂J_LS/∂c_1 = 0 :  ∑_i 2 b_1(x_i) [b(x_i)^T c − f_i] = 0
    ∂J_LS/∂c_2 = 0 :  ∑_i 2 b_2(x_i) [b(x_i)^T c − f_i] = 0
    ...
    ∂J_LS/∂c_k = 0 :  ∑_i 2 b_k(x_i) [b(x_i)^T c − f_i] = 0.

In matrix-vector notation, this can be written as

    ∑_i 2 b(x_i) [b(x_i)^T c − f_i] = 2 ∑_i [b(x_i) b(x_i)^T c − b(x_i) f_i] = 0.

Dividing by the constant and rearranging yields the following LSE

    ∑_i b(x_i) b(x_i)^T c = ∑_i b(x_i) f_i,    (3)

which is solved as

    c = [∑_i b(x_i) b(x_i)^T]^{−1} ∑_i b(x_i) f_i.    (4)

If the square matrix A_LS = ∑_i b(x_i) b(x_i)^T is nonsingular (i.e. det(A_LS) ≠ 0), substituting Eqn. (4) into Eqn. (2) provides the fit function f(x). For small k (k < 5), the matrix inversion in Eqn. (4) can be carried out explicitly; otherwise numerical methods are the preferred tool, see [Press et al. 1992]^1. In our applications, we often use the Template Numerical Toolkit (TNT)^2.

Example. Say our data points live in R^2 and we wish to fit a quadratic, bivariate polynomial, i.e. d = 2, m = 2 and therefore b(x) = [1, x, y, x^2, xy, y^2]^T (see above). Then the resulting LSE looks like this:

        [ 1        x_i        y_i        x_i^2        x_i y_i      y_i^2      ] [ c_1 ]        [ 1       ]
        [ x_i      x_i^2      x_i y_i    x_i^3        x_i^2 y_i    x_i y_i^2  ] [ c_2 ]        [ x_i     ]
    ∑_i [ y_i      x_i y_i    y_i^2      x_i^2 y_i    x_i y_i^2    y_i^3      ] [ c_3 ] = ∑_i  [ y_i     ] f_i.
        [ x_i^2    x_i^3      x_i^2 y_i  x_i^4        x_i^3 y_i    x_i^2 y_i^2] [ c_4 ]        [ x_i^2   ]
        [ x_i y_i  x_i^2 y_i  x_i y_i^2  x_i^3 y_i    x_i^2 y_i^2  x_i y_i^3  ] [ c_5 ]        [ x_i y_i ]
        [ y_i^2    x_i y_i^2  y_i^3      x_i^2 y_i^2  x_i y_i^3    y_i^4      ] [ c_6 ]        [ y_i^2   ]

Consider the set of nine 2D points P_i = {(1,1), (1,−1), (−1,1), (−1,−1), (0,0), (1,0), (−1,0), (0,1), (0,−1)} with two sets of associated function values f_i^1 = {1.0, −0.5, 1.0, 1.0, −1.0, 0.0, 0.0, 0.0, 0.0} and f_i^2 = {1.0, −1.0, 0.0, 0.0, 1.0, 0.0, −1.0, −1.0, 1.0}. Figure 1 shows the fit functions for the scalar fields f_i^1 and f_i^2.

^1 At the time of writing this report, [Press et al. 1992] was available online in PDF format through http://www.nr.com/
^2 http://math.nist.gov/tnt/
Figure 1: Fitting bivariate, quadratic polynomials to 2D scalar fields: the top row shows the two sets of nine data points (see text), the bottom row shows the least squares fit functions. The coefficient vectors [c_1, ..., c_6]^T are [−0.834, −0.25, 0.75, 0.25, 0.375, 0.75]^T (left column) and [0.334, 0.167, 0.0, −0.5, 0.5, 0.0]^T (right column).

Method of Normal Equations. For a different but also very common notation, note that the solution for c in Eqn. (3) solves the following (generally over-constrained) LSE Bc = f in the least-squares sense

    [ b^T(x_1) ]       [ f_1 ]
    [   ...    ] c  =  [ ... ],    (5)
    [ b^T(x_N) ]       [ f_N ]

using the method of normal equations

    B^T B c = B^T f
    c = (B^T B)^{−1} B^T f.    (6)

Please verify that Eqns. (4) and (6) are identical.

2 WLS Approximation

Problem Formulation. In the weighted least squares formulation, we use the error functional J_WLS = ∑_i θ(||x̄ − x_i||) ||f(x_i) − f_i||^2 for a fixed point x̄ ∈ R^d, which we minimize

    min_{f ∈ Π_m^d} ∑_i θ(||x̄ − x_i||) ||f(x_i) − f_i||^2,    (7)

similar to (1), only that now the error is weighted by θ(d), where the d_i are the Euclidean distances between x̄ and the positions of the data points x_i. The unknown coefficients we wish to obtain from the solution to (7) are weighted by distance to x̄ and are therefore a function of x̄. Thus, the local, weighted least squares approximation in x̄ is written as

    f_x̄(x) = b(x)^T c(x̄) = b(x) · c(x̄),    (8)

and is only defined locally within a distance R around x̄, i.e. for ||x − x̄|| < R.

Weighting Function. Many choices for the weighting function θ have been proposed in the literature, such as the Gaussian

    θ(d) = e^(−d^2 / h^2),    (9)

where h is a spacing parameter which can be used to smooth out small features in the data, see [Levin 2003; Alexa et al. 2003]. Another popular weighting function with compact support is the Wendland function [Wendland 1995]

    θ(d) = (1 − d/h)^4 (4d/h + 1).    (10)

This function is well defined on the interval d ∈ [0, h] and, furthermore, θ(0) = 1, θ(h) = 0, θ′(h) = 0 and θ″(h) = 0 (C^2 continuity). Several authors suggest using weighting functions of the form

    θ(d) = 1 / (d^2 + ε^2).    (11)

Note that setting the parameter ε to zero results in a singularity at d = 0, which forces the MLS fit function to interpolate the data, as we will see later.

Solution. Analogous to Section 1, we take partial derivatives of the error functional J_WLS with respect to the unknown coefficients c(x̄)

    ∑_i θ(d_i) 2 b(x_i) [b(x_i)^T c(x̄) − f_i] = 2 ∑_i [θ(d_i) b(x_i) b(x_i)^T c(x̄) − θ(d_i) b(x_i) f_i] = 0,

where d_i = ||x̄ − x_i||. We divide by the constant and rearrange to obtain

    ∑_i θ(d_i) b(x_i) b(x_i)^T c(x̄) = ∑_i θ(d_i) b(x_i) f_i,    (12)

and solve for the coefficients

    c(x̄) = [∑_i θ(d_i) b(x_i) b(x_i)^T]^{−1} ∑_i θ(d_i) b(x_i) f_i.    (13)

Obviously, the only difference between Eqns. (4) and (13) are the weighting terms. Note again, though, that whereas the coefficients c in Eqn. (4) are global, the coefficients c(x̄) are local and need to be recomputed for every x̄. If the square matrix A_WLS = ∑_i θ(d_i) b(x_i) b(x_i)^T (often termed the moment matrix) is nonsingular (i.e. det(A_WLS) ≠ 0), substituting Eqn. (13) into Eqn. (8) provides the fit function f_x̄(x).

Global Approximation using a Partition of Unity (PU). By fitting polynomials at j ∈ [1 ... n] discrete, fixed points x̄_j in the parameter domain Ω, we can assemble a global approximation to our data by ensuring that every point in Ω is covered by at least one approximating polynomial, i.e. the support of the weight functions θ_j centered at the points x̄_j covers Ω

    Ω = ∪_j supp(θ_j).

Proper weighting of these approximations can be achieved by constructing a Partition of Unity (PU) from the θ_j [Shepard 1968]

    φ_j(x) = θ_j(x) / ∑_{k=1}^{n} θ_k(x),    (14)

where ∑_j φ_j(x) ≡ 1 everywhere in Ω. The global approximation then becomes

    f(x) = ∑_j φ_j(x) b(x)^T c(x̄_j).    (15)
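A minimal numerical sketch of Eqns. (12) and (13) (again not from the original report, and again assuming NumPy): the weighted moment matrix is assembled with the Wendland weight of Eqn. (10), and the local coefficients c(x̄) are computed for one given evaluation point x̄.

```python
import numpy as np

def basis(x, y):
    # bivariate quadratic basis b(x) = [1, x, y, x^2, xy, y^2]^T (d = 2, m = 2)
    return np.array([1.0, x, y, x * x, x * y, y * y])

def wendland(d, h):
    # Eqn. (10): compactly supported on [0, h], C^2 at d = h
    return (1.0 - d / h) ** 4 * (4.0 * d / h + 1.0) if d < h else 0.0

def wls_coefficients(xbar, points, values, h):
    # Solve A_WLS c(xbar) = sum_i theta(d_i) b(x_i) f_i  (Eqns. 12 and 13)
    A = np.zeros((6, 6))
    rhs = np.zeros(6)
    for (x, y), f in zip(points, values):
        w = wendland(np.hypot(x - xbar[0], y - xbar[1]), h)
        b = basis(x, y)
        A += w * np.outer(b, b)
        rhs += w * b * f
    # the coefficients are local: recompute for every xbar
    return np.linalg.solve(A, rhs)

# nine points and the function values f^1 from Section 1
P  = [(1, 1), (1, -1), (-1, 1), (-1, -1), (0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
f1 = [1.0, -0.5, 1.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0]

# local fit around the origin; h = 2 keeps all nine points inside the support
c_local = wls_coefficients((0.0, 0.0), P, f1, h=2.0)
```

As a sanity check on the relationship between Eqns. (4) and (13): for h much larger than the point spacing, all weights approach θ(0) = 1 and c(x̄) approaches the global LS solution.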
A Numerical Issue. To avoid numerical instabilities due to possibly large numbers in A_WLS, it can be beneficial to perform the fitting procedure in a local coordinate system relative to x̄, i.e. to shift x̄ into the origin. We therefore rewrite the local fit function in x̄ as

    f_x̄(x) = b(x − x̄)^T c(x̄) = b(x − x̄) · c(x̄),    (16)

the associated coefficients as

    c(x̄) = [∑_i θ(d_i) b(x_i − x̄) b(x_i − x̄)^T]^{−1} ∑_i θ(d_i) b(x_i − x̄) f_i,    (17)

and the global approximation as

    f(x) = ∑_j φ_j(x) b(x − x̄_j)^T c(x̄_j).    (18)

3 MLS Approximation and Interpolation

Method. The MLS method was proposed by Lancaster and Salkauskas [Lancaster and Salkauskas 1981] for smoothing and interpolating data. The idea is to start with a weighted least squares formulation for an arbitrary fixed point x̄ ∈ R^d, see Section 2, and then move this point over the entire parameter domain, where a weighted least squares fit is computed and evaluated for each point individually. It can be shown that the global function f(x), obtained by evaluating the set of local functions

    f(x) = f_x̄(x),    min_{f_x̄ ∈ Π_m^d} ∑_i θ(||x̄ − x_i||) ||f_x̄(x_i) − f_i||^2,    (19)

at x̄ = x, is continuously differentiable if and only if the weighting function is continuously differentiable, see Levin's work [Levin 1998; Levin 2003].

So instead of constructing the global approximation using Eqn. (15), we use Eqns. (8) and (13) (or (16) and (17)) and construct and evaluate a local polynomial fit continuously over the entire domain Ω, resulting in the MLS fit function. As previously hinted at, using (11) as the weighting function with a very small ε assigns weights close to infinity near the input data points, forcing the MLS fit function to interpolate the prescribed function values at these points. Therefore, by varying ε we can directly influence the approximating/interpolating nature of the MLS fit function.

4 Applications

Least Squares, Weighted Least Squares and Moving Least Squares have become widespread and very powerful tools in Computer Graphics. They have been successfully applied to surface reconstruction from points [Alexa et al. 2003] and other point set surface definitions [Amenta and Kil 2004], interpolating and approximating implicit surfaces [Shen et al. 2004], simulating [Belytschko et al. 1996] and animating [Müller et al. 2004] elastoplastic materials, Partition of Unity implicits [Ohtake et al. 2003], and many other research areas.

In [Alexa et al. 2003] a point set, possibly acquired from a 3D scanning device and therefore noisy, is replaced by a representation point set derived from the MLS surface defined by the input point set. This is achieved by down-sampling (i.e. iteratively removing points which contribute little to the shape of the surface) or by up-sampling (i.e. adding points and projecting them onto the MLS surface where point density is low). The projection procedure has recently been augmented and further analyzed in the work of Amenta and Kil [Amenta and Kil 2004]. Shen et al. [Shen et al. 2004] use an MLS formulation to derive implicit functions from polygon soup. Instead of solely using value constraints at points (as shown in this report), they also add value constraints integrated over polygons, as well as normal constraints.

Figure 2: The MLS surface of a point set with varying density (the density is reduced along the vertical axis from top to bottom). The surface is obtained by applying the projection operation described by Alexa et al. [2003]. Image courtesy of Marc Alexa.

References

ALEXA, M., BEHR, J., COHEN-OR, D., FLEISHMAN, S., LEVIN, D., AND SILVA, C. T. 2003. Computing and rendering point set surfaces. IEEE Transactions on Visualization and Computer Graphics 9, 1, 3–15.

AMENTA, N., AND KIL, Y. 2004. Defining point-set surfaces. In Proceedings of ACM SIGGRAPH 2004.

BELYTSCHKO, T., KRONGAUZ, Y., ORGAN, D., FLEMING, M., AND KRYSL, P. 1996. Meshless methods: An overview and recent developments. Computer Methods in Applied Mechanics and Engineering 139, 3, 3–47.

FRIES, T.-P., AND MATTHIES, H. G. 2003. Classification and overview of meshfree methods. Tech. Rep. 2003-03, TU Brunswick, Germany.

LANCASTER, P., AND SALKAUSKAS, K. 1981. Surfaces generated by moving least squares methods. Mathematics of Computation 37, 141–158.

LEVIN, D. 1998. The approximation power of moving least-squares. Mathematics of Computation 67, 224, 1517–1531.

LEVIN, D. 2003. Mesh-independent surface interpolation. To appear in Geometric Modeling for Scientific Visualization, Brunnett, Hamann and Müller (Eds.), Springer-Verlag.

MÜLLER, M., KEISER, R., NEALEN, A., PAULY, M., GROSS, M., AND ALEXA, M. 2004. Point based animation of elastic, plastic and melting objects. In Proceedings of the 2004 ACM SIGGRAPH Symposium on Computer Animation.

OHTAKE, Y., BELYAEV, A., ALEXA, M., TURK, G., AND SEIDEL, H.-P. 2003. Multi-level partition of unity implicits. ACM Transactions on Graphics 22, 3, 463–470.

PRESS, W., TEUKOLSKY, S., VETTERLING, W., AND FLANNERY, B. 1992. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge University Press.

SHEN, C., O'BRIEN, J. F., AND SHEWCHUK, J. R. 2004. Interpolating and approximating implicit surfaces from polygon soup. In Proceedings of ACM SIGGRAPH 2004, ACM Press.

SHEPARD, D. 1968. A two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the ACM National Conference, 517–524.

WENDLAND, H. 1995. Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Advances in Computational Mathematics 4, 389–396.
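To close, the moving fit of Section 3 can be sketched as follows (a minimal illustration assuming NumPy, not code from this report): for every evaluation point, Eqns. (16) and (17) are solved in a coordinate system shifted so that the evaluation point is the origin, using the singular weight of Eqn. (11). Since b(0) = [1, 0, ..., 0]^T, the MLS fit value is simply the constant coefficient of the local fit.

```python
import numpy as np

def basis(u, v):
    # shifted bivariate quadratic basis b(x - xbar)
    return np.array([1.0, u, v, u * u, u * v, v * v])

def theta(d, eps):
    # Eqn. (11): near-singular for small eps, which drives interpolation
    return 1.0 / (d * d + eps * eps)

def mls(x, y, points, values, eps=1e-4):
    # One weighted fit per evaluation point (x, y), in local coordinates
    # relative to (x, y) as in Eqns. (16) and (17).
    A = np.zeros((6, 6))
    rhs = np.zeros(6)
    for (px, py), f in zip(points, values):
        w = theta(np.hypot(px - x, py - y), eps)
        b = basis(px - x, py - y)
        A += w * np.outer(b, b)
        rhs += w * b * f
    c = np.linalg.solve(A, rhs)
    return c[0]  # b(0)^T c(xbar) = c_1, the constant term

P  = [(1, 1), (1, -1), (-1, 1), (-1, -1), (0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
f1 = [1.0, -0.5, 1.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0]

# With a small eps the MLS function (numerically) interpolates the data,
# e.g. mls(0, 0, P, f1) is close to the prescribed value -1.0; with a
# large eps the weights flatten out and the fit approaches the global LS fit.
```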
