Numerical Algorithms
This opening chapter introduces the basic concepts of numerical algorithms and scientific computing.
We begin with a general, brief introduction to the field in Section 1.1. This is followed by the
more substantial Sections 1.2 and 1.3. Section 1.2 discusses the basic errors that may be encountered
when applying numerical algorithms. Section 1.3 is concerned with essential properties of such
algorithms and the appraisal of the results they produce.
We get to the “meat” of the material in later chapters.
1.1 Scientific computing
Scientific computing is a discipline concerned with the development and study of numerical algorithms for solving mathematical problems that arise in various disciplines in science and engineering.
Typically, the starting point is a given mathematical model which has been formulated in
an attempt to explain and understand an observed phenomenon in biology, chemistry, physics, eco-
nomics, or any other scientific or engineering discipline. We will concentrate on those mathematical
models which are continuous (or piecewise continuous) and are difficult or impossible to solve analytically; this is usually the case in practice. Relevant application areas within computer science and
related engineering fields include graphics, vision and motion analysis, image and signal processing,
search engines and data mining, machine learning, and hybrid and embedded systems.
In order to solve such a model approximately on a computer, the continuous or piecewise
continuous problem is approximated by a discrete one. Functions are approximated by finite arrays
of values. Algorithms are then sought which approximately solve the mathematical problem efficiently, accurately, and reliably. This is the heart of scientific computing. Numerical analysis may
be viewed as the theory behind such algorithms.
The next step after devising suitable algorithms is their implementation. This leads to questions involving programming languages, data structures, computing architectures, etc. The big picture is depicted in Figure 1.1.
The set of requirements that good scientific computing algorithms must satisfy, though it may seem elementary and obvious, can actually pose rather difficult and complex practical challenges. The
main purpose of this book is to equip you with basic methods and analysis tools for handling such
challenges as they arise in future endeavors.
As a computing tool, we will be using MATLAB: this is an interactive computer language, which
for our purposes may best be viewed as a convenient problem solving environment.
MATLAB is much more than a language based on simple data arrays; it is truly a complete
environment. Its interactivity and graphics capabilities make it more suitable and convenient in our
context than general-purpose languages such as C++, Java, Scheme, or Fortran 90. In fact,
many of the algorithms that we will learn are already implemented in MATLAB... So why learn
them at all? Because they provide the basis for much more complex tasks, not quite available (that
is to say, not already solved) in MATLAB or anywhere else, which you may encounter in the future.
Rather than producing yet another MATLAB tutorial or introduction in this text (there are
several very good ones available in other texts as well as on the Internet), we will demonstrate the
use of this language on examples as we go along.
Downloaded 01/27/18 to 132.174.254.159. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
1.2 Numerical algorithms and errors
The most fundamental feature of numerical computing is the inevitable presence of error. The result
of any interesting computation (and of many uninteresting ones) is typically only approximate, and
our goal is to ensure that the resulting error is tolerably small.
Relative and absolute errors
There are in general two basic types of measured error. Given a scalar quantity u and its approximation v, the absolute error in v is |u − v|, and the relative error (defined when u ≠ 0) is |u − v| / |u|.
The relative error is usually a more meaningful measure. This is especially true for errors in floating
point representation, a point to which we return in Chapter 2. For example, we record absolute and
relative errors for various hypothetical calculations in the following table:
     u        v      Absolute error   Relative error
     1      0.99         0.01             0.01
     1      1.01         0.01             0.01
  −1.5     −1.2          0.3              0.2
   100     99.99         0.01             0.0001
   100     99            1                0.01
Evidently, when |u|≈1 there is not much difference between absolute and relative error measures.
But when |u| ≫ 1, the relative error is more meaningful. In particular, we expect the approximation
in the last row of the above table to be similar in quality to the one in the first row. This expectation
is borne out by the value of the relative error but is not reflected by the value of the absolute error.
When the approximated value is small in magnitude, things are a little more delicate, and here
is where relative errors may not be so meaningful. But let us not worry about this at this early point.
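Both measures are trivial to compute; the following Python sketch (the arithmetic is the same in MATLAB) reproduces two rows of the table above:

```python
def absolute_error(u, v):
    """Absolute error |u - v| of the approximation v to the quantity u."""
    return abs(u - v)

def relative_error(u, v):
    """Relative error |u - v| / |u|, meaningful when u != 0."""
    return abs(u - v) / abs(u)

# Rows three and four of the table: u = -1.5, v = -1.2 and u = 100, v = 99.99
print(absolute_error(-1.5, -1.2), relative_error(-1.5, -1.2))  # about 0.3 and 0.2
print(absolute_error(100, 99.99), relative_error(100, 99.99))  # about 0.01 and 1e-4
```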
Example 1.1. The Stirling approximation

v = Sn = √(2πn) · (n/e)^n

is used to approximate u = n! = 1 · 2 ··· n for large n. The formula involves the constant e = exp(1) = 2.7182818.... The following MATLAB script computes and displays n! and Sn, as well as their absolute and relative differences, for n = 1, ..., 10:
e=exp(1);
n=1:10;
Sn=sqrt(2*pi*n).*((n/e).^n); % Stirling approximation
fact_n=factorial(n);
abs_err=abs(Sn-fact_n); % absolute error
rel_err=abs_err./fact_n; % relative error
format short g
[n; fact_n; Sn; abs_err; rel_err]' % print out values
Given that this is our first MATLAB script, let us provide a few additional details, though we
hasten to add that we will not make a habit out of this. The commands exp, factorial, and abs
use built-in functions. The command n=1:10 (along with a semicolon, which simply suppresses
screen output) defines an array of length 10 containing the integers 1,2,...,10. This illustrates a
fundamental concept in MATLAB of working with arrays whenever possible. Along with it come
array operations: for example, in the third line “.*” corresponds to elementwise multiplication of
vectors or matrices. Finally, our printing instructions (the last two in the script) are a bit primitive
here, a sacrifice made for the sake of simplicity in this, our first program.
The resulting output is
1 1 0.92214 0.077863 0.077863
2 2 1.919 0.080996 0.040498
3 6 5.8362 0.16379 0.027298
4 24 23.506 0.49382 0.020576
5 120 118.02 1.9808 0.016507
6 720 710.08 9.9218 0.01378
7 5040 4980.4 59.604 0.011826
8 40320 39902 417.6 0.010357
9 3.6288e+005 3.5954e+005 3343.1 0.0092128
10 3.6288e+006 3.5987e+006 30104 0.008296
The values of n! become very large very quickly, and so do the values of the approximation Sn. The absolute errors grow as n grows, but the relative errors stay well behaved and indicate that
in fact the larger n is, the better the quality of the approximation is. Clearly, the relative errors are
much more meaningful as a measure of the quality of this approximation.
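For readers following along in another language, here is a Python sketch equivalent to the MATLAB script above:

```python
import math

# Stirling approximation Sn = sqrt(2*pi*n) * (n/e)^n to n!, for n = 1..10
for n in range(1, 11):
    fact_n = math.factorial(n)
    Sn = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    abs_err = abs(Sn - fact_n)   # absolute error
    rel_err = abs_err / fact_n   # relative error
    print(f"{n:3d} {fact_n:12d} {Sn:14.6g} {abs_err:12.5g} {rel_err:10.5g}")
```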
Error types
Knowing how errors are typically measured, we now move to discuss their source. There are several
types of error that may limit the accuracy of a numerical calculation.
1. Errors in the problem to be solved.
These may be approximation errors in the mathematical model. For instance:
• Heavenly bodies are often approximated by spheres when calculating their properties;
an example here is the approximate calculation of their motion trajectory, attempting to
answer the question (say) whether a particular asteroid will collide with Planet Earth
before 11.12.2016.
• Relatively unimportant chemical reactions are often discarded in complex chemical
modeling in order to obtain a mathematical problem of a manageable size.
It is important to realize, then, that often approximation errors of the type stated above are
deliberately made: the assumption is that simplification of the problem is worthwhile even if
it generates an error in the model. Note, however, that we are still talking about the mathematical model itself; approximation errors related to the numerical solution of the problem
are discussed below.
Another typical source of error in the problem is error in the input data. This may arise, for
instance, from physical measurements, which are never infinitely accurate.
Thus, it may be that after a careful numerical simulation of a given mathematical problem, the
resulting solution would not quite match observations on the phenomenon being examined.
At the level of numerical algorithms, which is the focus of our interest here, there is really
nothing we can do about the above-described errors. Nevertheless, they should be taken into
consideration, for instance, when determining the accuracy (tolerance with respect to the next
two types of error mentioned below) to which the numerical problem should be solved.
2. Approximation errors
Such errors arise when an approximate formula is used in place of the actual function to be
evaluated.
We will often encounter two types of approximation errors:
• Discretization errors arise from discretizations of continuous processes, such as interpolation, differentiation, and integration.
• Convergence errors arise in iterative methods. For instance, nonlinear problems must generally be solved approximately by an iterative process. Such a process would converge to the exact solution in infinitely many iterations, but we cut it off after a finite (hopefully small!) number of such iterations. Iterative methods in fact often arise in linear algebra.
3. Roundoff errors
Any computation with real numbers involves roundoff error. Even when no approximation
error is produced (as in the direct evaluation of a straight line, or the solution by Gaussian
elimination of a linear system of equations), roundoff errors are present. These arise because
of the finite precision representation of real numbers on any computer, which affects both data
representation and computer arithmetic.
Discretization and convergence errors may be assessed by an analysis of the method used,
and we will see a lot of that in this text. Unlike roundoff errors, they have a relatively smooth
structure which may occasionally be exploited. Our basic assumption will be that approximation
errors dominate roundoff errors in magnitude in actual, successful calculations.
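To get a feel for the scale of roundoff, consider this small Python illustration (double precision, the default in MATLAB as well):

```python
# 0.1 has no exact binary floating point representation, so ten copies of it
# do not sum to exactly 1; the discrepancy is pure roundoff error.
s = sum(0.1 for _ in range(10))
print(s == 1.0)       # False
print(abs(s - 1.0))   # a tiny nonzero amount, on the order of 1e-16
```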
Theorem: Taylor Series.
Assume that f(x) has k + 1 derivatives in an interval containing the points x0 and x0 + h. Then

f(x0 + h) = f(x0) + h f'(x0) + (h^2/2) f''(x0) + ··· + (h^k/k!) f^(k)(x0) + (h^(k+1)/(k+1)!) f^(k+1)(ξ),

where ξ is some point between x0 and x0 + h.
Discretization errors in action
Let us show an example that illustrates the behavior of discretization errors.
Example 1.2. Consider the problem of approximating the derivative f'(x0) of a given smooth function f(x) at a point x = x0. For instance, let f(x) = sin(x) be defined on the real line −∞ < x < ∞, and set x0 = 1.2. A natural algorithm is the forward difference approximation

f'(x0) ≈ (f(x0 + h) − f(x0)) / h

for a small step h > 0. By the Taylor Series theorem above, the discretization error of this formula is bounded by (h/2) max |f''|, so we expect it to shrink roughly linearly in h.

In instances such as this we consider errors depending on a step size h > 0 and ask how they decrease as h decreases. In other instances, such as when estimating the efficiency of a particular algorithm, we are interested in a bound on the work estimate as a parameter n increases unboundedly (e.g., n = 1/h).
For an error e depending on h we denote

e = O(h^q)

if there are two positive constants q and C such that |e| ≤ C h^q for all sufficiently small h > 0. Similarly, for w = w(n) the expression

w = O(n log n)

means that there is a constant C > 0 such that

w ≤ C n log n
as n →∞. It will be easy to figure out from the context which of these two meanings is
the relevant one.
The Θ notation signifies a stronger relation than the O notation: a function φ(h) for small h (resp., φ(n) for large n) is Θ(ψ(h)) (resp., Θ(ψ(n))) if φ is asymptotically bounded both above and below by ψ.
For our particular instance, f(x) = sin(x), we have the exact value f'(x0) = cos(1.2) ≈ 0.362358. The difference approximation of f'(x0) using h = 0.1 is not very accurate. We therefore apply the same algorithm using several increasingly smaller values of h. The resulting errors are as follows:
h Absolute error
0.1 4.716676e-2
0.01 4.666196e-3
0.001 4.660799e-4
1.e-4 4.660256e-5
1.e-7 4.619326e-8
Indeed, the error appears to decrease like h. More specifically (and less importantly), using our explicit knowledge of f''(x) = −sin(x), the discretization error bound (h/2) max |f''| evaluates to approximately (h/2) sin(1.2) ≈ 0.466h. The quantity 0.466h is seen to provide a rather accurate estimate for the above-tabulated absolute error values.
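The tabulated errors are easy to reproduce; here is a Python sketch of the same experiment:

```python
import math

# Forward difference approximation of f'(x0) for f(x) = sin(x), x0 = 1.2.
# The exact derivative is cos(1.2); the error should behave like 0.466*h.
x0 = 1.2
exact = math.cos(x0)
for h in [0.1, 0.01, 0.001, 1e-4, 1e-7]:
    approx = (math.sin(x0 + h) - math.sin(x0)) / h
    print(f"h = {h:7.0e}   absolute error = {abs(exact - approx):.6e}")
```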
The damaging effect of roundoff errors
The calculations in Example 1.2, and the ones reported below, were carried out using MATLAB’s
standard arithmetic. Let us stay for a few more moments with the approximation algorithm featured
in that example and try to push the envelope a little further.
Example 1.3. The numbers in Example 1.2 might suggest that an arbitrary accuracy can be achieved by the algorithm, provided only that we take h small enough. Indeed, suppose we want the error

|cos(1.2) − (sin(1.2 + h) − sin(1.2))/h|

to be even smaller than those recorded above. Taking h smaller and smaller, however, eventually stops helping: below a certain threshold the computed difference sin(1.2 + h) − sin(1.2) is dominated by roundoff error, and the total error grows as h decreases further.
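The phenomenon is easy to watch in Python (the MATLAB behavior is analogous):

```python
import math

# For moderate h the discretization error (about 0.466*h) dominates; for tiny h
# the cancellation in sin(x0+h) - sin(x0) amplifies roundoff and the error grows.
# At h = 1e-16 the step is below machine precision: x0 + h rounds back to x0.
x0 = 1.2
exact = math.cos(x0)
for h in [1e-4, 1e-8, 1e-12, 1e-16]:
    approx = (math.sin(x0 + h) - math.sin(x0)) / h
    print(f"h = {h:.0e}   error = {abs(exact - approx):.3e}")
```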
When the error En after n steps of a computation grows like |En| ≈ c^n E0 for some constant c > 1, it represents exponential growth. An algorithm exhibiting relative exponential error growth is unstable. Such algorithms must be avoided!
Example 1.6. Consider evaluating the integrals yn = ∫0^1 x^n/(x + 10) dx for n = 1, 2, ..., 30. Since x^n/(x + 10) = x^(n−1) − 10 x^(n−1)/(x + 10), these values satisfy the recursion yn = 1/n − 10 yn−1, starting from y0 = ln(11/10). Unfortunately, any error in yn−1 is multiplied by 10 in magnitude at each step, so after n steps the initial roundoff error is amplified by a factor of 10^n: precisely the unstable, exponential error growth just described.
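This computation is easily sketched in Python. The recursion yn = 1/n − 10 yn−1 is used here; since the exact values satisfy 0 < yn < 1/(10(n+1)), a huge computed result signals disaster:

```python
import math

# Unstable recursion for y_n = integral_0^1 x^n / (x + 10) dx:
#   y_n = 1/n - 10 * y_{n-1},  y_0 = ln(11/10).
# Each step multiplies the inherited error by -10, so the tiny roundoff
# in y_0 is amplified by a factor of 10^n after n steps.
y = math.log(11.0 / 10.0)
for n in range(1, 31):
    y = 1.0 / n - 10.0 * y
print(y)  # astronomically far from the true y_30, which lies in (0, 1/310)
```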
The proliferation in the early years of the present century of academic centers, institutes, and special programs for scientific computing reflects the coming of age of the discipline in terms of experiments, theory, and simulation. Fast, available computing hardware, powerful programming environments, and the availability of numerical algorithms and software libraries all make scientific
computing an indispensable tool in modern science and engineering. This tool allows for an interplay with experiment and theory. On the one hand, improvements in computing power allow for experimentation and computations on a scale that could not have been imagined just a few years ago.
On the other hand, the great progress in the theoretical understanding of numerical methods, and
the availability of convenient computational testing environments, have given scientists and practitioners the ability to make predictions of, and draw conclusions about, huge-scale phenomena that today's
computers are still not (and may never be) powerful enough to handle.
A potentially surprising amount of attention has been given throughout the years to the definitions of scientific computing and numerical analysis. An interesting account of the evolution of this
seemingly esoteric but nevertheless important issue can be found in Trefethen and Bau [70].
The concept of problem conditioning is both fundamental and tricky to discuss so early in
the game. If you feel a bit lost somewhere around Figures 1.4 and 1.5, then rest assured that these
concepts eventually will become clearer as we gain experience, particularly in the more specific
contexts of Sections 5.8, 8.2, and 14.4.