
Chapter 1

Practical Exercises

Introduction

This document contains the practicum assignments of this course.

From now on, the ‘real’ numerical part is going to start: we consider problems from mathematics,

all of which originate from physics, chemistry and/or engineering, where in general,

we cannot find closed-form solutions. In such cases, we have to use the computer to find an

answer, together with a numerical method. In Matlab, many of those numerical methods

are built in, and we will use them as a tool.

However, there is a problem: whatever you do, and whatever mistakes you make, the

computer will nevertheless give an answer, and that answer may have nothing to do with

the answer you were looking for. That’s why in the exercises you are asked to critically

examine your answers, and adapt your answers accordingly.

Whenever an answer with error is requested, we always want to see the answer in the form

answer = A ± ∆,

such that ∆ has 1 significant digit and the number of digits of A is in accordance with that.

Example:

approximation = 1.234567890 ± 0.000067890 leads to answer = 1.23457 ± 7 · 10^−5.

Indeed, the (wrong!) answers

answer = 1.23457 ± 0.000067890 and answer = 1.234567890 ± 7 · 10^−5

do not look consistent!


1.0 Matlab

1.0.5 Executing a part of a program many times

In programs, you often want to perform certain calculations very often, possibly each time slightly differently. For example, if you want to add up the first million natural numbers. In that case you know how often it has to be done (namely, one million times), but sometimes you don't, and then you use a while (because there is no alternative!); in the first case, you use a for.

While statement

The way of writing a while (Dutch: 'zolang', "as long as") is as follows:

while(expression)

do;

end

As long as expression is true, do is executed. Thus, the following program won't stop (except when pulling the plug, or Ctrl-Alt-Del):

i=1;

while(i>0)

i=i+1;

end

So the while statement is very error-prone; therefore, avoid it if possible.

For statement

The following example hopefully speaks for itself and adds up the first million natural numbers (500000500000): [1]

[1] Using File-Preferences...-Command Window, set the Numeric Format to 'long e' and the Numeric Display to 'loose', and press Apply and OK!! It can also be done with the Matlab command >> format long e.


sum=0;

for i=1:1000000

sum=sum+i;

end

Exercise 1.0.6. Write a function Q(m, n) which computes Σ_{k=m}^{n} 1/k for various m, n.

Now compute z = Q(1, 10^9) − log(10^9) (analytically, Q(1, n) − log(n) → γEuler = 0.5772156649015328606065121 . . . as n → ∞).
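For readers who want to check their Matlab function: the same computation sketched in Python/NumPy, with a smaller n than the exercise's 10^9 (the limit behaviour is already clearly visible):

```python
import numpy as np

def Q(m, n):
    # partial sum of the harmonic series: sum_{k=m}^{n} 1/k
    return np.sum(1.0 / np.arange(m, n + 1))

gamma_euler = 0.5772156649015328606065121
n = 10**6   # the exercise uses 10^9; a smaller n keeps the demo fast
z = Q(1, n) - np.log(n)
print(z)    # close to gamma_euler; the difference behaves like 1/(2n)
```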

1.1 Error analysis

1.1.1 Condition of linear equations

In this exercise, we will look more closely at solving linear equations:

solve x from: A · x = b.

If the matrix is nonsingular, then (mathematically) the system can be solved exactly. On

a computer however, the answer is not exact in general.

It is therefore an important question how good the solution we find is, or, stated in a slightly

different way:

• How sensitive is the answer x under small perturbations of e.g. the right hand side b? [2]

Slightly more mathematically formulated:

• How large is δx := R(b + δb) − R(b) in relation to δb, or: to what extent is the output

perturbed as a result of a perturbation of the input?

If the perturbation of the output is small, relatively speaking, in relation to the perturbation

of the input, also relatively speaking, then we say the problem is well-conditioned. If that

ratio is large, then the problem is said to be ill-conditioned.

We will illustrate some things using the problem above. As matrix A we use a so-called

Hilbert matrix. Hilbert matrices are known for being ill-conditioned. Now we choose

the right hand side vector b such that b = A · (1, 1, . . . , 1)^t, [3] so the exact solution x = (1, 1, . . . , 1)^t is known.

Write a function which, given n,

1. determines the n × n Hilbert matrix A, (help hilb),

2. determines the n × 1 vector x with ones, (help ones),

[2] Indeed, it is almost always the case that b is not exactly known, for example because of round-off errors.

[3] In Matlab, [1, 1, 1] is a row vector, so with size 1 × 3, and [1; 1; 1] a column vector, so with size 3 × 1. Hence, we have that [1, 1, 1] = [1; 1; 1]^t.


3. computes the right hand side b = A · x, so that the exact solution is x = (1, . . . , 1)^t,

4. subsequently determines the actual numerical solution xn: xn = A\b,

5. finally computes the quantities ‖A · xn − b‖ and ‖xn − (1, . . . , 1)^t‖, (help norm).

Complete the following table by calling the function for different values of n. Here, δb := A · xn − b can be interpreted as the perturbation of the input which causes a perturbation δx := xn − (1, . . . , 1)^t of the output, since by linearity:

δx = R(b + δb) − R(b) = A^−1 · δb.

Note that, although the numerical solution xn seems good because ‖A · xn − b‖ ≈ 0 (the type double implies approximately 15 significant digits), we see that, particularly when n > 10, xn itself need not be accurate at all! That is the consequence of an ill-conditioned system.

1.1.2 Accuracy and condition

Sometimes an answer must be given very precisely, such as

answer = 1.234567890123456 · 10^−4,

but sometimes, only the order of magnitude suffices:

answer = 1 · 10^−4.

The number of digits you include depends on the accuracy of the result, or how accurate

you think it is.

In some cases, the computation of a function value can be strongly influenced by:


1. the way a computation is performed, and/or

2. the order of the operations, and/or

3. round-off errors.

We are going to examine this using the computation of the polynomial R(x),

R(x) = 177 − 21x + 3(9x + 550)^2 − 38x^3 − 558x^4 + 2(3x + 434)^5 − x^6

for x = 972 in the following 4 ways:

• Algorithm A: by "completely expanding" every term,
for example: 558x^4 = 558 * x * x * x * x ,
or: 2(3x + 434)^5 = 2 * h * h * h * h * h with h = 3*x + 434.

• Algorithm B: using the standard functions exp( ) and log( ) ('natural logarithm'!),
for example: 558x^4 = 558 e^(4 ln x) = 558 * exp(4 * log(x)) ,
or: 2(3x + 434)^5 = 2 e^(5 ln(3x+434)) = 2 * exp(5 * log(3*x + 434)).

You also have to compute the value of the polynomial by interchanging terms:

• R1(x) = 177 − 21x + 3(9x + 550)^2 − 38x^3 − 558x^4 + 2(3x + 434)^5 − x^6

• R2(x) = −x^6 + 2(3x + 434)^5 − 558x^4 − 38x^3 + 3(9x + 550)^2 − 21x + 177

For every possibility, write a separate function (thus, four functions).

method answer that Matlab gives

C[R1A(972)]

C[R1B(972)]

C[R2A(972)]

C[R2B(972)]

The correct result is R(972) = 1.
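As a check on the four Matlab functions, a Python sketch: exact integer arithmetic confirms R(972) = 1, while the exp/log variant (algorithm B) suffers from cancellation among terms of size about 8 · 10^17:

```python
import math

def R_exact(x):
    # integer arithmetic: no rounding at all, so this is the true value
    return (177 - 21*x + 3*(9*x + 550)**2 - 38*x**3
            - 558*x**4 + 2*(3*x + 434)**5 - x**6)

def R1B(x):
    # algorithm B: powers computed via exp/log, as in the exercise
    h = 3.0*x + 434.0
    return (177.0 - 21.0*x + 3.0*math.exp(2.0*math.log(9.0*x + 550.0))
            - 38.0*math.exp(3.0*math.log(x)) - 558.0*math.exp(4.0*math.log(x))
            + 2.0*math.exp(5.0*math.log(h)) - math.exp(6.0*math.log(x)))

print(R_exact(972))   # 1, the correct result
print(R1B(972))       # far from 1: rounding in terms of size ~8e17
```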


With one of the outcomes, namely C[R . . .(972)], we obtain an estimate of the condition number:

It is given that, on the basis of (??), we find as exact value cP(972) ≈ 1.4 · 10^18.

Do you prefer one of the ways of computation? Yes / no , because . . . . . . . . . . . . . . . . . . . . . .

1.1.3 Problems and algorithms

In this exercise, we want to make clear the difference between well- and ill-conditioned

problems, but also between good and bad algorithms: if you want to compute something,

you can do that in a stupid way, but sometimes also in a (more) intelligent way.

As an example, we are interested in the following integrals

yn := ∫₀¹ x^n/(x + 9) dx, n = 0, 1, 2, . . .

In principle, all these integrals can be determined analytically, but looking at the solution

of the integral in this exercise, you won’t like that (except for a few mathematicians).

We consider the problem P: compute y13, for which we have that (we have calculated that using Maple, not by hand!)

y13 = −2541865828329 log(10) + 5083731656658 log(3) + 10723204268006567/40040.

Of course, we are much more interested in the numerical value of the integral.

Part A

The following trick provides a way to compute y13: we can determine y13 from the recursion

yn = −9 yn−1 + 1/n. (1.1.1)

With n = 1 we can compute y1 from y0; subsequently, using (1.1.1) and n = 2 we can compute y2 as well, etc. Equation (1.1.1) is called the recursion formula.

Hence, this algorithm, which we will call algorithm A, has the following form:

A : yn = −9 yn−1 + 1/n, n = 1, 2, . . . , 13, and y0 = log(10/9). (1.1.2)

If we execute this algorithm on a computer, then we must realise that errors occur immediately

when computing y0 numerically, namely a round-off error. And this error, and all

other round-off errors which we will make thereafter, propagate to the final result. Then

the question is: how strongly will this propagate? Or, in other words, is this algorithm

sufficiently stable or not (well- or ill-conditioned)?

A: Numerical experiment

Write a Matlab program, which executes algorithm A, and write down (in 16 decimals; use

format long e) the results for y0 and y13:

Algorithm A :

Answer (using common sense) the following question: The approximation of y13 is (good /

reasonable / very bad)daa.


In order to 'measure' the stability of the algorithm, we now do the following: we perturb the value of the input (that is, y0) on purpose, and observe the effect on the answer, y13. Execute the same Matlab program with algorithm A, but now with a perturbed starting value y0*, and write down the results again:

Algorithm A

[4] daa = delete as appropriate.


And answer (using common sense) the following question: This y13* as an approximation of y13 is (good / reasonable / very bad)daa.
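A Python sketch of algorithm A and this perturbation experiment (the size of the perturbation, 10^−10, is chosen here for illustration only; the handout leaves the value open):

```python
import math

def forward(y0, nmax=13):
    # algorithm A: y_n = -9*y_{n-1} + 1/n, the forward recursion (1.1.2)
    y = y0
    for n in range(1, nmax + 1):
        y = -9.0*y + 1.0/n
    return y

y13 = forward(math.log(10.0/9.0))
y13_pert = forward(math.log(10.0/9.0) + 1e-10)  # perturbed starting value
print(y13, y13_pert)
# the difference is roughly (-9)**13 * 1e-10, i.e. about -254:
print(y13_pert - y13)
```

The perturbation of 10^−10 in the input is amplified by a factor 9^13 ≈ 2.5 · 10^12 in the output, which is exactly the instability the exercise asks you to observe.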

A: Stability

Using these answers, we can estimate the condition of the algorithm A [5] (and using that, say whether the algorithm is stable):

1. PA is the solution of the problem P when using algorithm A, so y13,

2. x is the input, so here y0,

3. |(y0* − y0)/y0| is the relative error in the input.

From this, the approximation follows:

cA,P ≈ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

And we can go one step further: the y0 = log(10/9) which we have used in algorithm A to compute y13 isn't exact either! The relative error in y0 will have at most the value 11 εC, with machine precision εC ≈ 2.2 · 10^−16, because

C[y0] = log(10/9 · (1 + ε1)) · (1 + ε2) = . . . ≈ y0 + ε1 + y0 ε2, with |εi| ≤ εC.

What can you say about the possible error in y13 as computed using algorithm A?

relative error in y13 ≤ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

and hence,

absolute error in y13 ≤ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Is the value of y13 we have found with this error estimate consistent with the exact value

of y13?

(yes / no)daa , because y13 = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

NB. You can compute the exact value with the function quad in Matlab (see help quad)

using the call:

[5] Actually, we should say: the condition of problem P when we solve the problem using algorithm A.


>> quad(@(x) x.^13./(x+9),...)

A: Stability explained

An absolute error dn−1 in yn−1 propagates to the error in yn during one step of algorithm A; use (1.1.2):

yn + dn = −9(yn−1 + dn−1) + 1/n, so dn = . . . . . . dn−1. (1.1.3)

and hence

d13 = . . . . . . d0. (1.1.4)

And this (is / is not)daa consistent in good approximation with the data:

the true absolute error in y13 = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

and using the above, we find that this should be

the expected absolute error in y13 = d13 = . . . . . . d0 = . . . . . . C = . . . . . . . . . . . . . . . . . .

Part B

Now, we 'turn around' equation (1.1.1) and use it backwards, as algorithm B:

B : yn−1 = (1/n − yn)/9, n = N, N − 1, . . . , 1.

For further investigation of this recursion, we will, assuming a chosen initial value yN, continue the backward recursion down to y0. The result obtained can be compared with the exact value, namely y0 = log(10/9) = 0.10536051565782630122.

B: Numerical experiment

We choose N = 25 and as an approximation of y25, we choose for convenience the value 0.

Write down the following outcomes of the algorithm B:

Algorithm B :

B: Stability

If we start with y25 = 10000, the same correct value for y0 is found (verify it) !!?? It follows

that (the stability of) algorithm B is much better than (that of) algorithm A. Estimate,

just as in the equations (1.1.3)-(1.1.4), the error amplification/reduction when computing

y0 given an absolute error d25 in y25

d0 = . . . . . . d25,

and if the error in y25 approximately equals 10000, it follows that

d0 = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

and this (is / is not)daa consistent with the value found.
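A Python sketch of algorithm B with the two starting values y25 = 0 and y25 = 10000; the start error is divided by 9 at every step, so both runs reach y0 at essentially machine precision:

```python
import math

def backward(N, yN=0.0):
    # algorithm B: y_{n-1} = (1/n - y_n)/9, run down from n = N to n = 1
    y = yN
    for n in range(N, 0, -1):
        y = (1.0/n - y)/9.0
    return y  # approximation of y0

exact = math.log(10.0/9.0)
print(abs(backward(25, 0.0) - exact))      # start error damped by 9**25
print(abs(backward(25, 10000.0) - exact))  # even a huge start error vanishes
```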

What should we choose for N in order to find y13 in 15 significant digits as well, assuming

yN = 0?

N = is sufficient because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Conclusion: It follows from the above that the condition of the problem "compute y13 = ∫₀¹ x^13/(x + 9) dx" (is good / is good, provided that the correct algorithm is used / (strongly) depends on the algorithm that is used)daa (choose one answer!).


1.2 Errors in numerical differentiation

Later in the course, we will repeatedly encounter one of the most practical problems in

numerical mathematics, namely solving ordinary and also partial differential equations

(ODE, PDE).

One of the numerical methods which is very popular, is called discretisation. In that case,

you look for an approximate solution to a differential equation which you only know at

a finite number of points, and not at every point. And because you have to compute

derivatives of functions as well (after all, a DE is an equation which contains derivatives!)

it is necessary to be able to estimate derivatives on the basis of a function which you only

know at a number of points.

We consider the following formula to determine the third derivative of a function f, i.e. the value of f^(3)(x0):

D3(h) = [f(x0 + 2h) − 2f(x0 + h) + 2f(x0 − h) − f(x0 − 2h)] / (2h^3) = f^(3)(x0) + C · h^2 + O(h^4), with C = f^(5)(x0)/4.

From this formula (plus error), at first sight you would say that you have to make h as small as possible. After all, the error O(h^2) then becomes smaller and smaller.

Remarkably (?) enough, this is not correct, but that is because we also make round-off

errors. That’s what we want to investigate in more detail in this exercise.

By means of an error analysis, we want to determine the optimal choice for h. Thereafter,

we verify this analytical result with a simple experiment.

If we compute D3(h), then we make errors. For convenience, in the following error analysis we only consider the round-off error in computing the function values of f, and we make the rough assumption

absolute error in computing f(x) ≈ |f(x0)| εC.

Then it follows that:

|computed D3(h) − f^(3)(x0)| ≈ |C| h^2 + . . . . . . εC

Hence, the error we make is the sum of the last two terms. This is minimised as a function of h for h = . . . . . . . . .

We are going to verify the analytical expressions we have found experimentally. Compute D3(h) as approximation of the value f^(3)(1) of f(x) = e^x for h = (1/2)^j, j = 0, 1, 2, . . . , 20.
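This experiment can be sketched as follows (Python; same h = (1/2)^j values, and f^(3)(1) = e for f(x) = e^x):

```python
import math

def D3(f, x0, h):
    # central difference approximation of the third derivative
    return (f(x0 + 2*h) - 2*f(x0 + h) + 2*f(x0 - h) - f(x0 - 2*h)) / (2*h**3)

exact = math.exp(1.0)  # f'''(1) = e for f(x) = e^x
errors = {}
for j in range(21):
    h = 0.5**j
    errors[h] = abs(D3(math.exp, 1.0, h) - exact)

h_best = min(errors, key=errors.get)
print(h_best, errors[h_best])  # the optimum is an intermediate h, not the smallest
```

For very small h the round-off term dominates, so the error grows again; that is exactly the effect the analysis above predicts.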

The best value of h which follows from the experiment is h ≈ . . . . . . . . . . . . . . .

In this example, according to the error analysis, the best value is h ≈ . . . . . . . . . . . . . . .

Conclusion: the experimental results are consistent with the theoretical values. The minor

deviations are caused by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


1.3 Interpolation

One of the ways to draw a smooth function through discrete points is by means of interpolation.

Usually, polynomials are used for that (but certainly not always!) and for now, we

restrict ourselves to that.

Interpolation means that that smooth function has to pass through the data points; a

natural property, at least if the data doesn’t contain much noise (we will address that

problem in section 1.6).

1.3.1 Polynomial interpolation

In order to investigate the properties of interpolation (e.g. how well the method works, and

what you have to do, but also what you shouldn’t do), we proceed as follows.

We choose a function with certain properties (e.g. a simple function, a smooth function,

or a wild function – we will come back to what this exactly means mathematically later),

and we draw discrete data from that, that is, we take as data the values of the function at

a certain (finite) set of points. Thereafter, we are going to test our method (in this case,

determining a polynomial that passes through those points) to find out how good it is. [6]

And we can also (try to) analyse how we should choose the data points in order to obtain

an optimal result, i.e. that the resulting function lies ‘as close as possible’ to the original

function.

We need the following ingredients:

1. a function from which we draw the data,

2. a grid, i.e. the set of points where we assume the function values are given,

3. a numerical code which computes the polynomial (the interpolant),

4. and subsequently visualises the results – you can read off the error from this.

Instructions for the outline of an M-file (or multiple M-files):

1. write a function f which determines, for given values of x (a vector), the associated

function values (and this function is adapted repeatedly),

[6] In reality, we don't know the function of course, but we do know where the data comes from, which might give us a hint of the function associated with it.


2. we consider two types of grids, namely an equidistant grid

xk = −1 + 2k/n ∈ [−1, 1], k = 0, . . . , n,

and a so-called Chebyshev grid:

xk = −cos((2k + 1)π / (2n + 2)) ∈ [−1, 1], k = 0, . . . , n;

do this in two functions, with argument n, which output a set of points x, y (in those functions, first create a set k=[0:n], and subsequently the grid x and, using that, the function values y; see also the Matlab exercise ??

NB. when using a for-statement to fill the vector x, you should note that every index should be a positive integer, so x(1) = x0, x(2) = x1, etc.)

3. the Matlab code

p = polyfit(x,y,n);

computes the coefficients of the polynomial p that interpolates,

4. and since Matlab works completely discretely, we create another data set xf, finer than

the given data set, to compare the result (polynomial) with the original function, e.g.

xf = -1:0.001:1;

pf = polyval(p,xf);

ff = f(xf);

plot(xf,pf,’o’,xf,ff,’-’)

You can also change the latter such that you see the plot of the difference (how?).

We hope that the programming didn’t take too much effort (surely not – they are just a

few lines) and we can now start to work. We consider data from different functions. One

more notation: ∆pn means the maximum value of the truncation error of the problem, for

given n and f, for the equidistant grid, ∆qn for the Chebyshev grid.
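The whole pipeline (grid, polyfit, polyval, error on a fine grid) can be sketched compactly in Python/NumPy; here for the function of part iii-a below with n = 12, comparing the two grids:

```python
import numpy as np

def max_interp_error(f, grid, n):
    x = grid(n)
    p = np.polyfit(x, f(x), n)         # interpolating polynomial (n+1 points, degree n)
    xf = np.linspace(-1.0, 1.0, 2001)  # fine grid for measuring the error
    return np.max(np.abs(np.polyval(p, xf) - f(xf)))

equi = lambda n: -1.0 + 2.0*np.arange(n + 1)/n
cheb = lambda n: -np.cos((2.0*np.arange(n + 1) + 1.0)*np.pi/(2.0*n + 2.0))

runge = lambda x: 1.0/(25.0*x**2 + 1.0)
dp = max_interp_error(runge, equi, 12)   # Delta p_12
dq = max_interp_error(runge, cheb, 12)   # Delta q_12
print(dp, dq)  # the Chebyshev grid gives a much smaller error
```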

i. Data: f(x) = x^3, x ∈ [−1, 1].

   n  | ∆pn
   1  |
   2  |
   3  |

The program works well because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

The interpolation polynomials for n = 1 and n = 2 are identical since (use symmetry!)

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

ii. Data: f(x) = e^x (= exp(x)), x ∈ [−1, 1].

   n  | ∆pn | ∆qn
   4  |     |
   8  |     |
   12 |     |

The truncation errors are consistent with the theory, because it should hold that (in

formulas):

∆pn ≤ . . . . . . . . . and ∆qn ≤ . . . . . . . . .

iii-a. Data: f(x) = 1/(25x^2 + 1), x ∈ [−1, 1].

   n  | ∆pn | ∆qn
   4  |     |
   12 |     |
   20 |     |

iii-b. Data: f(x) = 1/(x^2 + 1), x ∈ [−1, 1].

   n  | ∆pn | ∆qn
   4  |     |
   12 |     |
   20 |     |

The truncation errors in iii-b are significantly smaller than those in iii-a. Can you

give an explanation for that?

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


The graphs of the difference functions show an oscillating behaviour in all cases (i,ii and

iii). Nonetheless, there is a remarkable difference between graphs based on an equidistant

grid on the one hand, and those based on a Chebyshev grid on the other hand, namely

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1.3.2 Interpolation with rational functions

We consider the interpolation problem once more (and we also use the notation from the

previous section and the M-file(s), but make sure you make a backup of the programs).

Even if a function behaves ‘neatly’ on the interpolation interval, then that is not a guarantee

for a good interpolation (with a small truncation error). For example, a singular point

(a “pole”: at that point, the function has an asymptote) just outside the interpolation interval

has quite some influence on the truncation error. We illustrate this using the function tan x

(with an asymptote at x = π/2).

We only use an equidistant grid (with interpolation polynomial pn and truncation error

∆pn). We now take [1, 1.5] as interpolation interval (so, not [−1, 1]: adjustments?), so that

the point π/2 = 1.57 . . . lies outside, but very close to the interval.

Data: f(x) = tan x, x ∈ [1, 1.5].

   n  | ∆pn
   6  |
   9  |
   12 |

It is (has become) clear that the interpolation of tan x on [1,1.5] is being influenced quite

strongly by the singular point at π/2, even for n = 12. Can we improve this?

Instead of approximating the data set tan xi, we do the following: we consider the data

(xi − π/2) tan xi, xi ∈ [1, 1.5],

or in other words, we interpolate data drawn from this function

g(x) = (x − π/2) tan x

This function doesn’t have a singularity anymore at the point π/2, and therefore this data

can be interpolated more easily.


It seems a bit silly to approximate data which we cannot approximate well by changing the

data itself, but it is not:

1. we draw data from g(x) = (x − π/2) tan x.

2. we interpolate that data just as before; this yields a polynomial p(x).

3. we divide this polynomial again by x − π/2:

r(x) = p(x) / (x − π/2).

We now consider this function to be an approximation of the data drawn from f(x) =

tan x. This function has the form of a polynomial divided by a polynomial, and for

that reason we cannot speak of polynomial interpolation anymore.

Indeed, a function of this type is called a rational function.
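The three steps above can be sketched in Python/NumPy; n = 9 here is an assumed sample value from the table:

```python
import numpy as np

def max_error(approx, f, a, b):
    xf = np.linspace(a, b, 2001)
    return np.max(np.abs(approx(xf) - f(xf)))

n = 9
x = np.linspace(1.0, 1.5, n + 1)   # equidistant grid on [1, 1.5]

# direct polynomial interpolation of tan x
p_direct = np.polyfit(x, np.tan(x), n)
err_direct = max_error(lambda t: np.polyval(p_direct, t), np.tan, 1.0, 1.5)

# interpolate g(x) = (x - pi/2) tan x, then divide by (x - pi/2)
g = lambda t: (t - np.pi/2)*np.tan(t)
p = np.polyfit(x, g(x), n)
r = lambda t: np.polyval(p, t)/(t - np.pi/2)
err_rational = max_error(r, np.tan, 1.0, 1.5)

print(err_direct, err_rational)  # the rational approximation is far more accurate
```

Note that g has no pole near [1, 1.5], so its interpolant converges fast; dividing by x − π/2 afterwards costs at most a modest factor, since |x − π/2| is bounded away from zero on the interval.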

Show that r(x) interpolates the original data drawn from f:

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Adjust your program such that you compute r(x) and subsequently, complete the following table: ∆rn now means the absolute error of r(x) against f(x).

Data: f(x) = tan x, x ∈ [1, 1.5].

   n  | ∆rn
   6  |
   9  |
   12 |

If everything went well, you observe better convergence behaviour! What you should notice

(remember for the future) is that polynomial interpolation is not the Holy Grail. On the

other hand: here, we knew that the data was drawn from a function with a singularity

nearby, and we could adjust our method accordingly. In reality, however, you often don't know that in advance.


1.4 Extrapolation

Extrapolation is a simple technique to obtain better results with ‘little effort’, and to obtain

error estimates. Because this technique can be applied to many problems, we will encounter

extrapolation in the next sessions as well, but then as a part of a problem! In the problem

below, extrapolation is the central element.

Compute xn for the following values of n = 1, . . . , 1024:

Do this with the help of the following questions:

1. we have that x1 = . . . . . .

2. the relation between xn and xn−1 is the following

xn = Cn xn−1 with Cn = . . . . . .

3. Why is it more efficient to use these two relations instead of the definition (1.4.1)?

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

4. A few values that you have found:

Of course, it is impossible to compute the limit x∞ explicitly on a computer, and in this

case you see from the numbers that the solution is not yet in sight. Therefore, we first

investigate the kind of convergence.


5. In order to estimate how fast the series converges, we consider, just as in the lecture

notes, the following quotients:

Numerically, we find from the previous table:

Both quotients are so-called convergence factors. We conclude from the values in the last table that the process converges as (1/n / 1/n^2 / 1/n^3)daa. In other words: the convergence is (linear/quadratic/cubic)daa as a function of n.

6. On the basis of the result above we can perform another extrapolation: we can approximate

x∞ better with the following linear combination:

yn = . . . . . . xn − . . . . . . xn/2

7. Using this last number, we can extrapolate another time:

x∞ ≈ . . . . . . y512 − . . . . . . y256 = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

but also an estimate of the error: the final answer (which is familiar to you, hopefully)

x∞ = . . . . . . . . . . . . . . . . . . . . . . . . ± . . . . . . . . . . . .


1.5 Boundary Value Problems

1.5.1 A simple example

In example ??, the boundary value problem

y′′(x) = x · y(x) − 5, 0 ≤ x ≤ 1, y(0) = 0, y(1) = 0,

has been discussed extensively. Application of the finite difference method led to a system

of linear equations A · y = r.

The solution of this system (together with the two boundary conditions) is a numerical

approximation of the exact solution y(x), x ∈ [0, 1], of the boundary value problem, which

is plotted in figure ??.

It is now intended that you devise a strategy yourself for the question: give the best

possible approximation of the solution in the middle of the interval, including a reliable

error estimate.

Instructions:

- fill A and r in Matlab, and determine the value of the solution in the middle of the interval, i.e. Pn = y_{n/2} (the approximation at x = 0.5).

- for given n, define the matrix A as follows:

A = sparse(n-1,n-1);

A(1,1) = ....;

..... % set the nonzero elements of A (for loop?)

A(n-1,n-1) = ....;

full(A) % check: take for example n = 10 so that h = 1/10
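A Python sketch of the finite-difference computation and the order check (the exact discretisation of example ?? is not reproduced here, so the sign conventions below are an assumption; they follow the standard central-difference scheme for y′′ = x·y − 5):

```python
import numpy as np

def midpoint_value(n):
    # central differences for y'' = x*y - 5, y(0) = y(1) = 0:
    # (y[i-1] - 2*y[i] + y[i+1])/h^2 = x_i*y[i] - 5 at x_i = i*h, i = 1..n-1
    h = 1.0/n
    xi = h*np.arange(1, n)
    A = (np.diag(-2.0 - h**2*xi)
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1))
    r = -5.0*h**2*np.ones(n - 1)
    y = np.linalg.solve(A, r)
    return y[n//2 - 1]          # P_n = y_{n/2}, the value at x = 0.5 (n even)

P8, P16, P32 = midpoint_value(8), midpoint_value(16), midpoint_value(32)
ratio = (P8 - P16)/(P16 - P32)
print(P16, ratio)  # a ratio near 4 reveals a second-order method
```

Halving h divides a second-order error by 4, so the ratio of successive differences estimates the order; the same reasoning gives the Richardson-extrapolated best value and error estimate asked for below.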

Now complete the following:

I can determine the order of this method by e.g. carrying out the experiment for the following

values of n ≤ 128: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Therefore I have run the program for the following values of n ≤ 128, with corresponding

results:

n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

The best answer, resulting from these measurements, is:

y(0.5) = . . . . . . . . . . . . . . . . . . ± . . . . . . . . .

Both those values are obtained as follows:

The order of the method equals . . ., because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Therefore the best value is determined from . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

with error estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1.5.2 The “buckling” problem

Today, we are a trainee research assistant (AIO) at prof. Van Huetink's group (UT, mechanics, strength of materials). In order to get trained, we have to do a first calculation on the "buckling" problem (with thanks to Kees Vuik, TUD).

We consider a rod with length L, width b and height d. It is clamped to the wall at the left

end (x = 0), and at the other end (x = L), pressure S is applied to the rod. The rod will

also bend by gravity.

We want to know how the rod behaves for different values of S, i.e. how the deflection y

depends on the place x under the influence of the pressure S.

We can formulate the problem mathematically as a boundary value problem; the clamping at x = 0 gives y(0) = y′(0) = 0. E is the elastic modulus of the material, K the moment of inertia, and the term with q is responsible for the gravity, where ρ is the density and g the gravitational constant.
