
# 1 Numerical Derivatives

For simple numerical first derivatives, I recommend using the central difference formula. You can derive this by Taylor series expansion,

$$f(x \pm h) = f(x) \pm h\,f'(x) + \frac{h^2}{2} f''(x) \pm \frac{h^3}{6} f'''(x) + O(h^4) \; . \tag{1}$$

Subtracting the two equations with the two different signs for h gives

$$f(x+h) - f(x-h) = 2h\,f'(x) + O(h^3) \; , \tag{2}$$

and solving for $f'(x)$,

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2) \; . \tag{3}$$

To use this formula, we want to pick $h$ small enough that the order $h^2$ error term can be ignored. However, we also have to watch out for roundoff error. Today's computers use 32- or 64-bit numbers. A single-precision 32-bit floating point number has about 8 decimal digits of accuracy, and a 64-bit double-precision floating point number about 16. Let's see what happens if we evaluate the derivative of $\sin(x)$ at $x = \pi/4$ in single and double precision. The results are shown in Table 1.

| $h$ | $f'(x) - \sqrt{2}/2$, single precision | $f'(x) - \sqrt{2}/2$, double precision |
|---|---|---|
| $10^{-1}$ | $-1.2 \times 10^{-3}$ | $-1.2 \times 10^{-3}$ |
| $10^{-2}$ | $-1.1 \times 10^{-5}$ | $-1.2 \times 10^{-5}$ |
| $10^{-3}$ | $1.3 \times 10^{-5}$ | $-1.2 \times 10^{-7}$ |
| $10^{-4}$ | $1.0 \times 10^{-4}$ | $-1.2 \times 10^{-9}$ |
| $10^{-5}$ | $-7.9 \times 10^{-4}$ | $-1.3 \times 10^{-11}$ |
| $10^{-6}$ | $-2.2 \times 10^{-2}$ | $-4.5 \times 10^{-11}$ |
| $10^{-7}$ | $1.9 \times 10^{-1}$ | $-6.2 \times 10^{-10}$ |

Table 1: Error in the central difference estimate of the derivative of $\sin(x)$ at $x = \pi/4$.
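A short script along these lines reproduces the experiment; this is a minimal sketch assuming NumPy, with `float32` and `float64` standing in for single and double precision (the exact digits you get may differ slightly by platform and library).

```python
import numpy as np

def central_diff(f, x, h):
    """Central difference estimate of f'(x), Eq. (3)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Evaluate d/dx sin(x) at x = pi/4; the exact answer is cos(pi/4) = sqrt(2)/2.
x64 = np.float64(np.pi / 4)
x32 = np.float32(x64)
exact = np.sqrt(2.0) / 2

for k in range(1, 8):
    h64 = np.float64(10.0 ** -k)
    h32 = np.float32(h64)
    err32 = float(central_diff(np.sin, x32, h32) - exact)  # single precision
    err64 = float(central_diff(np.sin, x64, h64) - exact)  # double precision
    print(f"h=1e-{k}: single {err32:+.1e}, double {err64:+.1e}")
```

Running this traces out the same pattern as Table 1: the error first shrinks like $h^2$, then grows again once roundoff dominates.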

Notice that the results at first get better as $h$ is decreased, but the single-precision result starts to get worse around $h = 10^{-3}$, and the double-precision result around $h = 10^{-6}$. We can understand this from the order $h^2$ truncation error and the roundoff error from finite precision. First, notice that until the error starts to grow, it is reduced by a factor of 100 for every factor-of-10 decrease in $h$, just as our central difference formula predicts. In addition, each function value carries a roundoff error of about $r f(x)$, where $r$ is about $10^{-7}$ for single precision and $10^{-16}$ for double precision. The subtraction of our two values will still have an error of this size. Therefore, including roundoff error, we have

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O\!\left(\frac{r f(x)}{h}\right) + O(h^2) \; . \tag{4}$$

Therefore our error has an additional $r/h$ term from roundoff, which grows as $h$ is made smaller. If we assume, correctly for our case, that the constant of proportionality of the $O(h^2)$ term (which is of course approximately proportional to the third derivative) and the value of our function are both roughly unity, then the error is approximately

$$\mathrm{Error} \approx \frac{r}{h} + h^2 \; , \tag{5}$$

which is minimized (ignoring factors of 2 at this point) when $h$ is about $r^{1/3}$, giving an error of roughly $r^{2/3}$. For single precision, this means a minimum error at $h \approx 10^{-8/3}$ of about $10^{-5}$; for double precision, a minimum error at $h \approx 10^{-16/3}$ of about $10^{-11}$, more or less as we found.
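We can check this balance numerically. The sketch below simply evaluates the model error of Eq. (5) on a grid of $h$ values; the grid range and resolution are arbitrary illustrative choices.

```python
import numpy as np

def total_error(h, r):
    """Model error of Eq. (5): roundoff r/h plus truncation h**2."""
    return r / h + h ** 2

r = 1e-16  # roundoff scale for double precision
hs = np.logspace(-10, -1, 1000)          # trial step sizes
best = hs[np.argmin(total_error(hs, r))]  # h that minimizes the model error
print(best, r ** (1 / 3))                 # both near 10**(-16/3)
```

The grid minimum lands near $r^{1/3} \approx 5 \times 10^{-6}$, matching the analysis (up to the ignored factor of 2).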

The key point to this exercise is that you do not get to turn off your brain when you start the computer.

With this in hand, let's calculate the gradient of a scalar function $\phi(\vec{r})$. We first assume that it can be written in Cartesian coordinates as $\phi(x,y,z)$. The gradient at the point $(x_0,y_0,z_0)$ is

$$\vec{\nabla}\phi(x_0,y_0,z_0) = \frac{\phi(x_0+h,y_0,z_0)-\phi(x_0-h,y_0,z_0)}{2h}\,\hat{x} + \frac{\phi(x_0,y_0+h,z_0)-\phi(x_0,y_0-h,z_0)}{2h}\,\hat{y} + \frac{\phi(x_0,y_0,z_0+h)-\phi(x_0,y_0,z_0-h)}{2h}\,\hat{z} + O(h^2) \; . \tag{6}$$
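A minimal sketch of Eq. (6), assuming NumPy; the step size and the test function $\phi = xy + z^2$ are my own illustrative choices.

```python
import numpy as np

def gradient(phi, p, h=1e-5):
    """Central-difference gradient of a scalar field phi at point p, Eq. (6)."""
    p = np.asarray(p, dtype=float)
    g = np.zeros(3)
    for i in range(3):
        step = np.zeros(3)
        step[i] = h            # displace only the i-th coordinate by +/- h
        g[i] = (phi(p + step) - phi(p - step)) / (2 * h)
    return g

# Example: phi = x*y + z**2, whose gradient is (y, x, 2z).
phi = lambda p: p[0] * p[1] + p[2] ** 2
print(gradient(phi, [1.0, 2.0, 3.0]))  # ~ [2, 1, 6]
```

Since the test function is a polynomial of degree two, the central difference is exact here up to roundoff, which makes it a convenient sanity check.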

We can also use these sorts of formulas to calculate the divergence and curl of a vector field $\vec{V}(\vec{r})$, again assuming that both the field components and the coordinates are Cartesian,

$$\vec{\nabla}\cdot\vec{V}(x_0,y_0,z_0) = \frac{V_x(x_0+h,y_0,z_0)-V_x(x_0-h,y_0,z_0)}{2h} + \frac{V_y(x_0,y_0+h,z_0)-V_y(x_0,y_0-h,z_0)}{2h} + \frac{V_z(x_0,y_0,z_0+h)-V_z(x_0,y_0,z_0-h)}{2h} + O(h^2) \; , \tag{7}$$

$$\begin{aligned}
\vec{\nabla}\times\vec{V}(x_0,y_0,z_0) ={}& \left[\frac{V_z(x_0,y_0+h,z_0)-V_z(x_0,y_0-h,z_0)}{2h} - \frac{V_y(x_0,y_0,z_0+h)-V_y(x_0,y_0,z_0-h)}{2h}\right]\hat{x} \\
&+ \left[\frac{V_x(x_0,y_0,z_0+h)-V_x(x_0,y_0,z_0-h)}{2h} - \frac{V_z(x_0+h,y_0,z_0)-V_z(x_0-h,y_0,z_0)}{2h}\right]\hat{y} \\
&+ \left[\frac{V_y(x_0+h,y_0,z_0)-V_y(x_0-h,y_0,z_0)}{2h} - \frac{V_x(x_0,y_0+h,z_0)-V_x(x_0,y_0-h,z_0)}{2h}\right]\hat{z} + O(h^2) \; . \tag{8}
\end{aligned}$$
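Eqs. (7) and (8) can be sketched as follows; this assumes NumPy, and the step size and the test field $\vec{V} = (-y, x, 0)$ are illustrative choices (for a linear field the central differences are exact up to roundoff).

```python
import numpy as np

H = 1e-5  # assumed step size; see the error discussion above

def _partial(f, p, i, h=H):
    """Central-difference partial derivative of scalar f along axis i."""
    step = np.zeros(3)
    step[i] = h
    return (f(p + step) - f(p - step)) / (2 * h)

def divergence(V, p, h=H):
    """Eq. (7): dVx/dx + dVy/dy + dVz/dz."""
    p = np.asarray(p, dtype=float)
    return sum(_partial(lambda q, i=i: V(q)[i], p, i, h) for i in range(3))

def curl(V, p, h=H):
    """Eq. (8): central-difference curl, component by component."""
    p = np.asarray(p, dtype=float)
    d = lambda comp, axis: _partial(lambda q: V(q)[comp], p, axis, h)
    return np.array([d(2, 1) - d(1, 2),
                     d(0, 2) - d(2, 0),
                     d(1, 0) - d(0, 1)])

# Example field V = (-y, x, 0): divergence 0, curl (0, 0, 2).
V = lambda p: np.array([-p[1], p[0], 0.0])
print(divergence(V, [1.0, 2.0, 3.0]), curl(V, [1.0, 2.0, 3.0]))
```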

Second derivatives can be calculated by applying the first derivative formulas twice, or more directly by using the central second difference formula. To derive it from the Taylor series, simply add rather than subtract the two Taylor series of Eq. 1,

$$f(x+h) + f(x-h) = 2f(x) + h^2 f''(x) + O(h^4) \; , \tag{9}$$

and solving for the second derivative,

$$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2) \; . \tag{10}$$
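As a quick check of Eq. (10), here is a minimal sketch assuming NumPy; $\sin(x)$ at $x = \pi/4$ is an illustrative test case, with $f''(x) = -\sin(x)$ as the exact answer.

```python
import numpy as np

def second_derivative(f, x, h=1e-4):
    """Central second difference, Eq. (10)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

x = np.pi / 4
print(second_derivative(np.sin, x), -np.sin(x))  # agree to ~8 digits
```

Note the choice $h = 10^{-4}$: as discussed below, that is roughly the optimal step for this formula in double precision.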

Notice that in this case, if all the variables are around unity, we expect the error including roundoff to go like

$$\mathrm{Error} \approx \frac{r}{h^2} + h^2 \; . \tag{11}$$

Notice here that the optimal value of $h$ is about $r^{1/4}$ and the error is about $r^{1/2}$. So you can't expect much better than half machine precision for second derivatives calculated this way, and $h$ should be chosen to be roughly $10^{-4}$ for 64-bit (double-precision) numbers using this formula, if your values and scales are around unity.

The Laplacian of a function can now be calculated as

$$\nabla^2\phi(x_0,y_0,z_0) = \frac{\phi(x_0+h,y_0,z_0) - 2\phi(x_0,y_0,z_0) + \phi(x_0-h,y_0,z_0)}{h^2} + \frac{\phi(x_0,y_0+h,z_0) - 2\phi(x_0,y_0,z_0) + \phi(x_0,y_0-h,z_0)}{h^2} + \frac{\phi(x_0,y_0,z_0+h) - 2\phi(x_0,y_0,z_0) + \phi(x_0,y_0,z_0-h)}{h^2} + O(h^2) \; . \tag{12}$$
You should notice that the second difference Laplacian is proportional to the difference between the average of the function at the six points given by changing $x$, $y$, or $z$ by $\pm h$, and the value of the function at the original point. If a function is known on the points of a cubic lattice, then the second difference Laplacian is proportional to the difference between the average of the function on its six nearest-neighbor lattice points and the value at the central point.
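The six-point stencil of Eq. (12) can be sketched as below; this assumes NumPy, and the test function $\phi = x^2 + y^2 + z^2$ (whose Laplacian is 6 everywhere) is an illustrative choice.

```python
import numpy as np

def laplacian(phi, p, h=1e-4):
    """Eq. (12): six-point second-difference Laplacian of phi at point p."""
    p = np.asarray(p, dtype=float)
    total = 0.0
    for i in range(3):
        step = np.zeros(3)
        step[i] = h
        # Second difference along axis i; summing over i gives the Laplacian.
        total += phi(p + step) - 2 * phi(p) + phi(p - step)
    return total / h ** 2

phi = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2  # Laplacian is 6 everywhere
print(laplacian(phi, [1.0, 2.0, 3.0]))  # ~ 6
```

Equivalently, `total / h**2` is $6/h^2$ times (the average of $\phi$ over the six neighbor points minus $\phi$ at the center), which is the nearest-neighbor picture described above.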

We can also use our first derivative formulas twice. If we calculate the Laplacian by taking the divergence of the gradient, both calculated numerically from our central difference formulas above, we get the formula of Eq. 12 with h replaced by 2h . Calculating the divergence of the curl by substituting Eq. 8 into Eq. 7, shows that the central difference formulas evaluate to zero identically for this operation.
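The vanishing of the divergence of the curl can also be verified numerically. The sketch below nests the central-difference operators of Eqs. (7) and (8); because the difference operators along different axes commute exactly, the result is zero up to roundoff even at finite $h$. NumPy, the step size, and the smooth test field are all illustrative assumptions.

```python
import numpy as np

H = 1e-3  # assumed step size

def partial(f, p, i, h=H):
    """Central-difference partial derivative of scalar f along axis i."""
    step = np.zeros(3)
    step[i] = h
    return (f(p + step) - f(p - step)) / (2 * h)

def curl(V, p, h=H):
    """Eq. (8): central-difference curl."""
    p = np.asarray(p, dtype=float)
    d = lambda comp, axis: partial(lambda q: V(q)[comp], p, axis, h)
    return np.array([d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)])

def div(W, p, h=H):
    """Eq. (7): central-difference divergence."""
    p = np.asarray(p, dtype=float)
    return sum(partial(lambda q, i=i: W(q)[i], p, i, h) for i in range(3))

# A generic smooth field with nonzero curl; div(curl V) should still vanish.
V = lambda p: np.array([np.sin(p[1] * p[2]),
                        p[0] ** 2 * p[2],
                        np.cos(p[0]) + p[1] ** 3])
print(div(lambda q: curl(V, q), np.array([0.3, 0.7, 1.1])))  # zero up to roundoff
```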
