Fri - December 2, 2005

Lecture 36. 


There was no lecture today. A handful of people showed up and we discussed a diverse range of topics, some related to the course and others not related to it at all.

I will be around during the exam period. You may arrange for an appointment (by e-mail) if you have any questions.
I will hold an extra review class on Friday, December 16, at 10:30. It will be held in AB-2, which is our normal classroom.

Posted at 11:43 AM

Thu - December 1, 2005

Lecture 35. 


Today we went over both the solutions to HW-12 and the practice final. There is no lecture planned for tomorrow; however, I will come to class to answer questions if there are any. There will be a pre-final Q&A period to be held on Friday, Dec. 16 at 10:30. Stay tuned to this Web log for further details.

Posted at 10:33 AM

Tue - November 29, 2005

Lecture 34. 


I began by reviewing the normal equations used to fit a line to a set of data points. I then used concise matrix notation to represent the line-fitting equations. I pointed out how the matrix version easily generalizes to any linear combination of functions, and in particular to polynomial functions. I wrote out the underlying matrix for fitting a degree k polynomial to a set of data points. The matrix we get is a Vandermonde matrix, and we know those are likely to be ill-conditioned. Thus I mentioned in passing that there are other techniques for solving least squares curve fitting problems that are better conditioned.
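
Here is a minimal Matlab sketch of the idea (not from class; the data and degree below are made up for illustration): build the matrix with columns 1, x, ..., x^k and solve the normal equations.

%least squares fit of a degree k polynomial via the normal equations
xd = [0 1 2 3 4 5]';                 %made-up data
yd = [1.1 1.9 3.2 4.1 4.8 6.2]';
k  = 2;                              %degree of the fitting polynomial
A  = ones(length(xd), k+1);
for j = 1:k
    A(:, j+1) = xd.^j;               %columns 1, x, x^2, ..., x^k (Vandermonde-style)
end
c  = (A'*A) \ (A'*yd)                %normal equations: (A'*A)*c = A'*yd
%the rectangular solve c = A \ yd gives the same fit with better conditioning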

I then showed how we could transform an exponential function into a linear function so that least squares line fitting could be applied.
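
As a reminder of how that works, here is a small sketch (the data are made up, not the example from class): writing y = c*exp(a*x) and taking logarithms gives log(y) = log(c) + a*x, which is a line-fitting problem in (x, log y).

%fit y = c*exp(a*x) by line fitting in the log domain
xd = [0 1 2 3 4]';
yd = [2.0 3.3 5.4 9.1 14.8]';        %made-up data, roughly 2*exp(0.5*x)
p  = polyfit(xd, log(yd), 1);        %straight-line fit to (x, log y)
a  = p(1)                            %slope recovers the exponent
c  = exp(p(2))                       %intercept recovers the coefficient
plot(xd, yd, 'o', xd, c*exp(a*xd), '-')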

Finally I demonstrated the censusgui program from Moler. I also showed solutions to the two questions from Moler on HW-12.

On Thursday I will continue with solutions to HW-12 and move on to the practice final if time permits. On Friday, our last lecture, I will finish up with the practice final.

Posted at 11:49 AM

Fri - November 25, 2005

Lecture 33.


Today the class wrote quiz 4. 

Posted at 09:41 AM

Thu - November 24, 2005

Lecture 32. 


Solutions to HW-10 and HW-11 were presented. Note: I have added Recktenwald's solution to 11-22 to the PDF file. I did not explicitly do this one in class; rather, I did the solution to 11-21.

Posted at 11:29 AM

Tue - November 22, 2005

Lecture 31. 


I started with a geometric approach to derive the system of equations used to obtain the "best" line in the least squares sense that fits a set of data points.

I then used the normal equations approach to derive the same equations. The derivation expresses everything in terms of vectors and matrices and generalizes very easily to the case where we are fitting a linear combination of functions to a set of data points.
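
For reference, here is a compact Matlab sketch of the normal equations for line fitting (the data below are made up): with A = [1 x], the normal equations are (A'*A)*c = A'*y.

%normal equations for fitting the line y = c(1) + c(2)*x
xd = [1 2 3 4 5]';
yd = [2.1 2.9 4.2 4.9 6.1]';         %made-up data
A  = [ones(size(xd)) xd];            %one column per basis function: 1 and x
c  = (A'*A) \ (A'*yd)                %solve the 2-by-2 normal equations
plot(xd, yd, 'o', xd, A*c, '-')      %data points and the least squares line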
 

Posted at 11:48 AM

Fri - November 18, 2005

Lecture 30. 


Today I gave a perfunctory overview of Euler's method for solving the initial value problem for ordinary differential equations. Along the way we saw how radioactive decay and compound interest are related. Needless to say, Taylor's theorem was once again used to derive the numerical algorithm.
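
Here is a minimal sketch of Euler's method in Matlab, using radioactive decay y' = -k*y as the test problem (the decay constant and step size are made up for illustration, not taken from class):

%Euler's method for y' = f(t,y), y(t0) = y0
f  = @(t, y) -0.5*y;                 %made-up decay constant k = 0.5
t0 = 0;  tEnd = 4;  y0 = 1;
h  = 0.1;                            %step size
t  = t0:h:tEnd;
y  = zeros(size(t));
y(1) = y0;
for n = 1:length(t)-1
    y(n+1) = y(n) + h*f(t(n), y(n)); %one Euler step
end
plot(t, y, '.-', t, y0*exp(-0.5*t), '-')   %numerical versus exact solution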

I displayed an applet that demonstrates how Euler's method (and some other numerical differential equation solvers) work.

Next week I will turn to the topic of least squares curve fitting.  

Posted at 03:27 PM

Thu - November 17, 2005

Lecture 29.


Today I continued my presentation of Gaussian quadrature. I went over the two-node case. Once again I showed how, by solving a system of 4 non-linear equations, one can obtain c1, c2, x1, x2 such that the value of the integral over the interval -1..1 is given by the weighted sum c1*f(x1) + c2*f(x2). The value of the integral is exact for polynomials up to degree 3 and an approximation for arbitrary functions. I then showed how one can use a change of variable so that Gaussian quadrature can be applied to integrals over any interval a..b.
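
Solving those four equations gives the well-known two-node nodes and weights x = +-1/sqrt(3), c = 1. Here is a small sketch of the rule and the change of variable (the test function and interval are made up, not from class):

%two-node Gauss rule on [-1,1]: nodes +-1/sqrt(3), weights 1
f  = @(x) x.^3 + x.^2;               %made-up cubic; exact integral on [-1,1] is 2/3
xg = [-1 1]/sqrt(3);  cg = [1 1];
A  = sum(cg.*f(xg))                  %exact for this cubic (up to rounding)
%change of variable: x = scale*t + offset maps [-1,1] to [a,b], dx = scale*dt
a = 0;  b = 2;                       %made-up interval; exact integral is 20/3
scale = (b - a)/2;  offset = (a + b)/2;
Aab = scale*sum(cg.*f(scale*xg + offset))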

I continued with some demonstrations using Matlab. I demonstrated the use of the "int" function from the Symbolic Toolbox, anonymous functions, and array multiplication. (Do help int, help function_handle, and help arith for Matlab help on these topics.) I also used a Recktenwald function 'GLTable' as a means of obtaining node and weight values for Gaussian quadrature.

Here is an edited listing of the Matlab session I did this morning so that you may explore these topics on your own.

%anonymous function f
f = @(x) x.*exp(-x)

X = linspace(-1,1);
Y = f(X);
plot(X,Y)

%symbolic integration of f
syms x
int(f(x),-1,1)
int(f(x))

RightAns = -2*exp(-1)

%Recktenwald's GLTable
help GLTable
type GLTable

%2 nodes
[x2,w2] = GLTable(2)
A2 = sum(f(x2).*w2)
RightAns

%4 nodes
[x4,w4] = GLTable(4)
A4 = sum(f(x4).*w4)
RightAns - A4

%integrating the same function on a different interval, 0..5
X = linspace(0,5);
Y = f(X);
plot(X,Y)
int(f(x),0,5)
RightAns = -6*exp(-5)+1
%change of variable: x = scale*t + offset maps t in [-1,1] to x in [0,5]
T = linspace(-1,1);
scale = 5/2
offset = 5/2
YT = f(T.*scale + offset);
plot(X,Y,T,YT)

%4 nodes on [0,5]
A4 = sum(f(x4.*scale+offset).*w4)
A4 = A4*scale    %multiply by scale to account for dx = scale*dt
RightAns

%8 nodes
[x8,w8] = GLTable(8)
A8 = sum(f(x8.*scale+offset).*w8)
A8 = scale*A8
RightAns - A8 

Posted at 10:00 AM

Tue - November 15, 2005

Lecture 28. 


Today I finished the presentation I started last week on adaptive quadrature. Adaptive quadrature is a technique that automatically determines how many panels to use (and where to put them) so that a numerical integral is computed within the prescribed accuracy. We saw how the error in the approximation could be estimated, and how this estimate is used in a recursive adaptive Simpson's rule algorithm. I wrote out an algorithm for recursive adaptive Simpson's rule quadrature.
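
For reference, here is a minimal sketch of recursive adaptive Simpson quadrature; it is not necessarily the version written out in class, but it uses the standard error estimate obtained by comparing the one-panel and two-panel Simpson values. Saved as adaptSimpson.m, it can be tried with, for example, adaptSimpson(@humps, 0, 1, 1e-6).

function Q = adaptSimpson(f, a, b, tol)
%ADAPTSIMPSON  recursive adaptive Simpson quadrature (sketch)
  c  = (a + b)/2;
  S1 = simp(f, a, b);                  %Simpson on the whole panel
  S2 = simp(f, a, c) + simp(f, c, b);  %Simpson on the two half panels
  if abs(S2 - S1) < 15*tol             %standard error estimate for Simpson's rule
      Q = S2 + (S2 - S1)/15;           %accept, with one Richardson correction
  else                                 %otherwise subdivide and recurse
      Q = adaptSimpson(f, a, c, tol/2) + adaptSimpson(f, c, b, tol/2);
  end

function S = simp(f, a, b)
%basic Simpson rule on the panel [a,b]
  S = (b - a)/6*(f(a) + 4*f((a + b)/2) + f(b));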

We then moved on to a different approach to numerical quadrature, Gaussian quadrature. We began by looking at integrating a degree 3 polynomial over the interval -1..1. We evaluated the integral by hand and used it to derive four constants, c1, c2, x1, x2, such that the value of the integral could be obtained by the weighted sum c1*f(x1) + c2*f(x2).

I will continue with Gaussian quadrature on Thursday and we will see how it can be used to integrate in a more general setting.
 

Posted at 12:09 PM

Fri - November 11, 2005

Lecture 27. 


Lecture cancelled for Remembrance Day ceremonies. 

Posted at 10:29 AM

Thu - November 10, 2005

Lecture 26.


The lecture today began with some demonstrations using Matlab. I ran Recktenwald's plotTrapInt and plotSimpInt on the 'humps' function in the interval 0..1. This illustrated the technique of breaking up an interval into panels and applying an integration rule to each panel. This is called composite quadrature. I then used Moler's quadgui to perform the same integration, but using an adaptive Simpson's method.
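
To make the composite idea concrete, here is a small sketch (the number of panels is made up) of the composite trapezoid rule applied to humps on 0..1:

%composite trapezoid rule on n equal panels
f = @(x) humps(x);
a = 0;  b = 1;  n = 20;              %n is made up; more panels give more accuracy
x = linspace(a, b, n+1);             %n panels have n+1 nodes
h = (b - a)/n;
T = h*(sum(f(x)) - (f(a) + f(b))/2)  %endpoints get weight h/2, interior nodes weight h
Q = quad(f, a, b)                    %Matlab's adaptive quadrature, for comparison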

Adaptive quadrature is a technique that automatically determines how many panels to use (and where to put them) so that a numerical integral is computed within the prescribed accuracy. We saw how the error in the approximation could be estimated, and how this estimate is used in a recursive adaptive trapezoid rule as well as a recursive adaptive Simpson's rule algorithm.

Tomorrow's class is cancelled (as are all Queen's classes at 10:30 -11:30) so that you may attend Remembrance Day ceremonies.  

Posted at 12:04 PM

Tue - November 8, 2005

Lecture 25. 


Today we began our exploration of numerical integration, also known as numerical quadrature. The midpoint, trapezoid and Simpson's methods are three examples of a collection of rules known as Newton-Cotes formulas.

I defined the rules, and then we used them to integrate the function x^3. The outcome was rather surprising, as the error for the one-point rule was smaller than that for the two-point rule. Furthermore, Simpson's rule, which uses a quadratic interpolant, integrated a cubic polynomial with zero error. This is not just a cooked-up example but a property of Simpson's rule.
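
Here is a quick Matlab check of that observation; the interval [0,1] is an assumption (the lecture may have used a different one):

%midpoint, trapezoid, and Simpson's rules applied to f(x) = x^3 on [0,1]
f = @(x) x.^3;
a = 0;  b = 1;                       %exact integral is 1/4
M = (b-a)*f((a+b)/2)                 %midpoint (one point):  0.125, error 0.125
T = (b-a)/2*(f(a) + f(b))            %trapezoid (two points): 0.5,   error 0.25
S = (b-a)/6*(f(a) + 4*f((a+b)/2) + f(b))   %Simpson's rule: 0.25, zero error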

The error bounds for the Newton-Cotes rules are derived by integrating a Taylor expansion. I tried to give some intuition as to why Simpson's rule is perfectly accurate up to cubic polynomials by showing how the odd terms in the integral of the Taylor polynomial vanish.

On Thursday we will look at the composite rules.
 

Posted at 12:02 PM

Fri - November 4, 2005

Lecture 24. 


Quiz 3 was written today. 

Posted at 11:52 AM

Thu - November 3, 2005

Lecture 23. 


Solutions to HW-7 and HW-8 were presented.  

Posted at 10:42 AM

Tue - November 1, 2005

Lecture 22.


Today I continued with a treatment of cubic spline interpolation. I worked through 3-knot and 4-knot examples so that I could write out the equations that need to be solved. I then discussed three end conditions that can be used with cubic spline interpolation.
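
For experimenting on your own, here is a small sketch using Matlab's built-in spline function on a made-up 4-knot data set; spline uses the not-a-knot end condition by default, and supplying end slopes gives a clamped spline.

%cubic spline interpolation through 4 made-up knots
xk = [0 1 2 3];                      %knots
yk = [0 1 0 2];                      %data values at the knots
xx = linspace(0, 3, 200);
yy = spline(xk, yk, xx);             %not-a-knot end condition (Matlab's default)
yc = spline(xk, [0 yk 0], xx);       %clamped: end slopes of 0 supplied at both ends
plot(xk, yk, 'o', xx, yy, '-', xx, yc, '--')
legend('data', 'not-a-knot', 'clamped')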
 

Posted at 12:45 PM
















