Welcome 


There have been updates to the web page. Note the new quiz dates, TA information, and a link to the WebCT page for this course. I have also been informed of a new Academic Resource Site.

Welcome to the CISC-271 BLOG for the year 2006. I will summarize classroom activity in this space. I will use this category "Announcements" for general information. The "Lecture" category will hold most of the entries. These lecture summaries will be useful to keep your place in the course, but are by no means intended to fulfil the role of class notes.

Motivation

Mathematical problems arise in all fields of science and engineering. Often a continuous mathematical model is obtained that is either difficult or impossible to solve analytically, so numerical methods are used to obtain approximate solutions. Numerous software libraries and applications have been developed to relieve the practitioner of the subtle complexities that arise in implementing these numerical solutions. This course introduces common numerical algorithms: how they work, their shortcomings, and how to use them effectively. Matlab, a widely used mathematical software package, provides students with a platform to experiment with state-of-the-art numerical routines. It is also used as a programming environment for implementing some of the algorithms that are studied.


Fri - October 20, 2006

Lecture 18. 


Interpolating with the Newton basis.
Today I showed how Horner's rule can be used to evaluate the interpolating polynomial using the coefficients of the Newton basis. The advantage of Horner's rule is misstated in the Ellis notes: Ellis remarks that Horner's rule uses half the number of multiplications, when in fact it uses roughly the square root of the number of multiplications required to evaluate a polynomial naively.
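
For the curious, here is a minimal Matlab sketch of this evaluation; the function name hornernewt is my own, not a routine from the notes. It assumes c holds the Newton-basis coefficients and xdata the interpolation nodes.

    % Evaluate the Newton form
    %   p(x) = c(1) + c(2)(x-x1) + ... + c(n)(x-x1)...(x-x_{n-1})
    % with Horner's rule: one multiplication per step.
    function p = hornernewt(c, xdata, x)
      n = length(c);
      p = c(n);
      for k = n-1:-1:1
        p = p .* (x - xdata(k)) + c(k);
      end
    end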

Demonstrations with Matlab
Today we saw some illuminating Matlab demonstrations pertaining to polynomial interpolation.
We started with an exploration of full-degree polynomial interpolation and the demonstration programs from Recktenwald:
demoGasNewt, demoGasLag, and demoGasVand. The main observation is that the poor scaling of the Vandermonde matrix yields visual artifacts in the polynomial plot. Then we looked at Moler's interpgui. This emphasized the difference between local interpolation methods, such as straight-line and Hermite interpolation, and methods where data points affect the interpolation curve globally, such as full-degree polynomial interpolation and cubic splines.

Piecewise Polynomial Interpolation
Hermite interpolation and straight-line interpolation are piece-wise polynomial interpolation methods. Both use polynomial functions to interpolate each piece locally.

Hermite interpolation is named after the French mathematician Charles Hermite. Let me emphasize that, in the notation Pi(x) used for piecewise polynomial interpolation, the subscript i refers to the polynomial interpolating the ith piece, not to the degree of the polynomial. A separate cubic polynomial is found for each piece.
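
If you want to experiment, Matlab's built-in pchip (a shape-preserving piece-wise cubic Hermite interpolant, not necessarily the exact construction from lecture) and spline make the local-versus-global contrast easy to see:

    % Compare two piece-wise cubic interpolants on the same data.
    x = 0:5;  y = [0 0 1 1 0 0];
    xx = linspace(0, 5, 201);
    yh = pchip(x, y, xx);    % piece-wise cubic Hermite (local behaviour)
    ys = spline(x, y, xx);   % cubic spline (global behaviour)
    plot(x, y, 'o', xx, yh, '-', xx, ys, '--')
    legend('data', 'pchip', 'spline')

Try moving one data point: pchip changes only nearby, while the spline wiggles everywhere.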

Posted at 03:24 PM    

Wed - October 18, 2006

Readings for Week 7. 


Key:
Ell = Randy Ellis, Scientific Computing Notes
Rec = Gerald Recktenwald, Numerical Methods with Matlab


Week 7:
Topics:- Piece-wise polynomial interpolation.


Ell covers piece-wise polynomial interpolation in classes 15 and 16. There is no mention in Ell of solving for the spline coefficients using a tridiagonal matrix.

Rec covers Hermite and cubic spline interpolation in Chapter 10, section 10.3. Section 10.4 discusses the built-in Matlab interpolation routines.

Posted at 02:51 PM    

Homework for Week 7. 


These questions deal with piece-wise polynomial interpolation. See the attached PDF file. HW7.pdf  

Posted at 02:49 PM    

Lecture 17. 


Polynomial Interpolation with the Lagrange basis.

Today we saw that we can use the Lagrange basis for polynomial interpolation. Unlike the monomial basis, the Lagrange basis is not likely to cause numerical errors.
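
A minimal sketch of evaluation in the Lagrange basis (lagrangeval is a hypothetical name, not a course routine):

    % Evaluate the interpolating polynomial written in the Lagrange basis.
    function p = lagrangeval(xdata, ydata, x)
      n = length(xdata);
      p = zeros(size(x));
      for i = 1:n
        L = ones(size(x));        % build the i-th Lagrange basis polynomial
        for j = [1:i-1, i+1:n]
          L = L .* (x - xdata(j)) / (xdata(i) - xdata(j));
        end
        p = p + ydata(i) * L;     % weight by the i-th data value
      end
    end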

Polynomial interpolation with the Newton basis

Today we looked at Newton divided differences. I wrote out the divided difference for a degree-two polynomial, but I did not derive it on the board. Please note that the derivation in Recktenwald leads to a formulation that does not match the usual divided-difference form. There is also an error in the way f[x1,x2,x3] is defined on page 540: if you follow the derivation, the first term in the numerator should be f[x1,x3]. This error is reported in the errata updates on the NMM web site.
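
As a sketch of the standard recurrence (divdiff is a hypothetical name), the coefficients can be built in place; the resulting vector c is exactly what a Newton-form Horner evaluation consumes:

    % Newton divided differences via
    %   f[xi,...,xj] = (f[x_{i+1},...,xj] - f[xi,...,x_{j-1}]) / (xj - xi)
    function c = divdiff(xdata, ydata)
      n = length(xdata);
      c = ydata(:);                 % zeroth-order differences
      for j = 2:n                   % j-1 is the order of the difference
        for i = n:-1:j
          c(i) = (c(i) - c(i-1)) / (xdata(i) - xdata(i-j+1));
        end
      end
    end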

 

Posted at 01:50 PM    

Mon - October 16, 2006

Lecture 16. 


Polynomial Interpolation
I went through an application first. The Matlab code that I used is stored in the attached m-file.
InterpExample.m.pdf
We then saw how to use Vandermonde matrices to obtain an interpolating polynomial with the monomial basis. Along the way we performed a complexity analysis of this technique, which required exploring the evaluation of a polynomial using Horner's method. I then did an example of linear interpolation in order to develop the technique of interpolating with the Lagrange basis.
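
A minimal sketch of the Vandermonde approach (the data values here are made up for illustration):

    % Interpolate with the monomial basis via a Vandermonde system.
    x = [1 2 3 4];  y = [1 3 2 5];
    V = vander(x);          % columns are x.^3, x.^2, x.^1, x.^0
    c = V \ y';             % monomial coefficients, highest power first
    cond(V)                 % poor scaling shows up as a large condition number
    xx = linspace(1, 4, 101);
    yy = polyval(c, xx);    % polyval evaluates with Horner's method
    plot(x, y, 'o', xx, yy, '-')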

Posted at 05:01 PM    

Fri - October 13, 2006

Lecture 15. 


The second quiz was written today. Early reports from the markers are that the questions on the 3rd page of the quiz were answered well.
The quizzes should be ready for return on Monday. 

Posted at 04:15 PM    

Homework for Week 6 


These questions deal with polynomial interpolation using the monomial, Lagrange, and Newton bases. HW6.pdf  

Posted at 04:13 PM    

Readings for Week 6. 


Key:
Ell = Randy Ellis, Scientific Computing Notes
Rec = Gerald Recktenwald, Numerical Methods with Matlab

Week 6:
Topics:- Polynomial interpolation using the monomial, Lagrange, and Newton bases.

Ell covers polynomial interpolation in classes 9, 10, and 11. There is no mention in Ell of using Vandermonde matrices to interpolate with the monomial basis.

Rec covers the three methods of interpolation in Chapter 10 sections 10.1 and 10.2.  

Posted at 04:03 PM    

Wed - October 11, 2006

Lecture 14. 


I reviewed the solutions to the homework for weeks 4 and 5. They are attached below.
I also briefly discussed the connection between the condition number and the floating-point accuracy of Gaussian elimination.

Sol4.pdf
Sol5.pdf
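
As a rough illustration of that connection (assuming the usual rule of thumb that the relative error grows like cond(A) times machine epsilon), Hilbert matrices make the effect visible:

    % Ill-conditioning limits the accuracy of Gaussian elimination.
    for n = [4 8 12]
      A = hilb(n);                    % notoriously ill-conditioned
      xtrue = ones(n, 1);
      b = A * xtrue;                  % right-hand side with known solution
      x = A \ b;                      % Gaussian elimination with pivoting
      fprintf('n=%2d  cond=%9.2e  relerr=%9.2e\n', ...
              n, cond(A), norm(x - xtrue) / norm(xtrue));
    end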

Posted at 01:42 PM    

Mon - October 9, 2006

Lecture 13. 


Thanksgiving, no lecture. This holds a place so that lecture numbers retain the property that

Lecture# mod 3 = 1, 2, 0 => Mon., Wed., Fri.

Posted at 01:40 PM    

Fri - October 6, 2006

Lecture 12. 


We continued where I left off on Wednesday with some details on how to use permutation vectors and/or permutation matrices.
I illustrated the use of Matlab lu and Recktenwald's lupiv.
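
A minimal sketch of the permutation-matrix version (the matrix here is made up; note that the zero in its (1,1) position forces a row swap):

    % P*A = L*U, with P recording the row swaps from partial pivoting.
    A = [0 2 1; 1 1 1; 2 1 3];  b = [3; 6; 9];
    [L, U, P] = lu(A);
    y = L \ (P * b);       % forward substitution
    x = U \ y;             % backward substitution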

The discussion turned to some linear algebra and the properties of systems that have unique solutions, no solutions, or infinitely many solutions. Some simple two-dimensional examples were given as visual aids. Floating-point arithmetic and rounding error introduce the notion of systems that may be ill conditioned.

We looked at vector and matrix norms so that we could discuss conditioning and condition numbers, and how they affect the accuracy of a solution of a system of linear equations.
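
For concreteness, here are the norms and condition numbers Matlab computes directly (the values in the comments are easy to check by hand):

    v = [3; -4];
    norm(v, 1), norm(v, 2), norm(v, inf)   % 7, 5, 4
    A = [1 2; 3 4];
    norm(A, 1), norm(A, inf)               % max column sum 6, max row sum 7
    cond(A, 1), cond(A, inf)               % norm(A,p) * norm(inv(A),p)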

Posted at 01:04 PM    

Readings for Week 5. 


Key:
Ell = Randy Ellis, Scientific Computing Notes
Rec = Gerald Recktenwald, Numerical Methods with Matlab


Week 5:
Topics:- Norms and condition numbers.

Class 28 of Ell covers norms and condition numbers.

Vector and matrix norms are covered in Rec sections 7.1.2 and 7.2.4 respectively. Section 8.3 of Rec discusses conditioning and the effects of small perturbations on ill-conditioned systems.  

Posted at 11:12 AM    

Homework Week 5. 


This homework is about norms and condition numbers.


Recktenwald Chapter 8, questions 22 and 35.

For question 22 determine the condition numbers of the matrices used to obtain the interpolating parabola.

For question 35 repeat the experiments using cond(A,1) and cond(A,inf).
 

Posted at 11:10 AM    

Wed - October 4, 2006

Lecture 11. 


LU Decomposition and Gaussian Elimination

Elimination Phase
I reviewed the code for the elimination phase of LU decomposition.
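
Not Recktenwald's code, but a bare-bones version of the idea (no pivoting, so it breaks on a zero pivot):

    % Elimination phase of LU, overwriting A: multipliers are stored
    % below the diagonal (the unit lower triangle of L), U on and above it.
    function A = lunopiv(A)
      n = size(A, 1);
      for k = 1:n-1
        A(k+1:n, k) = A(k+1:n, k) / A(k, k);               % multipliers
        A(k+1:n, k+1:n) = A(k+1:n, k+1:n) - A(k+1:n, k) * A(k, k+1:n);
      end
    end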

Backward (Forward) Substitution
I went over the code for backward substitution and mentioned that forward substitution can be coded in a similar way.
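
A minimal backward-substitution sketch (backsub is a hypothetical name; forward substitution runs the same loop in the other direction):

    % Solve Ux = b for upper-triangular U by backward substitution.
    function x = backsub(U, b)
      n = length(b);
      x = zeros(n, 1);
      for i = n:-1:1
        x(i) = (b(i) - U(i, i+1:n) * x(i+1:n)) / U(i, i);
      end
    end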

Partial Pivoting
I showed, through an illustrative example, how partial pivoting may be necessary to obtain a solution to a system of linear equations (if a solution exists). I then showed how row swaps are recorded in a permutation matrix for use with LU decomposition. 

Posted at 04:10 PM    

Mon - October 2, 2006

Lecture 10. 


Solving systems of linear equations using Gaussian elimination/LU decomposition.

It appears that everyone in the class has taken a course in linear algebra and has already seen Gaussian elimination at least once.
Through the use of an example we saw how Gaussian elimination followed by back substitution can be used to solve a system of linear equations. I also presented a slight variant of Gaussian elimination in which we save the multipliers so that we can compute a lower triangular matrix L and an upper triangular matrix U such that LU = A. In this way we can use L and U to solve a system Ax = b. We also observed that having L and U at hand allows us to subsequently solve Ax = bi for any other column vector bi.
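
A small sketch of that reuse (the matrix and right-hand sides are made up):

    % Factor once, then solve for several right-hand sides bi.
    A = magic(3);  B = eye(3);          % columns of B are b1, b2, b3
    [L, U, P] = lu(A);                  % one O(n^3) factorization
    X = zeros(3, 3);
    for i = 1:3
      X(:, i) = U \ (L \ (P * B(:, i)));  % each solve is only O(n^2)
    end
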
I began the lecture with a problem from the field of chemistry where one determines the components in an ion using mass spectrometry. I have attached a copy of the presentation.

Mass Spectrometry.pdf  

Posted at 04:58 PM    
















