Fri - December 1, 2006
Lecture 36. The fifth and final quiz was written
today. Good luck on your exams and have a happy
holiday.
Posted at 01:13 PM

Wed - November 29, 2006
Lecture 35. Today I went over the solutions to
homework for week 12. They are appended
below.
Sol12.pdf
Posted at 01:39 PM

Mon - November 27, 2006
Lecture 34. Today I went over the solutions to
homework for weeks 10 and 11. On Wednesday I will go over the solutions to the
homework for week 12.
Sol10.pdf
Sol11.pdf
Posted at 02:49 PM

Fri - November 24, 2006
Lecture 33. Today we continued with material
concerned with least squares curve fitting. In particular we saw how the method
could be used to fit polynomials of arbitrary degree, or a linear combination of
functions, as well as extensions to higher dimensions. We also saw how a
transformation of variable could be used to fit exponential curves.
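As a sketch of that transformation (written in Python rather than the Matlab we use in class, with made-up data): fitting y = c*e^(k*x) reduces to fitting a least squares line to the points (x, ln y).

```python
import math

def fit_line(xs, ys):
    """Best least squares line y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def fit_exponential(xs, ys):
    """Fit y = c*exp(k*x) by fitting a line to (x, ln y):
    ln y = ln c + k*x, so the intercept is ln c and the slope is k."""
    a, b = fit_line(xs, [math.log(y) for y in ys])
    return math.exp(a), b

# Made-up data generated from y = 2*e^(0.5*x) is recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
c, k = fit_exponential(xs, ys)
```

Note that this minimizes the squared error in ln y, not in y itself, which weights the data points differently than a direct nonlinear fit would.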
On Monday I will finish up with least squares, and start the solutions to the final three weeks of homework problems.
Posted at 02:50 PM

Wed - November 22, 2006
Lecture 32. Fitting a Line to Data
I began with a presentation where we fit a line to data comparing lean body mass to muscle strength. We then discussed several different optimization criteria for line fitting, settling on the least squares method as the best. We derived formulae for determining the coefficients of a best least squares line, and tried it out on a simple example.
LineFit.pdf
Posted at 12:00 PM

Mon - November 20, 2006
Lecture 31. Today we looked at coupled ODEs. We also
saw that we could transform a higher order ODE into a system of coupled first
order ODEs.
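As a sketch of that transformation (in Python rather than Matlab, with a made-up test problem): the second order equation y'' = -y becomes the coupled first order system u' = v, v' = -u, which a one-step solver such as Euler's method can integrate directly.

```python
def euler_system(f, t0, y0, h, n):
    """Euler's method for a first order system y' = f(t, y)."""
    t, y = t0, list(y0)
    for _ in range(n):
        slopes = f(t, y)
        y = [yi + h * si for yi, si in zip(y, slopes)]
        t += h
    return y

def f(t, y):
    # y'' = -y rewritten with u = y, v = y': [u', v'] = [v, -u]
    return [y[1], -y[0]]

# y(0) = 0, y'(0) = 1, so the exact solution is y = sin(t).
u, v = euler_system(f, 0.0, [0.0, 1.0], 1.0e-4, 15708)  # integrate to t ~ pi/2
```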
I also demonstrated the DemoPredPrey m-file provided in our text.
Posted at 05:01 PM

Fri - November 17, 2006
Lecture 30. Runge-Kutta Methods
I gave a general formula for the Runge-Kutta methods, and we saw that all of our numerical ODE solvers, excepting the Taylor methods, follow the Runge-Kutta mold.

Adaptive Methods
As we saw in our study of numerical integration formulas, using variable step sizes adaptively on an ODE problem leads to gains in efficiency and accuracy. Examples of adaptive ODE solvers are Matlab's ode23 and ode45.
Posted at 10:08 AM

Wed - November 15, 2006
Lecture 29. Continuing with our treatment of
numerical methods for solving ordinary differential equations, today we saw the so-called Taylor methods. This collection of methods is based on a Taylor expansion of a function, thus giving them the name.
The Taylor methods are very accurate but have the drawback that derivatives need
to be computed.
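As a concrete (made-up) Python illustration of a second order Taylor method: for the test equation y' = y, differentiating gives y'' = y' = y, so both Taylor terms can be written down explicitly.

```python
def taylor2(y0, h, n):
    """Second order Taylor method for y' = y:
    y_{i+1} = y_i + h*y' + (h**2/2)*y'' = y_i*(1 + h + h**2/2)."""
    y = y0
    for _ in range(n):
        y = y + h * y + 0.5 * h * h * y
    return y

approx_e = taylor2(1.0, 0.01, 100)  # approximates y(1) = e ~ 2.71828
```

The extra h^2/2 term is what lifts the accuracy above Euler's method, at the cost of having to work out y'' by hand.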
We then saw a way to approximate a Taylor method. The formula that was derived today goes by the name Heun's method. This belongs to a larger class of formulae known as Runge-Kutta methods.
Posted at 04:19 PM

Mon - November 13, 2006
Lecture 28. Today we began exploring numerical
methods for solving ordinary
differential equations (ODEs). I
started with a short presentation about radioactive decay and how an ODE could
be used to model the reduction of mass as a result of decay over a period of
time. Given some starting mass, an initial condition, we saw an analytic
solution to this initial value
problem (IVP). I then solved the
IVP using a Matlab numerical ODE
solver.
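A minimal numerical sketch of that computation (in Python rather than Matlab, with made-up values for the initial mass and decay constant), using Euler's method on the decay IVP m' = -k*m:

```python
import math

def euler_decay(m0, k, t_end, n):
    """Euler's method for m' = -k*m, m(0) = m0, taking n steps to t_end."""
    h = t_end / n
    m = m0
    for _ in range(n):
        m += h * (-k * m)  # one Euler step: m_{i+1} = m_i + h*f(t_i, m_i)
    return m

approx = euler_decay(100.0, 0.5, 2.0, 10000)
exact = 100.0 * math.exp(-0.5 * 2.0)  # analytic solution m(t) = m0*exp(-k*t)
```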
I gave an overview of Euler's method for solving an IVP for an ODE. We saw the derivation of the method using a Taylor expansion. We also saw a visualization of Euler's method, sometimes called the tangent method.
ode2006.pdf
Posted at 08:21 AM

Fri - November 10, 2006

Wed - November 8, 2006
Lecture 26. I reviewed the solutions to homework for
weeks 8 and 9. They are attached below.
Sol8.pdf
Sol9WREC22.pdf
Posted at 03:19 PM

Mon - November 6, 2006
Lecture 25. Gaussian Quadrature
Today we looked at Gaussian quadrature. I went over the two-node case: by solving a system of 4 nonlinear equations one can obtain c1, c2, x1, x2 such that the value of an integral over the interval -1..1 is approximated by the weighted sum c1*f(x1) + c2*f(x2). The value is exact for polynomials up to degree 3 and an approximation for arbitrary functions. I then showed how a change of variable allows Gaussian quadrature to be applied to integrals over any interval a..b.
Posted at 07:18 AM

Fri - November 3, 2006
Lecture 24. Adaptive
quadrature is a technique that
automatically determines how many panels to use (and where to put them) so that
a numerical integral is computed within the prescribed accuracy. We saw how the
truncation error could be estimated, and how this estimate is used in a
recursive adaptive trapezoid rule as well as a recursive adaptive Simpson's rule
algorithm.
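A recursive adaptive Simpson's rule along these lines can be sketched as follows (a generic Python version, not the implementation from our text):

```python
import math

def simpson(f, a, b):
    """One-panel Simpson estimate on [a, b]."""
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol):
    """Split a panel recursively until the truncation error estimate,
    obtained by comparing one panel against its two halves, is below tol."""
    c = 0.5 * (a + b)
    whole = simpson(f, a, b)
    halves = simpson(f, a, c) + simpson(f, c, b)
    # For Simpson's rule, (halves - whole) is about 15 times the error in halves.
    if abs(halves - whole) < 15.0 * tol:
        return halves + (halves - whole) / 15.0
    return (adaptive_simpson(f, a, c, 0.5 * tol) +
            adaptive_simpson(f, c, b, 0.5 * tol))

approx = adaptive_simpson(math.sin, 0.0, math.pi, 1.0e-8)  # exact value is 2
```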
Posted at 03:48 PM

Wed - November 1, 2006
Lecture 23. Today we looked at errors of integration
for the Newton-Cotes rules. We saw why the midpoint rule is as accurate as the
trapezoid rule, and Simpson's rule is as accurate as a rule using a cubic
interpolating polynomial. We also looked at composite rules, in particular we
explored using a composite Simpson's rule on multiple panels.
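A generic composite Simpson's rule can be sketched in Python (our class work is in Matlab; this is just the textbook formula):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n even),
    i.e. n/2 Simpson panels of two subintervals each."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # 4 at odd nodes, 2 at even
    return total * h / 3.0

approx = composite_simpson(math.sin, 0.0, math.pi, 100)  # exact value is 2
```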
If you are trying to follow the algebra in Recktenwald, consult the errata page. In particular, there are multiple errors in equation (11.9) on page 613 that may trip you up. On Friday we will look at adaptive algorithms for computing integrals.
Posted at 12:05 PM

Mon - October 30, 2006
Lecture 22. Numerical Quadrature
Today we began our exploration of numerical integration, also known as numerical quadrature. The trapezoid and Simpson's methods are examples of a collection of rules known as Newton-Cotes formulas. The lecture began with a presentation to motivate the subject of numerical quadrature; it is attached as a 2-slide-per-page PDF file.
NumericalIntegrationMotivation.pdf
Posted at 11:47 AM