:Now imagine that terms like these appear as functions used in a program. The error will grow as the program proceeds, unless one uses the appropriate formula of the two functions each time, computing either ''f''(''x'') or ''g''(''x''); the choice depends on the value of ''x''.
*The example is taken from Mathews, ''Numerical Methods Using MATLAB'', 3rd ed.
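A minimal Python sketch of this effect (the particular pair of formulas is not given in the text above; the algebraically equivalent pair below is assumed as a standard textbook illustration of the cancellation):

```python
import math

def f(x):
    # Direct form: subtracts two nearly equal square roots,
    # which loses significant digits for large x.
    return x * (math.sqrt(x + 1) - math.sqrt(x))

def g(x):
    # Equivalent form obtained by rationalizing the difference;
    # no subtraction of nearly equal numbers, so it stays stable.
    return x / (math.sqrt(x + 1) + math.sqrt(x))

x = 500.0
print(f(x))
print(g(x))
```

For moderate ''x'' the two forms agree to many digits; as ''x'' grows, the subtraction inside ''f'' cancels more and more leading digits while ''g'' remains accurate, which is why the appropriate form must be chosen at each evaluation.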
 
 
==Areas of study==
 
The field of numerical analysis includes many sub-disciplines. Some of the most important are:
 
===Computing values of functions===
 
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is using the [[Horner scheme]], since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control [[round-off error]]s arising from the use of [[floating point]] arithmetic.
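A short Python sketch of the Horner scheme (function and variable names are illustrative, not from the article):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x using the Horner scheme.

    coeffs lists the coefficients from the highest power down,
    so [2, 0, 3, 1] means 2x^3 + 3x + 1.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c  # one multiply and one add per coefficient
    return result

# 2x^3 + 3x + 1 at x = 2: 16 + 6 + 1 = 23
print(horner([2, 0, 3, 1], 2.0))  # → 23.0
```

A degree-''n'' polynomial takes ''n'' multiplications and ''n'' additions this way, versus roughly ''n''²/2 multiplications when each power is computed separately.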
 
===Interpolation, extrapolation, and regression===
 
[[Interpolation]] solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
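A minimal Python sketch of the simplest case, piecewise-linear interpolation (the method and data are chosen for illustration; the article does not fix a particular scheme):

```python
def lerp(points, x):
    """Piecewise-linear interpolation through (x_i, y_i) pairs
    sorted by x; x must lie between the given points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)  # fractional position in the interval
            return y0 + t * (y1 - y0)
    raise ValueError("x lies outside the given points")

# Values of some unknown function at 0, 1 and 2
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.0)]
print(lerp(pts, 0.5))  # → 2.0 (halfway between 1 and 3)
```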
 
[[Extrapolation]] is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.
 
[[Regression analysis|Regression]] is also similar, but it takes into account that the data is imprecise. Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function. The [[least squares]]-method is one popular way to achieve this.
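As an illustration of the least squares method, a straight-line fit computed from the normal equations in Python (the data and helper name are invented for the example):

```python
def fit_line(xs, ys):
    """Least-squares fit of y ≈ a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Closed-form solution of the 2x2 normal equations
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Noisy measurements of a roughly linear relationship
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.1, 2.9]
a, b = fit_line(xs, ys)  # close to intercept 0, slope 1
```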
 
===Solving equations and systems of equations===
 
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation <math>2x+5=3</math> is linear while <math>2x^2+5=3</math> is not.
 
Much effort has been put in the development of methods for solving [[systems of linear equations]]. Standard direct methods, i.e., methods that use some [[matrix decomposition]], are [[Gaussian elimination]], [[LU decomposition]], [[Cholesky decomposition]] for [[symmetric matrix|symmetric]] (or [[hermitian matrix|hermitian]]) and [[positive-definite matrix|positive-definite matrices]], and [[QR decomposition]] for non-square matrices. [[Iterative method]]s such as the [[Jacobi method]], [[Gauss–Seidel method]], [[successive over-relaxation]] and [[conjugate gradient method]] are usually preferred for large systems. General iterative methods can be developed using a [[matrix splitting]].
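A minimal Python sketch of one of the iterative methods named above, Gauss–Seidel (the example system and iteration count are chosen for illustration):

```python
def gauss_seidel(A, b, iterations=50):
    """Gauss–Seidel iteration for Ax = b. Converges, for example,
    when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # Use the newest available values of the other unknowns
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# A diagonally dominant system: 4x + y = 9, x + 3y = 7
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 7.0]
x = gauss_seidel(A, b)  # converges toward the exact solution (20/11, 19/11)
```

Unlike the Jacobi method, Gauss–Seidel updates each component with the newest values already computed in the current sweep, which typically speeds up convergence.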
 
[[Root-finding algorithm]]s are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is [[derivative|differentiable]] and the derivative is known, then [[Newton's method]] is a popular choice. [[Linearization]] is another technique for solving nonlinear equations.
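A short Python sketch of Newton's method (tolerances and names are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly follow the tangent line toward a root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # requires the derivative df to be known
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of x^2 - 2 (i.e. the square root of 2), starting from x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Near a simple root, each iteration roughly doubles the number of correct digits, which is why the method is such a popular choice when the derivative is available.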
 
===Solving eigenvalue or singular value problems===
Several important problems can be phrased in terms of [[eigenvalue decomposition]]s or [[singular value decomposition]]s. For instance, the [[image compression|spectral image compression]] algorithm<ref>[http://online.redwoods.cc.ca.us/instruct/darnold/maw/single.htm The Singular Value Decomposition and Its Applications in Image Compression]</ref> is based on the singular value decomposition. The corresponding tool in statistics is called [[principal component analysis]].
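The article does not name an algorithm here; as one illustrative sketch, power iteration estimates the dominant eigenvalue of a matrix in plain Python:

```python
def power_iteration(A, iterations=200):
    """Power iteration: estimate the dominant eigenvalue and eigenvector
    of a square matrix A (assumes one eigenvalue dominates in magnitude)."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iterations):
        # Multiply by A, then rescale to keep the vector bounded
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    # Rayleigh quotient gives the eigenvalue estimate
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n)) / sum(c * c for c in v)
    return lam, v

# Symmetric matrix with eigenvalues 3 and 1; the estimate approaches 3
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```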
 
===Optimization===
{{Main|Mathematical optimization}}
 
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some [[Constraint (mathematics)|constraint]]s.
 
The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance, [[linear programming]] deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the [[simplex method]].
 
The method of [[Lagrange multipliers]] can be used to reduce optimization problems with constraints to unconstrained optimization problems.
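A small worked example (not from the source) illustrates the method: to maximize <math>f(x,y)=xy</math> subject to the constraint <math>g(x,y)=x+y-1=0</math>, one solves <math>\nabla f=\lambda\nabla g</math> together with the constraint:
:<math>y=\lambda,\qquad x=\lambda,\qquad x+y=1,</math>
so <math>x=y=\tfrac{1}{2}</math>, and the constrained maximum is <math>f=\tfrac{1}{4}</math>.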
 
===Evaluating integrals===
{{Main|Numerical integration}}
 
Numerical integration, in some instances also known as numerical [[quadrature (mathematics)|quadrature]], asks for the value of a definite [[integral]]. Popular methods use one of the [[Newton–Cotes formulas]] (like the midpoint rule or [[Simpson's rule]]) or [[Gaussian quadrature]]. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use [[Monte Carlo method|Monte Carlo]] or [[quasi-Monte Carlo method]]s (see [[Monte Carlo integration]]), or, in modestly large dimensions, the method of [[sparse grid]]s.
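A minimal Python sketch of the composite Simpson's rule, one of the Newton–Cotes formulas mentioned above (the subinterval count is an illustrative default):

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior points alternate between weights 4 and 2
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# Integral of x^2 over [0, 1] is 1/3; Simpson's rule is exact
# for polynomials up to degree 3, so only rounding error remains.
print(simpson(lambda x: x * x, 0.0, 1.0))
```

The "divide and conquer" character is visible here: the rule is applied on each small subinterval of width ''h'' and the pieces are summed.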
 
===Differential equations===
{{main|Numerical ordinary differential equations|Numerical partial differential equations}}
 
Numerical analysis is also concerned with computing (in an approximate way) the solution of [[differential equation]]s, both ordinary differential equations and [[partial differential equation]]s.
 
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a [[finite element method]], a [[finite difference]] method, or (particularly in engineering) a [[finite volume method]]. The theoretical justification of these methods often involves theorems from [[functional analysis]]. This reduces the problem to the solution of an algebraic equation.
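As an illustration of the finite difference approach (the model problem is chosen for the example, not taken from the article), a Python sketch solving the one-dimensional boundary value problem ''u''′′ = −1 on [0, 1] with ''u''(0) = ''u''(1) = 0, whose exact solution is ''u''(''x'') = ''x''(1 − ''x'')/2:

```python
def solve_poisson_1d(n):
    """Central differences turn u'' = -1 into the tridiagonal system
    -u[i-1] + 2u[i] - u[i+1] = h^2, solved here by the Thomas algorithm."""
    h = 1.0 / (n + 1)          # grid spacing; n interior points
    a = [-1.0] * n             # sub-diagonal
    d = [2.0] * n              # main diagonal
    c = [-1.0] * n             # super-diagonal
    rhs = [h * h] * n
    # Forward elimination
    for i in range(1, n):
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - c[i] * u[i + 1]) / d[i]
    return u

u = solve_poisson_1d(99)  # interior values at x = 0.01, ..., 0.99
```

Discretization reduces the differential equation to exactly the kind of linear system discussed earlier, which is why linear solvers are so central to numerical analysis.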
 
==References==
<references/>