A Review Article On Mathematical Aspects Of Nonlinear Models

The main objective of this review article is to present some mathematical aspects of nonlinear models. In mathematics, nonlinear modelling is empirical or semi-empirical modelling which takes at least some nonlinearities into account. In practice, nonlinear modelling therefore means modelling phenomena in which the independent variables affecting the system can show complex and synergetic nonlinear effects. In contrast to traditional modelling methods, such as linear regression and basic statistical methods, nonlinear modelling can be used efficiently in a vast number of situations where traditional modelling is impractical or impossible. This review article mainly explores the mathematical preliminaries of nonlinear models, the solution of algebraic and transcendental equations, and the solution of systems of nonlinear equations. In addition, the Taylor polynomial and finite difference operators, least-squares polynomial approximation and the roots of equations are also discussed.


INTRODUCTION
Mathematical models involving nonlinearity have become increasingly popular in recent years. In practice, nonlinear mathematical models have a wide variety of applications. A mathematical model is said to be nonlinear if the derivatives of the model with respect to the model parameters depend on one or more of those parameters. This property distinguishes nonlinear models from curvilinear models. Generally, the model parameters provide a direct interpretation of the process under study.
Several theorems in mathematical analysis are stated as preliminaries for nonlinear models. Certain numerical techniques for obtaining solutions of algebraic and transcendental equations are discussed, as they are used in iterative procedures for estimating the parameters of nonlinear models.

MATHEMATICAL PRELIMINARIES FOR NONLINEAR MODELS:
The following mathematical preliminaries will be useful in the present study: the Intermediate Value Theorem, Rolle's Theorem and its generalized form, Taylor's theorem with remainder for a function of one variable, and Taylor's series for a function of several variables.

SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS:
In research work, a frequently occurring problem is to find the roots of an equation of the form f(x) = 0. If f(x) is a quadratic, cubic or biquadratic expression, then algebraic formulae are available for expressing the roots in terms of the coefficients. On the other hand, when f(x) is a polynomial of higher degree or an expression involving transcendental functions, no such algebraic formulae are available and approximate methods must be used.
Here some numerical methods for the solution of f(x) = 0 will be discussed, where f(x) is algebraic, transcendental or a combination of both.

Bisection Method:

This method is based on the Intermediate Value Theorem, which states that if a function f(x) is continuous in [a, b] and f(a)f(b) < 0, then there exists at least one root of f(x) = 0 between a and b. For definiteness, let f(a) be positive and f(b) be negative. Then the root lies between a and b, and its first approximate value is x_0 = (a + b)/2. If f(x_0) = 0, then x_0 is a root of f(x) = 0. If not, the root lies either between x_0 and b or between x_0 and a, depending on the sign of f(x_0), and the new interval has half the length of the old one. After n steps the interval containing the root has length (b - a)/2^n, and the inequation (b - a)/2^n ≤ ε gives the number of iterations required to attain an accuracy ε.
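A minimal Python sketch of the bisection steps above (the cubic, the bracket [1, 2] and the tolerances are illustrative choices, not taken from the source):

```python
def bisect(f, a, b, eps=1e-10, max_iter=200):
    """Halve the bracketing interval [a, b] until it is shorter than eps."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x0 = (a + b) / 2.0          # midpoint approximation
        f0 = f(x0)
        if f0 == 0 or (b - a) / 2.0 < eps:
            return x0
        if fa * f0 < 0:             # root lies between a and x0
            b, fb = x0, f0
        else:                       # root lies between x0 and b
            a, fa = x0, f0
    return (a + b) / 2.0

root = bisect(lambda x: x**3 - x - 2.0, 1.0, 2.0)
```

Each pass halves the bracket, so about log2((b - a)/eps) iterations are needed, in line with the inequation above.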

The Iteration Method:
This method solves x = F(x) by the recursion x_{n+1} = F(x_n) and converges to a root if |F'(x)| < 1 in a neighbourhood of the root. The error e_n = x_n - r, where r is the exact root, has the property e_{n+1} ≈ F'(r) e_n, so that each iteration reduces the error by a factor near |F'(r)|. If |F'(r)| is near 1, this means slow convergence.
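A short Python sketch of the recursion (the equation x = cos x and the tolerances are illustrative, not from the source):

```python
import math

def fixed_point(F, x0, tol=1e-12, max_iter=500):
    """Iterate x_{n+1} = F(x_n); each step shrinks the error by roughly |F'(r)|."""
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# x = cos(x): |F'(r)| = |sin(r)| is about 0.67 near the root, so the
# iteration converges, but only linearly, as the error analysis predicts.
r = fixed_point(math.cos, 1.0)
```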

Regula Falsi Method:
Let two points x_0 and x_1 be such that f(x_0) and f(x_1) are of opposite signs. The method consists in replacing the part of the curve between the points (x_0, f(x_0)) and (x_1, f(x_1)) by the chord joining these points, and taking the point of intersection of the chord with the x-axis as an approximation to the root. The point of intersection is obtained by putting y = 0 in the equation of the chord.
Hence the second approximation to the root of f(x) = 0 is given by x_2 = x_1 - f(x_1)(x_1 - x_0)/(f(x_1) - f(x_0)). If f(x_2) and f(x_0) are of opposite signs, then the root lies between x_0 and x_2, and x_1 is replaced by x_2 to get the next approximation. Otherwise x_0 is replaced by x_2 and the next approximation is generated. The procedure is repeated till the root is obtained to the desired accuracy.
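The chord-and-bracket rule above can be sketched in Python (the test function, bracket and tolerances are illustrative assumptions):

```python
def regula_falsi(f, x0, x1, tol=1e-10, max_iter=200):
    """False-position iteration that keeps the root bracketed."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    x2 = x1
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # chord meets the x-axis
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f0 * f2 < 0:          # root between x0 and x2: replace x1
            x1, f1 = x2, f2
        else:                    # otherwise replace x0
            x0, f0 = x2, f2
    return x2

root = regula_falsi(lambda x: x**3 - x - 2.0, 1.0, 2.0)
```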

Newton-Raphson Method:
Let x_0 be an approximation to the desired root of f(x) = 0 and let x_1 = x_0 + h be the correct root, so that f(x_0 + h) = 0. Expanding by Taylor's series and neglecting terms of second and higher order gives h ≈ -f(x_0)/f'(x_0). A better approximation than x_0 is therefore x_1 = x_0 - f(x_0)/f'(x_0). Successive approximations x_2, x_3, ..., x_{n+1} are given by x_{n+1} = x_n - f(x_n)/f'(x_n). This is known as the Newton-Raphson formula.
If f'(x) is complicated, the previous iterative method may be preferable, but Newton's method converges much more rapidly and is usually preferred. The error e_n here satisfies e_{n+1} ≈ [f''(r)/(2f'(r))] e_n^2. This is known as quadratic convergence: each error is roughly proportional to the square of the previous error, so the number of correct digits almost doubles with each iteration. The square-root iteration x_{n+1} = (x_n + Q/x_n)/2 is a special case of Newton's method corresponding to f(x) = x^2 - Q; similarly, applying Newton's method to f(x) = x^p - Q produces a pth root of Q.
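A brief Python sketch of the Newton-Raphson formula, using the square-root iteration mentioned above as the worked case (Q = 2 and the tolerances are illustrative assumptions):

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# The square-root iteration x_{n+1} = (x_n + Q/x_n)/2 is Newton's method
# applied to f(x) = x^2 - Q; here Q = 2.
Q = 2.0
r = newton(lambda x: x * x - Q, lambda x: 2.0 * x, 1.0)
```

The iterates 1.0, 1.5, 1.4167, 1.41422, ... show the doubling of correct digits that quadratic convergence implies.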

Generalized Newton's Method:
If  is a multiple root of order p of ( ) In this method ( ) fxis approximated by a second degree curve in the nbd of a root.The roots of the quadratic are then assumed to be the approximations to the roots of the equation ( ) The method is iterative, converges almost quadratically and can be used to obtain non real complex roots Let i 2 i 1 i x , x , x −− be three different approximations to a root of f(x)=0 and let i 2 i 1 i y , y , and y be the corresponding values of y=f(x).
Let ( ) ( ) ( ) be the parabola passing through the points ( ) ( ) ( ) x , y , x , y and x , y . and From equations ( 19) and (20), one may get ( ) ( ) Solutions of equations ( 21) and ( 22) give With the above values of A and B the quadratic equation ( ) ( )
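The three-point parabola step can be sketched as follows (the divided-difference form of A and B, the example equation x^2 + 1 = 0 and the starting triple are illustrative assumptions):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=100):
    """Fit a parabola through three iterates and step to its nearer root."""
    x3 = x2
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)        # curvature coefficient A
        b = a * h2 + d2                  # slope coefficient B at x2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        # take the denominator of larger magnitude: root nearer to x2
        denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x3

# Starting from three real guesses, the method still reaches a
# non-real root of x^2 + 1 = 0, thanks to the complex square root.
root = muller(lambda x: x * x + 1.0, 0.5, 1.0, 1.5)
```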

Ramanujan's Method:
Srinivasa Ramanujan described an iterative method which can be used to compute the smallest root of an equation of the form f(x) = 1 - (a_1 x + a_2 x^2 + a_3 x^3 + ...) = 0. For small values of x one can write 1/f(x) = b_1 + b_2 x + b_3 x^2 + .... Expanding the left-hand side by the binomial theorem for a negative exponent and comparing the coefficients of like powers of x, one gets b_1 = 1, b_2 = a_1 and, in general, b_{n+1} = a_1 b_n + a_2 b_{n-1} + ... + a_n b_1. The successive quotients b_n / b_{n+1} then approach the smallest root of f(x) = 0.

Graeffe's Root-Squaring Method:
This method transforms the given polynomial equation into a new polynomial Q(z) of the same degree but whose roots are the squares of the roots of the original polynomial. The process is repeated so that the roots of the successive polynomials are separated more widely. This is possible provided that the roots of the original polynomial are all real and distinct. The roots are finally computed directly from the coefficients.
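Ramanujan's recurrence for the b's can be sketched directly (the example f(x) = 1 - x - x^2, whose b's are the Fibonacci numbers, is an illustrative choice):

```python
import math

def ramanujan_smallest_root(a, terms=40):
    """For f(x) = 1 - (a[0] x + a[1] x^2 + ...), build the coefficients of
    1/f(x) = b_1 + b_2 x + ... and return b_n / b_{n+1}, which tends to
    the smallest root of f(x) = 0."""
    b = [1.0]                            # b_1 = 1
    for n in range(1, terms):
        # b_{n+1} = a_1 b_n + a_2 b_{n-1} + ... (truncated at len(a))
        b.append(sum(a[j] * b[n - 1 - j] for j in range(min(n, len(a)))))
    return b[-2] / b[-1]

# f(x) = 1 - x - x^2: the b's are 1, 1, 2, 3, 5, ... and the quotients
# approach the smallest root (sqrt(5) - 1)/2.
r = ramanujan_smallest_root([1.0, 1.0])
```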

Lin-Bairstow's Method:
This method is often used for extracting quadratic factors of polynomials. Let the polynomial be f(x) = a_0 x^n + a_1 x^{n-1} + ... + a_n, let x^2 + Rx + S be an exact quadratic factor of f(x), and let x^2 + rx + s be an approximate factor. Usually first approximations to r and s can be obtained from the last three terms of the given polynomial. Dividing f(x) by x^2 + rx + s, requiring the remainder to vanish, expanding to first order in the corrections Δr and Δs, and equating the coefficients of like powers of x leads to a pair of linear equations for Δr and Δs, in which the coefficients are partial derivatives of the remainder coefficients calculated at r and s. These equations can then be solved for Δr and Δs.
Using these values gives the next approximations r + Δr and s + Δs to R and S. The process can be repeated until successive values of R and S show no significant change.
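A compact Python sketch of the idea: Newton iteration on the two remainder coefficients of the synthetic division. The classical Lin-Bairstow scheme obtains the partial derivatives from a second synthetic division; here, as a simplifying assumption, they are approximated by finite differences. The quartic and starting values are illustrative:

```python
def quad_divide(a, u, v):
    """Synthetic division of the polynomial a (highest degree first) by
    x^2 + u*x + v; returns the two trailing b-values, which are both
    zero exactly when the quadratic divides a."""
    b_prev, b = 0.0, 0.0
    for c in a:
        b_prev, b = b, c - u * b - v * b_prev
    return b_prev, b

def lin_bairstow(a, u, v, tol=1e-10, max_iter=100, h=1e-7):
    for _ in range(max_iter):
        r1, r0 = quad_divide(a, u, v)
        if abs(r1) + abs(r0) < tol:
            break
        # finite-difference Jacobian of (r1, r0) with respect to (u, v)
        s1, s0 = quad_divide(a, u + h, v)
        t1, t0 = quad_divide(a, u, v + h)
        j11, j12 = (s1 - r1) / h, (t1 - r1) / h
        j21, j22 = (s0 - r0) / h, (t0 - r0) / h
        det = j11 * j22 - j12 * j21
        u -= (r1 * j22 - r0 * j12) / det   # Cramer's rule for the 2x2 system
        v -= (r0 * j11 - r1 * j21) / det
    return u, v

# (x - 1)(x - 2)(x - 3)(x - 4) = x^4 - 10x^3 + 35x^2 - 50x + 24;
# starting near (u, v) = (-3, 2) recovers the factor x^2 - 3x + 2.
u, v = lin_bairstow([1.0, -10.0, 35.0, -50.0, 24.0], -2.8, 1.8)
```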

Quotient-Difference Method:
Let the given cubic equation be f(x) = a_0 x^3 + a_1 x^2 + a_2 x + a_3 = 0 and let x_1, x_2 and x_3 be its roots, with |x_1| < |x_2| < |x_3|. Then 1/f(x) can be resolved into partial fractions and expanded as a power series, and quotients q and differences e are defined from the ratios and differences of successive coefficients of this series. The quotients q approach, in the limit, the reciprocal of a root of the given cubic equation.
In a similar way relations are obtained for the remaining columns of the scheme, and they can be generalized into the rhombus rules of the quotient-difference algorithm: each new quotient satisfies q_k^(n+1) = q_k^(n) + e_k^(n) - e_{k-1}^(n+1), and each new difference satisfies e_k^(n+1) = e_k^(n) q_{k+1}^(n) / q_k^(n+1). The scheme is continued until the differences become small in the neighbourhood of the roots.
SOLUTION OF SYSTEMS OF NONLINEAR EQUATIONS:
The Iteration Method: Consider a system of two equations f(x, y) = 0, g(x, y) = 0, rewritten in the form x = F(x, y), y = G(x, y). Let (x_0, y_0) be the initial approximation to a root (ξ, η) of the system. Then construct the successive approximations x_{n+1} = F(x_n, y_n), y_{n+1} = G(x_n, y_n). If (x_0, y_0) is chosen in a region R in which |∂F/∂x| + |∂F/∂y| < 1 and |∂G/∂x| + |∂G/∂y| < 1, then the sequence of approximations converges to the root x = ξ, y = η of the system.
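A small Python sketch of the simultaneous recursion (the particular contractive system and its root (1, 1) are an illustrative textbook-style example, not from the source):

```python
def iterate_system(F, G, x, y, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for the system x = F(x, y), y = G(x, y)."""
    for _ in range(max_iter):
        x_next, y_next = F(x, y), G(x, y)
        if abs(x_next - x) + abs(y_next - y) < tol:
            return x_next, y_next
        x, y = x_next, y_next
    return x, y

# The system x = (x^2 + y^2 + 8)/10, y = (x*y^2 + x + 8)/10 has the root
# (1, 1), and the partial-derivative sums stay below 1 near it.
x, y = iterate_system(lambda x, y: (x * x + y * y + 8.0) / 10.0,
                      lambda x, y: (x * y * y + x + 8.0) / 10.0,
                      0.0, 0.0)
```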

Newton-Raphson method:
Let (x_0, y_0) be an initial approximation to a root of the system f(x, y) = 0, g(x, y) = 0. Expanding f and g by Taylor's series about (x_0, y_0) and retaining only the linear terms leads to the pair of linear equations f + f_x Δx + f_y Δy = 0, g + g_x Δx + g_y Δy = 0 for the corrections Δx and Δy, the functions and derivatives being evaluated at (x_0, y_0). The new approximations are then given by x_1 = x_0 + Δx, y_1 = y_0 + Δy. The process is repeated till the roots are obtained to the desired accuracy. This method, when it converges, possesses quadratic convergence. The following theorem gives conditions which are sufficient for convergence.
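The linearize-and-correct step can be sketched in Python. As a simplifying assumption, the Jacobian entries are obtained by finite differences rather than analytic derivatives; the circle-hyperbola system is an illustrative example:

```python
def newton_system(f, g, x, y, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-Raphson for f(x, y) = 0, g(x, y) = 0."""
    for _ in range(max_iter):
        fv, gv = f(x, y), g(x, y)
        if abs(fv) + abs(gv) < tol:
            break
        fx, fy = (f(x + h, y) - fv) / h, (f(x, y + h) - fv) / h
        gx, gy = (g(x + h, y) - gv) / h, (g(x, y + h) - gv) / h
        det = fx * gy - fy * gx
        x -= (fv * gy - gv * fy) / det   # corrections Δx, Δy from the
        y -= (gv * fx - fv * gx) / det   # linearized 2x2 system
    return x, y

# Intersection of the circle x^2 + y^2 = 4 with the hyperbola xy = 1.
x, y = newton_system(lambda x, y: x * x + y * y - 4.0,
                     lambda x, y: x * y - 1.0,
                     2.0, 0.5)
```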

Theorem:
Let (x_0, y_0) be an approximation to a root (ξ, η) of the system lying in a closed neighbourhood R containing (ξ, η). If (a) f, g and all their first and second derivatives are continuous and bounded in R, and (b) the Jacobian J(f, g) = f_x g_y - f_y g_x does not vanish in R, then the sequence of approximations given by the Newton-Raphson formulae converges to the root (ξ, η).

TAYLOR POLYNOMIAL AND FINITE DIFFERENCE OPERATORS:
The Taylor polynomial is the ultimate in osculation. For a single argument x_0 the values of the polynomial and its first n derivatives are required to match those of a given function y(x):
p^(i)(x_0) = y^(i)(x_0) for i = 0, 1, 2, ..., n.
The Taylor formula settles the existence issue directly by exhibiting such a polynomial in the form
p(x) = y(x_0) + (x - x_0) y'(x_0) + (x - x_0)^2 y''(x_0)/2! + ... + (x - x_0)^n y^(n)(x_0)/n!.
The error of the Taylor polynomial, viewed as an approximation to y(x), can be expressed by the integral formula
y(x) - p(x) = (1/n!) ∫ from x_0 to x of (x - t)^n y^(n+1)(t) dt.
Lagrange's error formula may be deduced by applying a mean value theorem to the integral formula. It is
y(x) - p(x) = (x - x_0)^(n+1) y^(n+1)(ξ)/(n + 1)! for some ξ between x_0 and x,
and clearly resembles the error formulae of collocation and osculation.
If the derivatives of y(x) are bounded independently of n, then either error formula serves to estimate the degree n required to reduce |y(x) - p(x)| below a prescribed tolerance over a given interval of arguments x.
Analytic functions have the property that, for n tending to infinity, the above error of approximation has limit zero for all arguments x in a given interval. Such functions are then represented by the Taylor series
y(x) = Σ_{k=0}^{∞} (x - x_0)^k y^(k)(x_0)/k!.
The binomial series is an especially important case of the Taylor series: for -1 < x < 1 one has
(1 + x)^a = 1 + ax + a(a - 1)x^2/2! + ....
The Euler transformation is another useful relationship between two infinite series, often converting a slowly convergent alternating series into a rapidly convergent one. The Bernoulli numbers occur in various operator equations; for example, the indefinite summation operator Δ^(-1) is related to the differentiation operator D by a series whose coefficients involve these numbers.
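A short Python check of the Taylor polynomial and the Lagrange bound for the analytic function e^x (the choice of function, degree and expansion point is illustrative):

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of e^x about 0, together with the
    Lagrange bound |R_n| <= e^|x| * |x|^(n+1) / (n+1)!."""
    p = sum(x**k / math.factorial(k) for k in range(n + 1))
    bound = math.exp(abs(x)) * abs(x)**(n + 1) / math.factorial(n + 1)
    return p, bound

p, bound = taylor_exp(1.0, 8)
actual = abs(math.exp(1.0) - p)
```

As the theory requires, the actual error never exceeds the Lagrange bound, and both shrink factorially as the degree n grows.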

LEAST-SQUARES POLYNOMIAL APPROXIMATION:
The Least-Squares Principle: The basic idea of choosing a polynomial approximation p(x) to a given function y(x) in a way which minimizes the squares of the errors (in some sense) was developed by Gauss. There are several variations, depending on the set of arguments involved and the error measure to be used. First of all, when the data are discrete one may minimize the sum S = Σ [y(x_i) - p(x_i)]^2. In order to provide a unifying treatment of the various least-squares methods to be presented, including the one just described, a general problem of minimization in a vector space is considered. The solution is easily found by an algebraic argument using the idea of orthogonal projection. Naturally, the general problem produces p(x) and the normal equations. It will be reinterpreted to solve other variations of the least-squares principle as one proceeds. In most cases a separate argument for the special case in hand will also be provided.
Except for very low degree polynomials, the above system of normal equations proves to be ill-conditioned. This means that, although it does define the coefficients a_j uniquely, in practice it may prove impossible to extract these a_j accurately: standard methods for solving linear systems may either produce no solution at all or else badly magnify data errors. As a result, orthogonal polynomials are introduced. (This amounts to choosing an orthogonal basis for the abstract vector space.) For the case of discrete data, these orthogonal polynomials will be obtained in a form in which binomial coefficients and factorial polynomials are prominent.
An alternative form of the least-squares polynomial now becomes convenient, namely an expansion in these orthogonal polynomials with new coefficients a_k. The equations determining these a_k prove to be extremely easy to solve. For the five-point least-squares parabola the resulting smoothing formula is
y(x_k) ≈ (-3y_{k-2} + 12y_{k-1} + 17y_k + 12y_{k+1} - 3y_{k+2})/35,
which blends together the five values y_{k-2}, ..., y_{k+2} to provide a new estimate of the unknown exact value y(x_k). Near the ends of a finite data supply, minor modifications are required.
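The five-point smoothing formula can be sketched directly in Python (leaving the two values at each end unchanged is a simplifying assumption in place of the text's "minor modifications"):

```python
def smooth5(y):
    """Five-point least-squares parabola smoothing with the weights
    (-3, 12, 17, 12, -3)/35; the two values at each end are copied through."""
    s = list(y)
    for k in range(2, len(y) - 2):
        s[k] = (-3 * y[k - 2] + 12 * y[k - 1] + 17 * y[k]
                + 12 * y[k + 1] - 3 * y[k + 2]) / 35.0
    return s

# A least-squares parabola reproduces quadratic data exactly, so smoothing
# y = x^2 leaves the interior values unchanged.
data = [float(k * k) for k in range(7)]
smoothed = smooth5(data)
```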
The root-mean-square error of a set of approximations A_i to corresponding true values T_i is defined as the square root of the mean of the squared deviations, RMS = sqrt[(1/N) Σ (A_i - T_i)^2]. In various test cases where the T_i are known, this measure is used to estimate the effectiveness of least-squares smoothing.

(ii) Approximate differentiation:
Fitting a collocation polynomial to irregular data leads to very poor estimates of derivatives: even small errors in the data are magnified to troublesome size. But a least-squares polynomial does not collocate; it passes between the data values and provides smoothing. This smoother function usually gives better estimates of derivatives, namely the values of p'(x). The five-point parabola just mentioned leads to the formula
y'(x_k) ≈ (-2y_{k-2} - y_{k-1} + y_{k+1} + 2y_{k+2})/(10h).
Near the ends of a finite data supply this also requires modification. This formula usually produces results much superior to those obtained by differentiating collocation polynomials. However, reapplying it to the p'(x_k) values in an effort to estimate y''(x) again leads to questionable accuracy.
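A minimal sketch of the five-point derivative formula (the quadratic test data with unit spacing are an illustrative choice):

```python
def deriv5(y, h):
    """Five-point least-squares derivative estimates for interior points:
    y'(x_k) ~ (-2y_{k-2} - y_{k-1} + y_{k+1} + 2y_{k+2}) / (10h)."""
    return [(-2 * y[k - 2] - y[k - 1] + y[k + 1] + 2 * y[k + 2]) / (10.0 * h)
            for k in range(2, len(y) - 2)]

# For y = x^2 with h = 1 the estimates equal the exact derivative 2x.
d = deriv5([float(k * k) for k in range(7)], 1.0)
```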

Continuous Data:
For continuous data y(x) one may minimize the integral I = ∫ [y(x) - p(x)]^2 dx, the minimizing polynomial being conveniently expressed in terms of the Legendre polynomials P_j(x). This means that one may choose to represent the least-squares polynomial p(x) from the start in terms of orthogonal polynomials, in the form p(x) = Σ a_k P_k(x). The coefficients prove to be a_k = [(2k + 1)/2] ∫ from -1 to 1 of y(x) P_k(x) dx. For convenience in using the Legendre polynomials, the interval over which the data y(x) are given is first normalized to (-1, 1). Occasionally it is more convenient to use the interval (0, 1); in this case the Legendre polynomials must also be subjected to a change of argument, and the new polynomials are called shifted Legendre polynomials. Some type of discretization is usually necessary when y(x) is of complicated structure: either the integrals which give the coefficients must be computed by approximation methods, or the continuous argument set must be discretized at the outset and a sum minimized rather than an integral. Plainly there are several alternative approaches, and one must decide which to use for a particular problem. Smoothing and approximate differentiation of the given continuous data function y(x) are again the foremost applications of the least-squares polynomial p(x): one simply accepts p(x) and p'(x) as substitutes for the more irregular y(x) and y'(x).
A generalization of the least-squares principle involves minimizing a weighted integral I = ∫ w(x)[y(x) - p(x)]^2 dx, where w(x) is a nonnegative weight function. Expressing p(x) in terms of the corresponding orthogonal polynomials leads to Bessel's inequality. The important special case w(x) = 1/sqrt(1 - x^2) on (-1, 1) leads to the Chebyshev polynomials T_n(x) = cos(n arccos x), whose zeros are x_i = cos[(2i + 1)π/(2n)] for i = 0, 1, 2, ..., n - 1. An especially attractive property is the equal-error property, which refers to the oscillation of the Chebyshev polynomials between the extreme values ±1, these extremes being reached at n + 1 arguments in the interval (-1, 1). As a consequence of this property the error y(x) - p(x) is frequently found to oscillate between maxima and minima of approximately ±E. Such an almost-equal error is desirable, since it implies that the approximation has almost uniform accuracy across the entire interval.
The powers of x may be expressed in terms of Chebyshev polynomials by simple manipulations; for example, x^2 = (T_0 + T_2)/2 and x^3 = (3T_1 + T_3)/4. This has suggested a process known as economization of polynomials, by which each power of x in a polynomial is replaced by the corresponding combination of Chebyshev polynomials. It is often found that a number of the higher-degree Chebyshev polynomials may then be dropped, the retained terms constituting a least-squares approximation to the original polynomial of sufficient accuracy for many purposes. The result obtained has the almost-equal-errors property. This process of economization may be used as an approximate substitute for direct evaluation of the coefficient integrals of an approximation by Chebyshev polynomials, since the unpleasant weight factor w(x) makes these integrals formidable for most y(x).
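A small Python illustration of economization using the identity x^3 = (3T_1 + T_3)/4 quoted above (the grid and the choice of x^3 as the polynomial to economize are illustrative):

```python
def cheb_eval(c, x):
    """Evaluate sum_k c[k] * T_k(x) via T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    s = c[0]
    t_prev, t = 1.0, x
    if len(c) > 1:
        s += c[1] * t
    for k in range(2, len(c)):
        t_prev, t = t, 2.0 * x * t - t_prev
        s += c[k] * t
    return s

# Dropping the T_3 term economizes x^3 to 3x/4; the discarded term
# T_3(x)/4 oscillates between +1/4 and -1/4 on [-1, 1], so the error
# of the economized polynomial has the almost-equal-errors property.
grid = [i / 100.0 - 1.0 for i in range(201)]
max_err = max(abs(x**3 - 0.75 * x) for x in grid)
```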
Another variation of the least-squares principle is to minimize the sum of squared errors taken over a discrete set of Chebyshev arguments; the coefficients again prove easy to compute, and the approximating polynomial is then, of course, a combination of Chebyshev polynomials with an almost-equal-error behaviour. The underlying theme of the above discussion is the minimization of the norm ||y - p||_2, where y represents the given data and p the approximating polynomial.

ROOTS OF EQUATIONS:
The problem treated here is the ancient problem of finding roots of equations or of systems of equations. The long list of available methods reflects the long history of this problem and its continuing importance. Which method to use depends upon whether one needs all the roots of a particular equation or only a few, whether the roots are real or complex, simple or multiple, whether one has a ready first approximation or not, and so on.

(i) Interpolation Methods:
These methods use two or more approximations, usually some too small and some too large, to obtain improved approximations to a root by use of collocation polynomials. The most ancient of these is based on linear interpolation between two previous approximations; it is called regula falsi and has been described above. Its rate of convergence lies between those of the bisection and Newton methods. A method based on quadratic interpolation between three successive approximations x_0, x_1, x_2 uses the collocation parabola through those points, as in Muller's method described earlier.

Descent Methods: For a two-dimensional system f(x, y) = 0, g(x, y) = 0, write S = f^2 + g^2; then S_x and S_y are the components of the gradient vector of S at (x_0, y_0). Progress is made in the direction of steepest descent, (x_1, y_1) = (x_0, y_0) - t (S_x, S_y), and the algorithm is known as the steepest descent algorithm. The number t may be chosen to minimize S in this direction, though alternatives have been proposed. Similar steps then follow. The method is often used to provide initial approximations to the Newton method. The above equivalence between root-finding and minimization is of course often exploited in the opposite way: to optimize a function f(x_1, x_2, ..., x_n) one looks for places where the gradient of f is zero.
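The descent step can be sketched in Python. As simplifying assumptions, a small fixed step t is used instead of minimizing S along the line, and the gradient of S is formed by finite differences; the linear test system is illustrative:

```python
def steepest_descent(f, g, x, y, steps=3000, t=0.05, h=1e-6):
    """Descend the gradient of S = f^2 + g^2 with a fixed step size t."""
    def S(x, y):
        return f(x, y) ** 2 + g(x, y) ** 2
    for _ in range(steps):
        s = S(x, y)
        sx = (S(x + h, y) - s) / h      # components of the gradient of S
        sy = (S(x, y + h) - s) / h
        x -= t * sx
        y -= t * sy
    return x, y

# Rough starting values for Newton's method on x + y = 3, x - y = 1,
# whose root is (2, 1).
x, y = steepest_descent(lambda x, y: x + y - 3.0,
                        lambda x, y: x - y - 1.0,
                        0.0, 0.0)
```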

That is, one requires f_i = 0 for i = 1, 2, ..., n, where f_i denotes the partial derivative of f with respect to x_i. The optimization is then attempted through the solution of this system of n nonlinear equations.
(iv) Bairstow's Method: This method produces complex roots of a real polynomial equation p(x) = 0 by applying Newton's method to a related system. Division of p(x) by a quadratic x^2 + ux + v gives p(x) = (x^2 + ux + v) q(x) + remainder, and the quadratic divisor will be a factor of p(x) if one can choose u and v so that both remainder coefficients vanish; any complex conjugate pair of roots is then obtained from the quadratic factor by the quadratic formula.

Conclusion
Nonlinear modelling can be utilized in situations where the phenomena are not well understood or cannot easily be expressed in mathematical terms. Thus nonlinear modelling can be an efficient way to model new and complex situations in which the relationships among the variables are not known. In the above discussion the main concepts in nonlinear modelling, namely the Taylor polynomial, finite difference operators, least-squares polynomial approximation and the roots of equations, have been reviewed.


STATEMENTS OF THE PRELIMINARY THEOREMS AND SUPPLEMENTARY REMARKS:

Intermediate Value Theorem: (i) If f(x) is continuous in [a, b] and f(a) and f(b) are of opposite signs, then f(ξ) = 0 for at least one number ξ such that a < ξ < b. (ii) Let f(x) be continuous in [a, b] and let k be any number between f(a) and f(b). Then there exists at least one number ξ in (a, b) such that f(ξ) = k.

Rolle's Theorem: If (i) f(x) is continuous in a ≤ x ≤ b, (ii) f'(x) exists in a < x < b and (iii) f(a) = f(b), then there exists at least one value ξ of x in (a, b) such that f'(ξ) = 0.

Generalized Rolle's Theorem: Let f(x) be a function which is n times differentiable on [a, b]. If f(x) vanishes at (n + 1) distinct points x_0, x_1, ..., x_n in (a, b), then there exists a number ξ in (a, b) such that f^(n)(ξ) = 0.

Taylor's Theorem for a function of one variable: If f(x) is continuous and possesses continuous derivatives of order n in an interval that includes x = a, then for every x in that interval
f(x) = f(a) + (x - a) f'(a) + (x - a)^2 f''(a)/2! + ... + (x - a)^(n-1) f^(n-1)(a)/(n - 1)! + R_n(x),
where the remainder term can be expressed in the form R_n(x) = (x - a)^n f^(n)(ξ)/n! for some ξ between a and x.

Taylor's series for a function of two variables:
f(x + h, y + k) = f(x, y) + (h ∂/∂x + k ∂/∂y) f(x, y) + (1/2!)(h ∂/∂x + k ∂/∂y)^2 f(x, y) + ....

Bisection method (continued): at each step the length of the interval containing the root is reduced by a factor of one-half, so that after n steps the interval has length (b - a)/2^n. Let this latest interval be [a_n, b_n]; the process is repeated until its length is as small as desired, say ε.

Regula falsi (continued): f(x_0) and f(x_1) are of opposite signs and f is continuous on [x_0, x_1]. Since the graph of y = f(x) crosses the x-axis between these two points, a root must lie between them, and the chord joining the two points yields the next approximation.

Graeffe's method (continued): Graeffe's method consists in transforming a polynomial f(x) of degree n into another polynomial of degree n whose roots are the squares of the roots of f(x).

Convergence of the iteration method for systems (continued): the required conditions are that, in a closed neighbourhood R containing the root (ξ, η), (a) f, g and all their first and second derivatives are continuous and bounded, and (b) the initial approximation (x_0, y_0) lies in R.

Operator relations (continued): the operators D and Δ may be expressed in terms of one another through infinite-series relations such as hD = log(1 + Δ). The operator D^(-1) is the familiar indefinite integral operator, and the Euler-Maclaurin formula may be deduced from the relationship between Δ^(-1) and D^(-1); it is often used for the evaluation of either sums or integrals. The powers of D may also be expressed in terms of the central difference operator δ by using Taylor series.

Normal equations (continued): for discrete data (x_i, y_i), i = 1, ..., N, and a polynomial of degree m with m < N, the condition m < N makes it unlikely that the polynomial passes through all N data points, so S probably cannot be made zero. The idea of Gauss is to make S as small as one can. Standard techniques of calculus then lead to the normal equations
s_0 a_0 + s_1 a_1 + ... + s_m a_m = t_0, ..., s_m a_0 + s_{m+1} a_1 + ... + s_{2m} a_m = t_m,
where s_j = Σ x_i^j and t_j = Σ x_i^j y_i. This system of linear equations does determine the a_j uniquely, and the resulting a_j do actually produce the minimum possible value of S. For the case of a linear polynomial p(x) = Mx + B the normal equations are easily solved. There are two major applications of least-squares polynomials for discrete data: (i) data smoothing, in which one accepts the polynomial in place of the given y(x), obtaining a smooth line, parabola or other curve in place of the original, probably irregular, data function (what degree p(x) should have depends on the circumstances; frequently the five-point least-squares parabola is used, corresponding to the points (x_i, y_i) with i = k - 2, k - 1, ..., k + 2), and (ii) approximate differentiation, as discussed above.

Weighted least squares (continued): w(x) is a nonnegative weight function, and the Q_k(x) are orthogonal polynomials in the generalized sense ∫ w(x) Q_j(x) Q_k(x) dx = 0 for j ≠ k. The details parallel those for the case w(x) = 1 already mentioned. If the orthogonal family involved has a property known as completeness, and if y(x) is sufficiently smooth, then the series actually converges to y(x), so that the error of approximation tends to zero as the degree of p(x) is increased. Approximation using Chebyshev polynomials is the important special case of this method, the interval of integration being normalized to (-1, 1); the coefficients are easily determined using a second orthogonality property of the Chebyshev polynomials.

Dominant roots and descent methods (continued): whether a dominant root exists can be tested by computing a solution sequence of the associated difference equation; if a complex conjugate pair of roots is dominant, the solution sequence is still computed but modified formulas are used. Descent methods are based upon the idea that the system F = 0, or f_i = 0 for i = 1, 2, ..., n, is solved whenever the function S = f_1^2 + f_2^2 + ... + f_n^2 is minimized, since the minimum clearly occurs when all the f_i are zero. Direct methods for seeking this minimum, or descent methods, have been developed, for example for the two-dimensional problem discussed above.

Bairstow's method (continued): the equation p(x) = 0 is solved by applying the Newton method to a related system; more specifically, division of p(x) by a quadratic polynomial suggests the identity p(x) = (x^2 + ux + v) q(x) + remainder, from which the corrections to u and v are obtained.