Ch10 - Curve Fitting


CURVE FITTING

Chapter 10
Curve Fitting
Introduction
What is curve fitting?
General approaches to curve fitting:
1. Least-squares regression: derive a single curve that describes the general
trend of the data
(used when the data exhibit a significant degree of error)
2. Interpolation: fit a curve, or a series of curves, that passes directly
through each of the points
(used when the data are precise)
Curve Fitting
Introduction
- A noncomputer method for curve fitting is to sketch a curve by eye, but the
result depends on the subjective viewpoint of the person sketching the curve.
Three common by-eye approaches:

- A single straight line that captures the general upward trend of the data.
- Linear interpolation: connecting the points with straight-line segments
(common practice in engineering), although significant errors can be introduced.
- Smooth curves that attempt to capture the meanderings suggested by the data.
Curve Fitting
Applications of curve fitting
1. Trend analysis
The process of using the pattern of the data to make predictions.
- High-precision data: interpolating polynomials
- Imprecise data: least-squares regression
Curve Fitting
Applications of curve fitting
2. Hypothesis testing
An existing mathematical model is compared with measured data.
If the model coefficients are unknown, it may be necessary to determine
the values that best fit the observed data.
If estimates of the model coefficients are already available, it may be
appropriate to compare the predicted values of the model with the
observed data.
Curve Fitting
Other applications of curve fitting:

• Integration
• Approximate solution of differential equations
• Derive simple functions to approximate complicated
functions
Curve Fitting
Recall: Pearson Correlation Coefficient
Linear regression
• In correlation, the two variables are treated as equals. In regression,
one variable is considered the independent (predictor) variable (X) and
the other the dependent (outcome) variable (Y).
• The output of a regression is a function that predicts the dependent
variable based upon values of the independent variables.

• Simple regression fits a straight line to the data.


What is “Linear”?
• Remember this: Y = α + βX

Here α is the intercept and β is the slope. A slope of β means that every
1-unit change in X yields a β-unit change in Y.
Prediction
If you know something about X, this knowledge helps you
predict something about Y. (Sound familiar?…sound like
conditional probabilities?)
Regression equation…

Expected value of y at a given level of x:

E(yi | xi) = α + β·xi

Predicted value for an individual:

yi = α + β·xi + random error εi

The term α + β·xi is fixed (exactly on the line); the random error εi
follows a normal distribution.
Examples
Number of Friends vs Daily Minutes Online

What does best fit mean?


Best fit line
Y = 0.9039 X + 22.95
The standard error of Y given X is the average variability around the
regression line at any given value of X. It is assumed to be equal at all
values of X.

This variability is denoted Sy/x.
Simple linear regression

Observations of the dependent variable (y) are plotted against the
independent variable (x); the fitted function makes a prediction for each
observed data point. The observation is denoted by y and the prediction
is denoted by ŷ.
Regression Error

For each observation, the prediction error ε is the difference between
the observation y and the prediction ŷ, so the variation can be described as:

y = ŷ + ε
Actual = Predicted + Error
Sum of squares of error (SSE)

A least squares regression selects the line with the lowest total sum of squared
prediction errors.
This value is called the Sum of Squares of Error, or SSE.
Sum of squares of regression (SSR)

The Sum of Squares Regression (SSR) is the sum of the squared differences
between the prediction for each observation and the mean of y (ȳ).
SST, SSR and SSE

The Total Sum of Squares (SST) is equal to SSR + SSE.

Mathematically,

SSR = Σ ( ŷ − ȳ )²  (measure of explained variation)

SSE = Σ ( y − ŷ )²  (measure of unexplained variation)

SST = SSR + SSE = Σ ( y − ȳ )²  (measure of total variation in y)
Least squares regression

The fitted line is ŷi = b·xi + a. For each observation, A is the distance from
yi to the mean ȳ, B is the distance from ŷi to ȳ, and C is the distance from
yi to ŷi.

Least squares estimation gives us the parameters (a, b) that minimize C², the SSE.

Σ (yi − ȳ)² = Σ (ŷi − ȳ)² + Σ (ŷi − yi)²
     A²           B²            C²

A² (SST): total variation in y — total squared distance of the observations
from the naïve mean of y.
B² (SSR): variation explained by x — distance from the regression line to the
naïve mean of y.
C² (SSE): unexplained variance — variance around the regression line.

The equality holds when the least squares solution is found.
The Coefficient of Determination (aka R-squared)

The proportion of total variation (SST) that is explained by the regression (SSR) is
known as the Coefficient of Determination, and is often referred to as R².

R² = B²/A² = SSR/SST = 1 − SSE/SST

The value of R² can range between 0 and 1, and the higher its value, the more
accurate the regression model is. It is often expressed as a percentage.
Solutions for the least-squares fit
• In general, the least-squares problem can be solved with optimization algorithms,
e.g. gradient descent.
• In simple linear regression (a single predictor), the solution can be calculated
directly in closed form (see the sketch below; the formulas are derived in the
next slides).
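As an illustration of the optimization route, here is a minimal gradient-descent sketch (not from the slides) for fitting a straight line y ≈ a0 + a1·x by minimizing the sum of squared errors; the function name, step size and iteration count are assumptions for this example.

def fit_line_gd(x, y, lr=0.01, iters=5000):
    n = len(x)
    a0, a1 = 0.0, 0.0
    for _ in range(iters):
        # residuals e_i = y_i - a0 - a1*x_i
        e = [yi - a0 - a1 * xi for xi, yi in zip(x, y)]
        # gradient of Sr = sum(e_i**2) with respect to a0 and a1
        g0 = -2 * sum(e)
        g1 = -2 * sum(ei * xi for ei, xi in zip(e, x))
        a0 -= lr * g0 / n
        a1 -= lr * g1 / n
    return a0, a1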
Least Square Regression
Linear Regression
• The simplest example of a least-squares approximation is fitting a
straight line to a set of paired observations: (x1, y1), (x2, y2),…,(xn, yn)
Mathematical expression of a straight line:
y=a0+a1x+e
a1- slope
a0- intercept
e- error, or residual, between the model and the observations
Least Square Regression
Linear Regression
• Fitting a straight line to a set of paired observations: (x1, y1), (x2, y2),…,
(xn, yn).
y=a0+a1x+e
a1- slope
a0- intercept
e- error, or residual, between the model and the observations
e= y-a0-a1x
Least Square Regression
The error, or residual, is the discrepancy between the true value of y and the
approximate value, a0+a1x, predicted by the linear equation
Normal equations
Least-Squares Fit of a Straight Line

To find the values of the constants, differentiate Sr with respect to each
coefficient and set the result to zero:

∂Sr/∂a0 = −2 Σ (yi − a0 − a1 xi) = 0
∂Sr/∂a1 = −2 Σ (yi − a0 − a1 xi) xi = 0

which gives

0 = Σ yi − Σ a0 − Σ a1 xi
0 = Σ xi yi − Σ a0 xi − Σ a1 xi²

Since Σ a0 = n a0, these can be rearranged into the normal equations, which can
be solved simultaneously:

n a0 + (Σ xi) a1 = Σ yi
(Σ xi) a0 + (Σ xi²) a1 = Σ xi yi

a1 = [n Σ xi yi − Σ xi Σ yi] / [n Σ xi² − (Σ xi)²]
a0 = ȳ − a1 x̄          (x̄, ȳ are the mean values of x and y)
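A minimal sketch (not part of the slides) of how these formulas translate into code; the function name is an assumption for illustration.

def fit_line(x, y):
    # least-squares fit of y = a0 + a1*x via the normal equations
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * sx / n   # a0 = y_mean - a1 * x_mean
    return a0, a1

For the worked example below, fit_line([1, 2, 3, 4, 5, 6, 7], [0.5, 2.5, 2, 4, 3.5, 6, 5.5]) returns a0 ≈ 0.0714 and a1 ≈ 0.8393.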
“Goodness” of our fit

• The total sum of the squares around the mean for the dependent variable y is
St = Σ (yi − ȳ)².
• The sum of the squares of the residuals around the regression line is Sr.
• St − Sr quantifies the improvement, or error reduction, due to describing the
data in terms of a straight line rather than as an average value.
• Because the magnitude of this quantity is scale-dependent, the difference is
normalized to St to yield

r² = (St − Sr) / St       r² – coefficient of determination; r – correlation coefficient

• For a perfect fit, Sr = 0 and r = r² = 1, signifying that the line explains 100
percent of the variability of the data.
• For r = r² = 0, Sr = St and the fit represents no improvement.

An alternative formulation for r that is more convenient for computer
implementation is

r = [n Σ xi yi − (Σ xi)(Σ yi)] / [ √(n Σ xi² − (Σ xi)²) · √(n Σ yi² − (Σ yi)²) ]
Example
• Fit a straight line to the following values of x and y.

X y
1 0.5
2 2.5
3 2
4 4
5 3.5
6 6
7 5.5
Solution

a1 = [n Σ xi yi − Σ xi Σ yi] / [n Σ xi² − (Σ xi)²]
a0 = ȳ − a1 x̄

x      y      x²     x·y
1      0.5    1      0.5
2      2.5    4      5
3      2      9      6
4      4      16     16
5      3.5    25     17.5
6      6      36     36
7      5.5    49     38.5
Σ: 28  24     140    119.5
x̄ = 4, ȳ = 3.428

a1 = [7(119.5) − 28(24)] / [7(140) − (28)²] = 0.8392

a0 = 3.428 − 0.8392(4) = 0.071428
Calculate Sr
• Calculate Sr for the previous example
x      y      (yi − 0.0714 − 0.8392·xi)²
1      0.5    0.168686
2      2.5    0.5625
3      2      0.347258
4      4      0.326531
5      3.5    0.589605
6      6      0.797194
7      5.5    0.199298
Sr =          2.991071
Calculate St
x      y      (y − ȳ)²
1      0.5    8.576531
2      2.5    0.862245
3      2      2.040816
4      4      0.326531
5      3.5    0.005102
6      6      6.612245
7      5.5    4.290816
ȳ = 3.428,  St = 22.71429
Calculate r² (coefficient of determination)

r² = (St − Sr) / St = (22.71429 − 2.991071) / 22.71429 = 0.868
Calculate the standard error of the estimate

Sy/x = √( Sr / (n − 2) )

This is the standard deviation for the regression line, called the “standard error
of the estimate.” The subscript notation “y/x” designates that the error is for a
predicted value of y corresponding to a particular value of x.

Sy/x = √( 2.9911 / (7 − 2) ) = 0.7735
Example
Use least-squares regression to fit a straight line to:

X 6 15 23 30 39
Y 29 14 7 13 3
Solution

X       6     15    23    30    39     Σ = 113
Y       29    14    7     13    3      Σ = 66
X·Y     174   210   161   390   117    Σ = 1052
X²      36    225   529   900   1521   Σ = 3211

a1 = [5(1052) − 113(66)] / [5(3211) − (113)²] = −0.6689

a0 = (66/5) − (−0.6689)(113/5) = 28.3171


Linearization of Nonlinear Relationships
Linear regression is a very powerful technique for fitting a best line to data,
but it is not applicable all the time.
Linearization of Nonlinear Relationships
• Transformations can be used to express the data in a form that is
compatible with linear regression.
• Suppose the relationship between x and y follows the exponential model

y = a1 e^(b1·x)

(this model is very common in many engineering applications to characterize quantities that increase or
decrease at a rate that is directly proportional to their own magnitude, for example population growth)

• It can be linearized by taking the ln of both sides:

ln y = ln a1 + b1·x
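A minimal sketch (not from the slides) of this transform-then-fit idea, reusing the fit_line() function sketched earlier; the function name is an assumption.

import math

def fit_exponential(x, y):
    # fit y = a1 * exp(b1 * x) by a straight-line fit on (x, ln y)
    lny = [math.log(yi) for yi in y]
    a0, b1 = fit_line(x, lny)    # ln y = ln a1 + b1 * x
    a1 = math.exp(a0)            # undo the log on the intercept
    return a1, b1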
Linearization of Nonlinear Relationships
• Another example of a nonlinear model is the simple power equation

y = a2 x^(b2)

• It can be transformed into the linear form by taking the log of both sides:

log y = log a2 + b2 log x


Linearization of Nonlinear Relationships
• Another example of a nonlinear model is the saturation-growth-rate equation

y = a3 x / (b3 + x)

• It can be linearized by inverting both sides:

1/y = (1/a3) (b3 + x)/x = (1/a3) (1 + b3/x)

1/y = (b3/a3)(1/x) + 1/a3
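A minimal sketch (not from the slides) for this model, again reusing fit_line(); the function name is an assumption.

def fit_saturation_growth(x, y):
    # fit y = a3*x/(b3 + x) by a straight-line fit between 1/y and 1/x
    inv_x = [1.0 / xi for xi in x]
    inv_y = [1.0 / yi for yi in y]
    intercept, slope = fit_line(inv_x, inv_y)   # 1/y = (b3/a3)(1/x) + 1/a3
    a3 = 1.0 / intercept
    b3 = slope * a3
    return a3, b3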
Example
Fit the following data with
a) a saturation growth model
b) a power equation
c) a parabola

x    0.75   2      3    4     6     8     8.5
y    1.2    1.95   2    2.4   2.4   2.7   2.6
a) Saturation growth model

• After linearization: 1/y = (b3/a3)(1/x) + 1/a3
• So, we need to fit a straight line between 1/y and 1/x
a) Saturation growth model

1/x          1.333333   0.5        0.333333   0.25       0.166667   0.125      0.117647   Σ = 2.82598
1/y          0.833333   0.512821   0.5        0.416667   0.416667   0.37037    0.384615   Σ = 3.434473
(1/x)²       1.777778   0.25       0.111111   0.0625     0.027778   0.015625   0.013841   Σ = 2.258632
(1/x)(1/y)   1.111111   0.25641    0.166667   0.104167   0.069444   0.046296   0.045249   Σ = 1.799344

a1 = [n Σ xi yi − Σ xi Σ yi] / [n Σ xi² − (Σ xi)²]
a0 = ȳ − a1 x̄        (with xi = 1/x and yi = 1/y)
a) Saturation growth model

Slope: a1 = 0.369
Intercept: a0 = 3.4344/7 − 0.369·(2.825/7) = 0.341

Since the intercept equals 1/a3, a3 = 1/0.341 = 2.932,
and since the slope equals b3/a3, b3 = 0.369 · 2.932 = 1.0819.

So the fitted model is y = 2.932 x / (1.0819 + x).
b) Power model

• After linearization it becomes log y = log a2 + b2 log x
• So, we need to fit a straight line between log(y) and log(x)
n xi yi   xi  yi
a1 
b) Power model n x   xi 
2 2
i

a0  y  a1 x
logx -0.12493874 0.30103 0.477121 0.60206 0.778151 0.90309 0.929419 3.865933

logy 0.079181246 0.290035 0.30103 0.380211 0.380211 0.431364 0.414973 2.277005

(logx)2 0.015609688 0.090619 0.227645 0.362476 0.605519 0.815572 0.86382 2.98126

(logx)*(logy) -0.0098928 0.087309 0.143628 0.22891 0.295862 0.38956 0.385684 1.52106

0.311422=β

0.153296

α=100.153296
c) Parabola

• So, we need to fit a straight line between y and x²
c) Parabola

x²        0.5625     4      9     16     36     64      72.25      Σ = 201.8125
y         1.2        1.95   2     2.4    2.4    2.7     2.6        Σ = 15.25
(x²)²     0.316406   16     81    256    1296   4096    5220.063   Σ = 10965.38
x²·y      0.675      7.8    18    38.4   86.4   172.8   187.85     Σ = 511.925

a1 = [n Σ xi yi − Σ xi Σ yi] / [n Σ xi² − (Σ xi)²]
a0 = ȳ − a1 x̄        (with xi = x²)
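A minimal sketch (not from the slides) for this one-term parabola, again reusing fit_line(); the function name is an assumption.

def fit_parabola_one_term(x, y):
    # fit y = a0 + a1*x**2 by a straight-line fit between y and x**2
    x2 = [xi ** 2 for xi in x]
    a0, a1 = fit_line(x2, y)
    return a0, a1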
Comparison Between different Models
Example
Pollution Control: The percentage of new plant expenditures by US
public utility companies on pollution control is as shown.

Year (x) Percentage (y)


2 8.4
4 7.9
6 7.1
8 6.3
10 5.5
Example
It’s known that the data can be modeled by the following model:

x = e^((y − b)/a)

• Fit the model and predict the percentage after 12 years.
• What is the value of the standard error of the estimate?
Solution
After linearization the model becomes
ln x = (y − b)/a
Since we need to predict the value of y:
y = a·ln x + b
Solution
n (ln xi ) yi   ln( xi ) yi
a1 
n (ln xi ) 2   ln xi 
2

a0  y  a1 ln x

x y lnx (Lnx)2 ylnx (y-b-alnx)2


2 8.4 0.693147 0.480453 5.822436 0.10541
4 7.9 1.386294 1.921812 10.95173 0.155936
6 7.1 1.791759 3.210402 12.72149 0.095038
8 6.3 2.079442 4.324077 13.10048 0.000209
10 5.5 2.302585 5.301898 12.66422 0.154407
30 35.2 8.253228 15.23864 55.26035 0.511

a=-1.75945
b=9.944
y=-1.75945*lnx+ 9.944  y(12) = 5.57
Solution:
standard error of the estimate

Sy/x = √( Sr / (n − 2) ) = √( 0.511 / 3 ) = 0.412714
Polynomial Regression
•We need to fit a polynomial to data using polynomial regression.
• A second-order polynomial, or quadratic, fit is
  y = a0 + a1 x + a2 x² + e
• The sum of the squares of the residuals:
  Sr = Σ (yi − a0 − a1 xi − a2 xi²)²     (sum over i = 1, …, n)
• Differentiate Sr with respect to each parameter, set the partials to zero and rearrange:
  n a0       + (Σ xi) a1   + (Σ xi²) a2 = Σ yi
  (Σ xi) a0  + (Σ xi²) a1  + (Σ xi³) a2 = Σ xi yi
  (Σ xi²) a0 + (Σ xi³) a1  + (Σ xi⁴) a2 = Σ xi² yi
• These equations are called the normal equations.
• They form a system of linear equations with 3 equations and 3 unknowns.
• In general, an mth-order polynomial requires solving a system of m+1
linear equations.
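A minimal sketch (not from the slides) that builds and solves those normal equations for an m-th order polynomial; the function name is an assumption.

import numpy as np

def fit_polynomial(x, y, m):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.vander(x, m + 1, increasing=True)   # columns: 1, x, x**2, ..., x**m
    # normal equations: (A^T A) a = A^T y
    return np.linalg.solve(A.T @ A, A.T @ y)   # [a0, a1, ..., am]

For the example that follows, fit_polynomial(x, y, 2) reproduces the coefficients a0 ≈ 0.9907, a1 ≈ 0.4499, a2 ≈ −0.0307.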
Example
• Fit a second order polynomial to the following data

X 0.75 2 3 4 6 8 8.5
y 1.2 1.95 2 2.4 2.4 2.7 2.6

• Then predict the value of y at x=5


Solution
7 a0 + 32.25 a1 + 201.8125 a2 = 15.25
32.25 a0 + 201.8125 a1 + 1441.547 a2 = 78.5
201.8125 a0 + 1441.547 a1 + 10965.38 a2 = 511.925

In matrix form:

[ 7          32.25       201.8125 ] [a0]   [ 15.25   ]
[ 32.25      201.8125    1441.547 ] [a1] = [ 78.5    ]
[ 201.8125   1441.547    10965.38 ] [a2]   [ 511.925 ]

Solving the system gives

a0 = 0.990728
a1 = 0.449901
a2 = −0.03069

y = −0.0307x² + 0.4499x + 0.9907

Predicted value: y(5) = −0.0307(25) + 0.4499(5) + 0.9907 ≈ 2.47
Solution using Excel:

f(x) = −0.0306938 x² + 0.4499006 x + 0.9907284
R² = 0.9373
Calculate Sr and Sy/x

Sr = Σ (yi − a0 − a1 xi − a2 xi²)² = 0.09962

Sy/x = √( Sr / (n − (m + 1)) ) = √( 0.09962 / 4 ) = 0.157813
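A minimal sketch (not from the slides) of this computation for an m-th order polynomial fit; the function name is an assumption.

import math

def poly_standard_error(x, y, coeffs):
    # Sy/x = sqrt(Sr / (n - (m + 1))) for coeffs = [a0, a1, ..., am]
    n, m = len(x), len(coeffs) - 1
    Sr = sum((yi - sum(c * xi ** k for k, c in enumerate(coeffs))) ** 2
             for xi, yi in zip(x, y))
    return math.sqrt(Sr / (n - (m + 1)))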
Revision
Cramer’s Rule
“Solving a system of linear equations”

Write the system in matrix notation as G·a = b, where G is the coefficient matrix.
Cramer’s Rule gives each unknown as the ratio of two determinants: the determinant
of G with the corresponding column replaced by the right-hand-side vector b,
divided by det(G).
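A minimal sketch (not from the slides) of Cramer’s Rule for a small system such as the 3×3 normal equations used below; the function name is an assumption.

import numpy as np

def cramer_solve(G, b):
    # each unknown a[i] = det(G with column i replaced by b) / det(G)
    G = np.asarray(G, dtype=float)
    b = np.asarray(b, dtype=float)
    detG = np.linalg.det(G)
    a = np.empty(len(b))
    for i in range(len(b)):
        Gi = G.copy()
        Gi[:, i] = b
        a[i] = np.linalg.det(Gi) / detG
    return a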
Example
A missile is tracked while moving upward in a straight line with constant
acceleration. Different heights of the missile were recorded at the 10th,
20th, 30th and 40th second after elevation, as in the following table:

Time (s) Height (m)


10 170
20 640
30 1410
40 2480
Example
The height of the moving missile is given by the following equation:

H(t) = H0 + V0·t + (a/2)·t²

where H(t) is the height after t seconds from elevation, H0 is the height of the
elevation stage, V0 is the initial velocity and a is the net acceleration.
a) From the data in the table above, find H0, V0 and a by
using polynomial regression.
Solution:

Time (s)   Height (m)
10         170
20         640
30         1410
40         2480

[ 4      100      3000    ] [ H0    ]   [ 4700    ]
[ 100    3000     100000  ] [ V0    ] = [ 156000  ]
[ 3000   100000   3540000 ] [ 0.5·a ]   [ 5510000 ]

H0 = −1.45519E−11 ≈ 0
V0 = 2
0.5·a = 1.5

H(t) = −1.45519E−11 + 2t + 1.5t²
Example
b) If the stage height was 8 m, find the values of V0 and a.
• Solution: substitute H0 = 8 into the second and third normal equations:
100·8 + 3000·V0 + 100000·(0.5a) = 156000
3000·8 + 100000·V0 + 3540000·(0.5a) = 5510000

3000·V0 + 100000·(0.5a) = 155200
100000·V0 + 3540000·(0.5a) = 5486000

V0 = 1.3032258
0.5a = 1.5129032, so a ≈ 3.0258
Multiple Linear Regression
• The function y is a linear function of 2 or more independent variables,
such as

y = a0 + a1 x1 + a2 x2 + e

The regression “line” becomes a plane.
Multiple Linear Regression
• The sum of the squares of the residuals is
  Sr = Σ (yi − a0 − a1 x1,i − a2 x2,i)²
• To minimize Sr, differentiate with respect to each coefficient and set the
partials to zero.
• The normal equations are

  [ n       Σ x1       Σ x2    ] [a0]   [ Σ y     ]
  [ Σ x1    Σ x1²      Σ x1 x2 ] [a1] = [ Σ x1 y  ]
  [ Σ x2    Σ x1 x2    Σ x2²   ] [a2]   [ Σ x2 y  ]

• A system of 3 linear equations and 3 unknowns (see the sketch below).
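A minimal sketch (not from the slides) that solves these normal equations with two predictors; the function name is an assumption.

import numpy as np

def fit_multiple_linear(x1, x2, y):
    # least-squares plane y = a0 + a1*x1 + a2*x2 via the normal equations
    X = np.column_stack([np.ones(len(y)), x1, x2])   # columns: 1, x1, x2
    y = np.asarray(y, dtype=float)
    return np.linalg.solve(X.T @ X, X.T @ y)         # [a0, a1, a2]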


Example: Use multiple linear regression
to fit the following data
X1 X2 Y
0 0 5
2 1 10
2.5 2 9
1 3 0
4 6 3
7 2 27

The summations required are summarized in the following table


Example
A mechanical engineering study indicates that fluid flow through a
pipe is related to pipe diameter and slope. Use multiple linear
regression to analyze the data in the following table . Then use the
resulting model to predict the flow for a pipe with a diameter of 2.5ft
and a slope of 0.025
Example
Diameter   Slope   Flow
1          0.001   1.4
2          0.001   8.3
3          0.001   24.2
1          0.01    4.7
2          0.01    28.9
3          0.01    84
1          0.05    11.1
2          0.05    69
3          0.05    200

The equation to be evaluated is Q = a0 D^a1 S^a2

Solution
• After linearization, the equation becomes
  log Q = log a0 + a1 log D + a2 log S
which has the linear form y = a0′ + a1 x1 + a2 x2, with y = log Q, x1 = log D
and x2 = log S.

The normal equations are

[ n          Σ log D            Σ log S          ] [ log a0 ]   [ Σ log Q          ]
[ Σ log D    Σ (log D)²         Σ (log D)(log S) ] [ a1     ] = [ Σ (log D)(log Q) ]
[ Σ log S    Σ (log D)(log S)   Σ (log S)²       ] [ a2     ]   [ Σ (log S)(log Q) ]
Solution

[ 9         2.334     −18.903 ] [ log a0 ]   [ 11.691  ]
[ 2.334     0.954     −4.903  ] [ a1     ] = [ 3.945   ]
[ −18.903   −4.903    44.079  ] [ a2     ]   [ −22.207 ]

After solving this system:

log a0 = 1.7475
a1 = 2.62
a2 = 0.54
So a0 = 10^1.7475
And when the diameter is 2.5 and the slope is 0.025:
Q = 10^1.7475 (2.5)^2.62 (0.025)^0.54 = 84.1
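A minimal sketch (not from the slides) of the same transform-then-fit procedure, reusing fit_multiple_linear() from the earlier sketch; the function name is an assumption.

import numpy as np

def fit_power_flow(D, S, Q):
    # fit Q = a0 * D**a1 * S**a2 by multiple linear regression on the logs
    log_a0, a1, a2 = fit_multiple_linear(np.log10(D), np.log10(S), np.log10(Q))
    return 10 ** log_a0, a1, a2

The flow for D = 2.5 and S = 0.025 is then predicted as a0 * 2.5**a1 * 0.025**a2.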
Example
• Given the following table of data:
X 0.1 0.2 0.4 0.6 0.9 1.3 1.5 1.7 1.8
Y 0.75 1.25 1.45 1.25 0.85 0.55 0.35 0.28 0.18

• Fit the data with a curve of the following model:

y = α x e^(βx)
Solution
After linearization the model will be
ln y = ln α + ln x + βx
We can treat ln x as x1 and x as x2, so
a0 = ln α
a1 = 1
a2 = β
The unknown coefficients now are β and ln α.
Solution

[ n         Σ ln x          Σ x          ] [ ln α ]   [ Σ ln y          ]
[ Σ ln x    Σ (ln x)²       Σ (ln x)(x)  ] [ 1    ] = [ Σ (ln x)(ln y)  ]
[ Σ x       Σ (ln x)(x)     Σ x²         ] [ a2   ]   [ Σ (x)(ln y)     ]

[ 9          −3.65826    8.5      ] [ ln α ]   [ −4.26777 ]
[ −3.65826   9.864117    1.589373 ] [ 1    ] = [ −2.39997 ]
[ 8.5        1.589373    11.45    ] [ a2   ]   [ −7.4505  ]

Because the middle coefficient is fixed at 1, only the first and third equations are needed:

9 ln α − 3.65826 + 8.5 a2 = −4.26777
8.5 ln α + 1.589373 + 11.45 a2 = −7.4505

9 ln α + 8.5 a2 = −4.26777 + 3.65826
8.5 ln α + 11.45 a2 = −7.4505 − 1.589373

ln α = 2.2682
a2 = β = −2.4733
Multiple Linear Regression

Gradient Descent (Python)

import numpy as np

def GD(W0, X, goal, learningRate):
    # eval_gradient(X, W) is assumed to return the gradient of the loss at W
    perfGoalNotMet = True
    W = np.asarray(W0, dtype=float)
    while perfGoalNotMet:
        gradient = eval_gradient(X, W)
        W_old = W
        W = W - learningRate * gradient
        # stop once the total parameter change falls below the goal
        perfGoalNotMet = np.sum(np.abs(W - W_old)) > goal
    return W
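As a usage sketch (not from the slides), GD could fit the straight-line model from the earlier worked example, given a hypothetical eval_gradient for the sum of squared errors:

import numpy as np

def eval_gradient(X, W):
    # gradient of Sr = sum((y - a0 - a1*x)**2) for data packed as X = (x, y)
    x, y = X
    e = y - W[0] - W[1] * x
    return np.array([-2 * np.sum(e), -2 * np.sum(e * x)])

x = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
y = np.array([0.5, 2.5, 2, 4, 3.5, 6, 5.5])
W = GD(np.zeros(2), (x, y), goal=1e-6, learningRate=0.005)
# W approaches (a0, a1) ≈ (0.071, 0.839), matching the earlier worked example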
Stochastic (mini-batch) gradient descent evaluates the gradient on a random subsample of the data at each iteration:

...
while perfGoalNotMet:
    # select_random_subsample is assumed to draw a random mini-batch from X
    X_batch = select_random_subsample(X)
    gradient = eval_gradient(X_batch, W)
...

