
Lectures run 9:30am–11am (approximately), with a short break from about 10:10am to 10:20am.

SMSTC Inverse Problems
Lecture 4
9:30am – 11am, 4 February 2021
Assessments
• Homework solutions are available for weeks 1, 2, and 3.
• MATLAB projects are also available.


Plan for Week 4:

• Natural solution to the mixed-determined problem
• Recall the separation we aimed for
• SVD: result
• SVD: usefulness
• SVD: derivation
• SVD implementation: numerical and by-hand examples
• Revision: probability and statistics: random variables (R.V.s) and their realisations

Exercises for self-study this week will be highlighted during the lecture; the solutions will appear on the course webpage later.
Natural solution to the mixed-determined problem
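A sketch of the standard statement, assuming the usual linear problem G m = d with N data and M model parameters: the natural solution is the model that minimises the prediction error E = ||d − G m||^2 and, among all such minimisers, also minimises the solution length L = ||m||^2, by assigning zero to every combination of model parameters that the data cannot constrain.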
SVD: result
SVD: result (cont.)
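A hedged write-out of the standard result, for an N × M matrix G:

$$ G = U \Lambda V^T = U_p \Lambda_p V_p^T $$

where U (N × N) and V (M × M) are orthogonal, Λ is a diagonal matrix of non-negative singular values λ_1 ≥ λ_2 ≥ …, and the subscript p keeps only the first p columns of U and V, those paired with the non-zero singular values.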
How do we know that’s what we wanted?
Natural generalized inverse
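In the same notation, the natural generalized inverse and the natural solution are

$$ G^{-g} = V_p \Lambda_p^{-1} U_p^T, \qquad \mathbf{m}^{est} = G^{-g} \mathbf{d} $$

so m^est has no component in the model null space (spanned by the remaining columns V_0), and its prediction error has no component in the part of data space spanned by U_p — exactly the separation recalled above.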
SVD: one way to derive it
SVD: one way to derive it (cont.)
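One standard derivation, sketched: the symmetric matrices G^T G (M × M) and G G^T (N × N) have the eigenvalue problems

$$ G^T G \, \mathbf{v}_i = \lambda_i^2 \mathbf{v}_i, \qquad G G^T \mathbf{u}_i = \lambda_i^2 \mathbf{u}_i $$

which share the same non-zero eigenvalues λ_i². Collecting the orthonormal eigenvectors v_i and u_i as the columns of V and U, and the λ_i on the diagonal of Λ, gives G = U Λ V^T.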
Example: solution (cont.)
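The plan also promises a numerical SVD implementation. Here is a minimal MATLAB/Octave sketch in the notation above; the matrix G and data d are made-up illustrative values, not the lecture's example.

```matlab
% Sketch: natural generalized inverse via the SVD (illustrative G, d)
G = [1 0 0; 0 2 0; 1 2 0];   % rank 2: third model parameter unconstrained
d = [1; 4; 5];               % consistent data (row 3 = row 1 + row 2)

[U, S, V] = svd(G);          % G = U*S*V'
lam = diag(S);
p = sum(lam > max(size(G)) * eps(max(lam)));   % count non-zero singular values

Up = U(:, 1:p);  Vp = V(:, 1:p);  Lp = S(1:p, 1:p);

Gg   = Vp / Lp * Up';        % natural generalized inverse  Vp * inv(Lp) * Up'
mest = Gg * d;               % natural solution: no null-space component

% Sanity check: should agree with pinv, which also truncates the SVD
% norm(mest - pinv(G)*d)   % ~ 0
```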
Revision of probability, alongside the LS and ML results
Random Variables: realisations

Random variables have systematics: a tendency to take on some values more often than others.
P.D.F. The probability density function (p.d.f.) in (B) completely describes the behavior of the random variable. It is the idealization of a histogram of the realizations, in the limit of an indefinitely large number of realizations.

The p.d.f. describes the probability that a realization will be close to a given value, d.
[Figure, panels (A)–(C): histograms of realisations and the corresponding p.d.f. p(d), plotted against d from 0 to 10.]
The area under p(d) gives the probability that d is between its minimum and maximum bounds, d_min and d_max.
C.D.F. vs. P.D.F.
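A hedged statement of the standard relationship: the cumulative distribution function is the running integral of the p.d.f.,

$$ P(d) = \int_{-\infty}^{d} p(d')\, \mathrm{d}d', \qquad p(d) = \frac{\mathrm{d}P}{\mathrm{d}d}, $$

so P(d) gives the probability that a realisation falls at or below d.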
Other ways (not p.d.f., not c.d.f.) to describe the R.V.
Two properties of a p.d.f. are the typical value of a realization and the amount of scatter of realizations about that typical value:
• typical value: the “center of the p.d.f.” Examples: ML point, mean, mode, ….
• amount of scatter around the typical value: the “width of the p.d.f.”
Centrality measures

QUANTIFYING WIDTH
The width of a p.d.f. quantifies scatter: small width, small scatter, low observational noise; large width, large scatter, high observational noise.

QUANTIFYING WIDTH: variance, standard deviation
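A hedged write-out of the standard definitions: the mean and variance of a p.d.f. are the integrals

$$ \langle d \rangle = \int d\, p(d)\, \mathrm{d}d, \qquad \sigma_d^2 = \int (d - \langle d \rangle)^2 \, p(d)\, \mathrm{d}d, $$

and the standard deviation σ_d is the square root of the variance.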
Estimating mean and variance from data: the usual formula for the “sample mean”, and the usual formula for the square of the “sample standard deviation”.
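Written out (a hedged reconstruction of these usual formulas): for N realizations d_1, …, d_N,

$$ \bar{d} = \frac{1}{N} \sum_{i=1}^{N} d_i, \qquad s^2 = \frac{1}{N-1} \sum_{i=1}^{N} (d_i - \bar{d})^2. $$

(Some texts use 1/N rather than 1/(N−1) in the variance estimate.)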

Note that an “estimate” of the mean and variance made from N realizations of a R.V. is not the same as the true mean and true variance of the p.d.f., which are determined by performing the integrals. One would hope that the estimates become very close to the true values as the number of data is made indefinitely large.

In the literature, the word “sample” is often used instead of “estimated”. Note the lingo “standard deviation” for the square root of the variance; however, “standard deviation” is often used synonymously with “sample standard deviation”.
Matlab/Octave
Matlab: operations on distributions

PDFs:
• uniform
• Gaussian (or Normal)

http://www.math.wm.edu/~leemis/chart/UDR/UDR.html
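A minimal MATLAB/Octave sketch of evaluating these two p.d.f.s on a grid; the formulas are written out directly so no extra toolbox is needed, and the parameter values (a, b, mu, sigma) are illustrative assumptions:

```matlab
% Evaluate the uniform and Gaussian p.d.f.s on a grid of d values
d = linspace(0, 10, 501);

% Uniform p.d.f. on [a, b]: 1/(b-a) inside the interval, 0 outside
a = 2; b = 8;
p_uniform = (d >= a & d <= b) / (b - a);

% Gaussian (Normal) p.d.f. with mean mu and standard deviation sigma
mu = 5; sigma = 1;
p_gauss = exp(-(d - mu).^2 / (2*sigma^2)) / (sqrt(2*pi) * sigma);

plot(d, p_uniform, d, p_gauss);
xlabel('d'); ylabel('p(d)');
legend('uniform', 'Gaussian');
```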
Two main distributions
Realisations of a R.V.
Part 2: correlated errors

Correlation means that a relationship exists between the noise in two different data types. Suppose that many people measured the width and height of an object with their own personal rulers, but that these rulers were inaccurate; some had too large a scale, some too small. One would then tend to find that, in a pair of measurements of width and height performed by the same person, a measurement of width that was unusually large would tend to be accompanied by a measurement of height that was also unusually large. This is correlation. Correlation is usually an undesirable aspect of measurement.

We need to see whether, as one variable increases, the other increases, decreases, or stays the same. This can be done by calculating the covariance. We look at how much each score deviates from the mean. If both variables deviate from the mean by the same amount, they are likely to be related.
Joint distribution
[Figure: the joint p.d.f. p(d1, d2) over the (d1, d2) plane; integrating over d2 gives the marginal p(d1), and integrating over d1 gives the marginal p(d2).]
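In symbols (a standard write-out of the marginalisation rule shown in the figure):

$$ p(d_1) = \int p(d_1, d_2)\, \mathrm{d}d_2, \qquad p(d_2) = \int p(d_1, d_2)\, \mathrm{d}d_1. $$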
joint probability density function: uncorrelated case
[Figure: a joint p.d.f. p(d1, d2) on the square 0 ≤ d1, d2 ≤ 10, with the means ⟨d1⟩ and ⟨d2⟩ marked and p ranging up to about 0.2; there is no tendency for d2 to be either high or low when d1 is high.]
Correlation
[Figure, panels (A)–(C): joint p.d.f.s in the (d1, d2) plane, each with the means ⟨d1⟩ and ⟨d2⟩ marked, illustrating different senses of correlation between d1 and d2.]
Formula for the covariance, in the special case of a two-dimensional p.d.f.:
+ positive correlation: high d1 goes with high d2
− negative correlation: high d1 goes with low d2
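A hedged write-out of that formula:

$$ \operatorname{cov}(d_1, d_2) = \iint (d_1 - \langle d_1 \rangle)(d_2 - \langle d_2 \rangle)\, p(d_1, d_2)\, \mathrm{d}d_1\, \mathrm{d}d_2, $$

which is positive when d1 and d2 tend to be high together, and negative when one tends to be high while the other is low.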
Joint p.d.f.
• the mean is a vector
• the covariance is a symmetric matrix
  – diagonal elements: variances
  – off-diagonal elements: covariances

For a higher-dimensional p.d.f., a very powerful idea is to group the means into a vector and the variances/covariances into a matrix. This arrangement substantially simplifies calculations.
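In this arrangement, for a data vector d = (d_1, …, d_N)^T,

$$ [\langle \mathbf{d} \rangle]_i = \langle d_i \rangle, \qquad [C_d]_{ij} = \operatorname{cov}(d_i, d_j), $$

so the diagonal entries are the variances, [C_d]_{ii} = σ_{d_i}², and C_d = C_d^T.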
How are p(m) and p(d) related?
Data contain noise and are thus random variables. Model parameters are functions of the data; thus model parameters are random variables, too.
univariate p.d.f.: use the chain rule

rule for transforming a univariate p.d.f.
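A hedged write-out of the standard transformation rule: if m = m(d) is monotonic with inverse d(m), then

$$ p(m) = p\big(d(m)\big) \left| \frac{\mathrm{d}d}{\mathrm{d}m} \right|, $$

where the derivative factor comes from the chain rule and keeps p(m) normalised.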
Example 1D linear
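As an illustrative example (the values are assumed, not the lecture's): if m = 2d and d is uniform on [0, 1] with p(d) = 1, then d = m/2, |dd/dm| = 1/2, and p(m) = 1/2 on [0, 2] — the density spreads out and flattens so that it stays normalised.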
Example 1D non-linear
multivariate p.d.f.

Jacobian determinant

rule for transforming a multivariate p.d.f.
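A hedged write-out of the multivariate version: for an invertible map m = m(d),

$$ p(\mathbf{m}) = p\big(\mathbf{d}(\mathbf{m})\big)\, \left| \det \frac{\partial \mathbf{d}}{\partial \mathbf{m}} \right|, $$

where the Jacobian determinant plays the role of |dd/dm| in the univariate rule.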
Example: linear
Moral
p(m) can behave quite differently from p(d).
Exercises/Homework for week 4
Any questions about today's material?
