
EE 503 Lectures (Fall 2020/21)

Lec. #11

00:00 - "similarity" of two r.v.'s
00:45 - Inner product as an indicator of "similarity" for N-dim vectors (reminder)
04:14 - Correlation coefficient (definition for zero mean r.v.'s)
08:28 - Properties of correlation coefficient
09:31 - |r_{xy}| \leq 1 (proof)
13:35 - |r_{xy}| = 1 implies Y = aX
16:41 - Y = aX implies |r_{xy}| = 1
20:03 - r_{xy} as angle between two r.v.'s
22:01 - Orthogonal r.v.'s (definition)
23:05 - Example: r_{xy} calculation for Y = aX + N
29:52 - SNR definition (example continues)
32:55 - Interpretation of results (example continues)
37:40 - Correlation (definition)
38:22 - Correlation coefficient (definition)
42:25 - Cov(X,Y) (definition)
45:20 - Uncorrelated r.v.'s
47:35 - Example: Cov(X,Y) for r.v.'s X and Y which are indicator functions of events A and B
55:03 - Interpretation of results (example continues)
57:51 - Properties of Cov(X,Y)
1:00:58 - Example: Var ( \sum_{i=1}^N x_i )
1:06:20 - Independence implies uncorrelatedness (proof)
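
For reference, a worked version of the Y = aX + N example (23:05-32:55), assuming X and N are zero mean and uncorrelated with variances \sigma_x^2 and \sigma_n^2, and a > 0:

r_{xy} = E{XY} / (\sigma_x \sigma_y) = a \sigma_x^2 / ( \sigma_x \sqrt{a^2 \sigma_x^2 + \sigma_n^2} ) = 1 / \sqrt{1 + 1/SNR},   SNR = a^2 \sigma_x^2 / \sigma_n^2.

Hence r_{xy} -> 1 as SNR -> \infty (Y is essentially a scaled copy of X) and r_{xy} -> 0 as SNR -> 0 (Y is dominated by the noise).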

Lec. #12a

A Matlab illustration of the correlation coefficient

.pdf document
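
The .pdf itself is not reproduced here; the following is a minimal Matlab sketch of this kind of illustration (not the instructor's file), with assumed values a = 2 and K = 1e5 samples. It estimates r_{xy} for Y = aX + N at a few SNR values and compares the estimate with the closed-form expression from Lec. #11.

    a = 2;                                 % assumed gain
    K = 1e5;                               % assumed number of Monte Carlo samples
    X = randn(K,1);                        % zero-mean, unit-variance X
    for snr_dB = [0 10 20]
        sigma_n = a*10^(-snr_dB/20);       % noise std so that SNR = a^2/sigma_n^2
        Y = a*X + sigma_n*randn(K,1);
        r = mean(X.*Y)/sqrt(mean(X.^2)*mean(Y.^2));   % r_xy estimate (zero-mean r.v.'s)
        fprintf('SNR = %2d dB : r_xy = %.3f (theory %.3f)\n', ...
                snr_dB, r, 1/sqrt(1 + 10^(-snr_dB/10)));
    end

As the SNR grows, the estimate approaches 1, in line with the r_{xy} expression above.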

Lec. #12b

00:00 - Random vectors
03:04 - Correlation matrix (definition)
03:31 - Covariance matrix (definition)
11:10 - Properties of covariance matrix
12:14 - Hermitian symmetry (properties continue)
15:18 - Positive semi-definiteness (properties continue)
20:47 - Gaussian Distribution
21:16 - 1D Gaussian r.v.
28:10 - N-dimensional Gaussian vectors
33:08 - 2-dimensional Gaussian vectors
40:15 - Level curves (2D Gaussian vectors)
45:15 - Level curves (2D Gaussian vector, Cx : diagonal)
49:10 - Level curves (2D Gaussian vector, Cx \propto I )
50:30 - Facts on Gaussian vectors
50:45 - Marginalization (facts continue)
52:48 - Example on marginalization of Gaussian vectors
55:30 - Linear processing of Gaussian vectors (facts continue)
56:15 - Example: Ry matrix in terms of Rx for y = Mx
1:02:13 - Example: Var ( \sum_{i=1}^N x_i ) (redo earlier example with vector operations)
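
For the y = Mx example (56:15), the key step is

R_y = E{ y y^H } = E{ M x x^H M^H } = M E{ x x^H } M^H = M R_x M^H,

and similarly C_y = M C_x M^H; both inherit Hermitian symmetry and positive semi-definiteness from R_x and C_x.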


Corrections:
36:20 - 2D Gaussian case: pdf (2nd line on the left side) should have Cx^{-1} not Cx (Cagatay C.)

Lec. #13

0:00 - Linear processing of random vectors (summary)
3:40 - Decorrelation of random vectors
5:45 - Case 1: Diagonalization by eigendecomposition
19:15 - Case 2: Diagonalization by unitary transformation and scaling
23:35 - Case 3: Diagonalization by LU decomposition
29:29 - Unit lower triangular matrix (definition)
36:27 - Causal decorrelation operation by LU decomposition
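
A minimal Matlab sketch of two of the options above, using an assumed 3x3 covariance matrix chosen only for illustration (Case 1: eigendecomposition; Case 3: the unit lower triangular factorization behind the causal decorrelation at 36:27):

    Cx = [4 2 1; 2 3 1; 1 1 2];      % assumed positive-definite covariance

    % Case 1: Cx = V*Lambda*V'; y = V'*x gives Cy = Lambda (diagonal)
    [V, Lambda] = eig(Cx);
    disp(V'*Cx*V)                    % diagonal up to round-off

    % Case 3: Cx = L*D*L' with L unit lower triangular; y = L\x gives Cy = D
    G = chol(Cx, 'lower');           % Cholesky factor, Cx = G*G'
    L = G/diag(diag(G));             % rescale columns so that diag(L) = 1
    D = diag(diag(G).^2);
    disp(norm(Cx - L*D*L'))          % ~0: the factorization is exact

Since L (and hence L^{-1}) is unit lower triangular, y = L\x computes each decorrelated sample from the current and past samples only, which is the causality property emphasized at 36:27.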

Lec. #14

0:00 - Diagonalization of Covariance Matrices (review)
3:50 - Diagonalization by eigendecomposition (review)
06:01 - Diagonalization by unitary transformation followed by scaling (review)
08:35 - Diagonalization by LU decomposition (review)
11:40 - Joint Diagonalization of two covariance matrices
15:13 - Joint Diagonalization of two covariance matrices (Step 1)
18:14 - Joint Diagonalization of two covariance matrices (Step 2)
26:35 - Joint Diagonalization of two covariance matrices (with a single step)
40:50 - Random processes
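
A sketch of the two-step procedure (15:13 and 18:14), assuming C_1 is positive definite. Step 1: whiten C_1; with C_1 = E_1 \Lambda_1 E_1^H, let W = \Lambda_1^{-1/2} E_1^H so that W C_1 W^H = I. Step 2: diagonalize the transformed second matrix; W C_2 W^H = E_2 \Lambda_2 E_2^H with E_2 unitary. The combined transform T = E_2^H W then satisfies T C_1 T^H = I and T C_2 T^H = \Lambda_2, i.e. a single matrix diagonalizes both covariance matrices.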

Lec. #15

0:00 - Descriptions for Random Processes
0:35 - Joint pdf description
6:42 - Example: joint pdf description of x(t) = A cos(2\pi f t + \theta), \theta: r.v. ~ Unif. [0,2\pi)
12:10 - 1st order pdf description (example)
13:22 - One function of one r.v. discussion
26:47 - 1st order pdf description (example, continued)
37:12 - 1st order pdf description (comments)
41:51 - 2nd order pdf description (example)
58:51 - 3rd order pdf description (example)
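
For reference, the 1st order pdf in this example (x(t) = A cos(2\pi f t + \theta), \theta ~ Unif[0,2\pi), A a constant) is the arcsine density

f_{x(t)}(x) = 1 / ( \pi \sqrt{A^2 - x^2} ),   |x| < A,

which does not depend on t: every time sample has the same marginal distribution.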

Lec. #16

0:00 - Random Process Descriptions (cont'd)
0:58 - Gaussian Processes
2:20 - Joint pdf of samples (Gaussian processes)
6:00 - Example: Marginal pdf's are Gaussian, while joint pdf is not
14:50 - Example: x(t) = a t + b (a, b: r.v.'s)
19:13 - 1st order pdf description (example)
27:05 - 2nd order pdf description (example)
37:40 - 3rd order pdf description (example)
42:27 - Discussion of degenerate cov. mat. (example)
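
A compact summary of the moment side of the x(t) = a t + b example, assuming only that a and b have finite second moments:

mu_x(t) = E{a} t + E{b},
R_x(t1,t2) = E{ x(t1) x(t2) } = E{a^2} t1 t2 + E{ab} (t1 + t2) + E{b^2}.

Since every sample x(t_i) is a linear combination of the same two r.v.'s a and b, a covariance matrix formed from three or more samples has rank at most 2; this is the degeneracy discussed at 42:27.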

Lec. #17

0:15 - Moments Description
2:35 - 1st Order Moment Description: mean function
3:19 - 2nd Order Moment Description: Autocorrelation/Covariance functions
5:08 - Comments: on the Moments Descriptions
9:18 - Example: x(t) a r.p. with mu_x(t)=3 and R_x(t1,t2)=9+4*exp(-0.2|t1-t2|), z=x(5), w=x(8), Find E{z}, E{w}, E{z^2}, E{w^2} and E{zw}
14:50 - Example: z = x(t1) + x(t2), Find E{z^2}
17:12 - Example: s = \int_a^b x(t) dt, a) Find E{s}, b) Find E{s^2}
21:33 - Example: x(t)=Acos(wt+Q), A,Q are independent r.v.'s, Q~uniform[0,2*pi), Find mu_x(t) and R_x(t1,t2)
34:14 - Notes: about complex valued processes
37:13 - Example: x(t) = A*exp(j(wt+Q)), A,Q are independent r.v.'s, Q~uniform[0,2*pi), Find mu_x(t) and R_x(t1,t2)
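
Worked numbers for the example at 9:18 (mu_x(t) = 3, R_x(t1,t2) = 9 + 4*exp(-0.2|t1-t2|), z = x(5), w = x(8)):

E{z} = E{w} = mu_x(5) = mu_x(8) = 3,
E{z^2} = E{w^2} = R_x(5,5) = 9 + 4 = 13,
E{zw} = R_x(5,8) = 9 + 4 e^{-0.6} ≈ 11.20,

so Var(z) = Var(w) = 13 - 9 = 4 and Cov(z,w) = E{zw} - E{z}E{w} = 4 e^{-0.6} ≈ 2.20.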

Lec. #18

0:00 - White noise process
8:10 - Example: x_1[k] = w and x_2[k] = w_k
14:17 - Joint pdf description (example)
21:15 - Moment description (example)
30:05 - Example: w_1[n] \in {1,-1} iid; w_2[n] ~ N(0,1), iid
33:50 - Joint pdf description (example)
38:30 - Moment description (example)
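
A minimal Matlab sketch of the example at 30:05, with an assumed sample size: both sequences are white with the same moment description (zero mean, unit variance, delta autocorrelation) even though their pdf descriptions differ.

    N  = 1e5;                            % assumed number of samples
    w1 = 2*(rand(N,1) > 0.5) - 1;        % iid, equiprobable on {+1,-1}
    w2 = randn(N,1);                     % iid N(0,1)
    fprintf('means : %6.3f  %6.3f\n', mean(w1), mean(w2));
    fprintf('vars  : %6.3f  %6.3f\n', var(w1),  var(w2));
    % sample autocorrelation at lag 1 (close to 0 for both, as expected for white noise)
    fprintf('r[1]  : %6.3f  %6.3f\n', mean(w1(1:end-1).*w1(2:end)), ...
                                      mean(w2(1:end-1).*w2(2:end)));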

Lec. #19

0:00 - Linear systems with stochastic inputs
0:33 - Linear systems - continuous time (review)
2:16 - LTI systems - continuous time (review)
5:06 - Linear systems - discrete time (review)
7:40 - LTI systems - discrete time (review)
8:58 - LTI systems - discrete time, convolution matrix (review)
11:11 - Moment descriptions for linear systems with stochastic inputs
11:30 - Linear processing output mean function calculation
14:02 - Basic Assumption: Commutation of Expectation operation and Linear operations
16:29 - Linear processing output auto-correlation function calculation
17:01 - Linear processing output auto-correlation function calculation (step 1: cross correlation)
25:10 - Linear processing output auto-correlation function calculation (step 2: auto-correlation)
32:04 - Example: y(t) = L{x(t)} = d/dt x(t), Find output mean and autocorrelation functions
41:00 - Connections with earlier finite dimensional results
47:12 - Brief discussion on LTI and WSS input case
53:15 - Stationary Random Processes
58:03 - Stationarity in joint pdf description
1:02:07 - Stationarity in moment description
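
The two-step calculation (17:01 and 25:10) can be summarized, for a continuous-time LTI system y(t) = h(t) * x(t), as

mu_y(t) = \int h(\tau) mu_x(t - \tau) d\tau,
R_{yx}(t1,t2) = E{ y(t1) x^*(t2) } = \int h(\alpha) R_x(t1 - \alpha, t2) d\alpha,      (step 1: cross correlation)
R_y(t1,t2) = E{ y(t1) y^*(t2) } = \int h^*(\beta) R_{yx}(t1, t2 - \beta) d\beta.       (step 2: auto-correlation)

For the differentiator example at 32:04 these reduce to mu_y(t) = d mu_x(t)/dt and, for a real process, R_y(t1,t2) = \partial^2 R_x(t1,t2) / \partial t1 \partial t2.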

Lec. #20

0:00 - Stationarity in pdf/moment descriptions (review)
10:40 - SSS implies WSS
11:48 - For Gaussian processes, WSS is equivalent to SSS
13:41 - Example: x(t) = a cos(wt) + b sin(wt), find conditions on the r.v.'s a, b for x(t) to be WSS
14:30 - Stationarity in the mean (example, cont'd)
19:51 - Stationarity in the autocorrelation (example, cont'd)
32:38 - Example: x[n] r.p. with independent samples, x[2n] ~ Unif(-\sqrt{3},\sqrt{3}) and x[2n+1] ~ N(0,1). Is x[n] WSS/SSS?
42:13 - Jointly WSS random processes
45:10 - LTI processing of WSS processes
47:05 - Mean function calculation (LTI processing of WSS processes)
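
For the 13:41 example, the standard conditions are as follows: with x(t) = a cos(wt) + b sin(wt) and w \neq 0,

mu_x(t) = E{a} cos(wt) + E{b} sin(wt) is constant in t iff E{a} = E{b} = 0;
R_x(t1,t2) = E{a^2} cos(wt1)cos(wt2) + E{b^2} sin(wt1)sin(wt2) + E{ab} sin(w(t1+t2))
depends only on t1 - t2 iff E{a^2} = E{b^2} and E{ab} = 0, in which case R_x(\tau) = E{a^2} cos(w\tau).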

Corrections:
39:38 - left board, very bottom line should be \delta[k] instead of \delta[n-k] (Thanks to Gulin T.)

EE 503 Statistical Signal Processing and Modeling
(Fall 2019–2020)

Short Description:

This course is the first course on statistical signal processing in the graduate curriculum of the Department of Electrical and Electronics Engineering, Middle East Technical University (METU). Topics covered in this course are random vectors, random processes, stationary and wide sense stationary random processes, and their processing with LTI systems, with applications in optimal filtering, smoothing, and prediction. A major goal is to introduce the concept of mean square error (MSE) optimal processing of random signals by LTI systems.

For the processing of random signals, it is assumed that some statistical information about the signal of interest and the distortion is known. By utilizing this information, MSE optimal LTI filters (Wiener filters) are designed; this forms the processing part of the course. The estimation of the statistical information needed to construct Wiener filters forms the modeling part of the course. In the modeling part, we examine AR, MA, and ARMA models for random signals and give a brief discussion of the Pade and Prony methods for deterministic modeling. Other topics of importance include decorrelating transforms (whitening), spectral factorization, and the Karhunen-Loeve transform.

This course is a natural pre-requisite (though not a formal one) to EE 5506 Advanced Statistical Signal Processing. The estimation theory topics in EE 503 are mostly limited to the moment description of random processes, which forms a special, but the most important, case of the material covered in EE 5506.

Outline of Topics:

  1. Review
    1. Basics of Mathematical Deduction
      1. Necessary, Sufficient Conditions
      2. Proofs via contradiction, contraposition
    2. Basics of Linear Algebra
      1. Linear independence of vectors (points in linear space)
      2. Range and Null space of the combination process
      3. Projection to Range/Null Space (orthogonality principle)
      4. Positive Definite Matrices
    3. Basics of Probability
      1. Probability as a mapping, axioms, conditional probability
      2. Expectation, law of large numbers
      3. Moments, moment generating function

  2. Random Processes
    1. Random variables, random vectors (or a sequence of random variables), moment descriptions (mean, variance, correlation), decorrelating transforms
    2. Random processes, stationarity, Wide Sense Stationarity (WSS), power spectral density, spectral factorization, linear time-invariant processing of WSS random processes, ergodicity

    Ref: Therrien, Hayes, Papoulis, Ross
     
  3. Signal Modeling
    1. LS methods, Pade, Prony (Deterministic methods)
    2. AR, MA, ARMA Processes (Stochastic approach), Yule-Walker Equations, Non-linear set of equations for MA system fit
    3. Harmonic Processes

    Ref: Hayes, Papoulis
     
  4. Estimation Theory Topics
    1. Random parameter estimation
      1. Cost function, loss function, square error, absolute error
      2. Conditional mean (regression line) as the minimum mean square error (MSE) estimator, orthogonality properties
      3. Linear minimum mean square error (LMMSE) estimators, orthogonality principle
      4. Regression line, orthogonality
      5. FIR, IIR, Causal–IIR Wiener filters
      6. Linear Prediction, backward prediction
      7. Random vector LMMSE estimation (multiple parameter)
    2. Non-random parameter estimation
      1. Maximum likelihood method
      2. Best Linear Unbiased Estimator (BLUE)
      3. Discussion of linear estimators for the linear observation model y=Ax+n
    3. Karhunen-Loeve Transform

    Ref: Therrien, Hayes
     
References:

[Hayes]: M. H. Hayes, Statistical Digital Signal Processing and Modeling, Wiley, New York, NY, 1996.

[Therrien]: C. W. Therrien, Discrete Random Signals and Statistical Signal Processing, Prentice Hall, 1992.

[Papoulis]: A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd edition, McGraw Hill, 1991.

[Ross]: S. M. Ross, Introduction to Probability Models, 7th edition, Harcourt Academic Press, 2000.