Appendix
A part of the QCP of student 1 in the Statistical Simulation course (bold
items are my responses to the student):
Statistical uses of simulation:
- To track the distribution of a statistic, e.g., the sample mean is
distributed as normal with mean mu and variance sigma^2/n. This may not
be the best example. Usually you know this by theory (e.g., by the CLT);
in these cases, you don't need simulation.
- Parameter estimation and comparison: in order to decide which
estimator is better, the mean square error (or the variance, if the
estimator is unbiased) is calculated, as sketched below.
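A minimal sketch of such a comparison by simulation (my own illustration, not from the student's paper), assuming normal data and taking the sample mean and the sample median as the two competing estimators of mu:

    # Compare two estimators of mu by simulated mean square error.
    # The setup (normal data, n, number of replications) is illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma, n, reps = 5.0, 2.0, 30, 10_000

    samples = rng.normal(mu, sigma, size=(reps, n))
    mean_est = samples.mean(axis=1)          # estimator 1: sample mean
    median_est = np.median(samples, axis=1)  # estimator 2: sample median

    mse_mean = np.mean((mean_est - mu) ** 2)
    mse_median = np.mean((median_est - mu) ** 2)
    print(mse_mean)    # near sigma^2/n = 0.133 (unbiased, so MSE = variance)
    print(mse_median)  # larger, roughly (pi/2) * sigma^2/n for normal data

The estimator with the smaller simulated MSE is preferred; for normal data that is the sample mean.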
Note: In the previous comment paper, on the graph that I drew to
investigate randomness, you asked what I was supposed to see: there should
be no pattern if the data are random. You suggested drawing lines. In
the homework, I see that lines are useful for seeing the randomness. Good!
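A small sketch of the kind of plot in question (the actual homework plot is not reproduced here; this is an assumed reconstruction), using uniform random numbers and matplotlib:

    # Plot the sequence u_1, u_2, ... with connecting lines; for random
    # data there should be no visible pattern, trend, or cycle.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    u = rng.uniform(size=200)

    plt.plot(u, marker=".", linestyle="-")  # lines make runs and trends visible
    plt.xlabel("index i")
    plt.ylabel("u_i")
    plt.show()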
A part of the QCP of student 2 in the Statistical Simulation course:
In the last lecture we started importance sampling. With this method
we overcome the problems that arise while estimating theta = E[g(X)]:
a large variance of g(X) and/or difficulty in simulating the random
vector X. Here h(x), the importance function, is employed. It should be
chosen so that simulating a random variable from h(x) is easy and so
that it gives a smaller variance.
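A minimal importance-sampling sketch for theta = E[g(X)] (my own illustration; the target, the importance function, and all names are assumptions, not from the lecture), estimating the rare-event probability P(X > 3) for X ~ N(0,1) with a shifted normal as h:

    # Naive Monte Carlo vs. importance sampling for theta = P(X > 3),
    # i.e., g(x) = 1{x > 3}; g(X) has large relative variance under f.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100_000

    # Naive: sample directly from f = N(0,1).
    x = rng.normal(size=n)
    naive = np.mean(x > 3)

    # Importance sampling: sample from h = N(3,1), which puts mass where
    # g is nonzero and is easy to simulate; reweight by f(y)/h(y).
    y = rng.normal(loc=3.0, size=n)
    w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=3.0)
    is_est = np.mean((y > 3) * w)

    print(naive, is_est, 1 - stats.norm.cdf(3))  # true value is about 0.00135

Because h concentrates samples where g(x) is nonzero, the weighted estimator has a much smaller variance than the naive one for the same n.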
In the jackknife method, are there any conditions on the sample
observations like the ones in the bootstrap, such as independence or
correlation between the x variables? You have stated that the bootstrap
is a special case of the jackknife method, so are the same conditions on
the x observations also valid for the jackknife method? This should be
correct.
My intuition is that the jackknife will fail in situations similar to
those where the bootstrap fails. The only written material I have noticed
about when the jackknife might fail while the bootstrap works better
concerns statistics that are not smooth, i.e., when small changes in the
data produce large changes in the statistic (Martinez, W.L. and Martinez,
A.R. (2002). Computational Statistics Handbook with MATLAB, pp. 245-247).
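A small sketch of that failure mode (my own illustration under assumed settings; the sample median stands in for a non-smooth statistic), comparing the jackknife and bootstrap standard-error estimates:

    # The sample median is not smooth: leave-one-out medians take only a
    # few distinct values, so the jackknife SE estimate is inconsistent,
    # while the bootstrap SE is close to the large-sample value.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 101
    x = rng.normal(size=n)

    # Jackknife: recompute the median leaving one observation out.
    jack = np.array([np.median(np.delete(x, i)) for i in range(n)])
    se_jack = np.sqrt((n - 1) / n * np.sum((jack - jack.mean()) ** 2))

    # Bootstrap: recompute the median on resamples drawn with replacement.
    boot = np.array([np.median(rng.choice(x, size=n, replace=True))
                     for _ in range(2000)])
    se_boot = boot.std(ddof=1)

    # Large-sample standard error of the median for N(0,1) data.
    print(se_jack, se_boot, np.sqrt(np.pi / (2 * n)))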