Notes in statistics and computing
Preliminary
1 Odds and Ends {# extra}
2 Introduction
3 Mathematical Statistics Tricks
3.1 Normal distribution as exponential family
3.2 Change-of-variables formula for densities:
3.3 Probability mass function:
3.4 Vector to diagonal matrix:
3.5 Gaussian Integral Trick
3.6 Asymptotic issues
3.7 Matrix inverse
4 General Notes
4.1 Taiwanese professor Peng Ming-hui's handbook for graduate students
4.1.1 Requirements for reading papers
4.2 Requirements and techniques for presenting papers
5 Statistician's Toolbox
5.1 Matrix algebra
5.1.1 Block diagonal matrices
5.2 Sum of two quadratic forms
5.3 Basic theorems on positive definite matrices:
5.4 Theorems on the spectral decomposition of symmetric matrices
5.5 Covariance Structure
5.5.1 Compound Symmetry Covariance Structure
5.5.2 Huynh-Feldt Structure
5.5.3 The One-Dependent Covariance Structure
5.5.4 AR(1) Structure (highly popular)
6 Longitudinal data analysis
6.1 Linear mixed model
6.1.1 Conditional Mean vs Marginal Mean
6.1.2 Restricted maximum likelihood estimation
6.1.3 Prediction
6.2 Generalised linear mixed models
6.2.1 Exponential distribution family
6.2.2 Iteratively Reweighted Least Squares algorithm (IWLS)
6.2.3 GLMMs
6.3 The Bayesian analysis approach for covariance modelling
7 Notes on statistical graphics
7.1 Legends:
7.2 Multi-plot parameters
8 Basic MCMC
8.1 Metropolis-Hastings Update
8.1.1 Metropolis Update
8.2 The Gibbs Update
8.2.1 Variable-at-a-Time Metropolis-Hastings
8.2.2 Gibbs as a Special Case of Metropolis-Hastings:
8.2.3 Gibbs Full Conditional Distribution
8.3 Combining Updates
8.3.1 Composition
8.3.2 Palindromic Composition
8.3.3 State-Independent Mixing
8.3.4 Subsampling
8.4 A Metropolis Example
9 Reversible Jump MCMC
9.1 Introduction
9.1.1 From Metropolis-Hastings to Reversible Jump
9.1.2 Application Areas
9.2 Implementation
9.2.1 Example: Dimension Matching
9.2.2 Example: Moment Matching in a Finite Mixture of Univariate Normals
9.3 Mapping Functions and Proposal Distributions
9.3.1 Marginalization and augmentation:
9.3.2 Centering and Order Methods
10 Optimal Proposal Distributions and Adaptive MCMC
10.1 Intro
10.1.1 MH algorithm
10.1.2 Optimal Scaling
10.1.3 Adaptive MCMC
10.1.4 Comparing Markov Chains
10.2 Optimal Scaling of Random-Walk Metropolis
10.2.1 Basic principle
10.2.2 Optimal Acceptance Rate as \(d\rightarrow \infty\)
10.2.3 Inhomogeneous Target Distributions
10.2.4 Metropolis-Adjusted Langevin Algorithm
10.2.5 Numerical Examples
10.2.6 Inhomogeneous Covariance
10.3 Adaptive MCMC
10.4 Ergodicity of Adaptive MCMC
10.4.1 Adaptive Metropolis
10.4.2 Adaptive Metropolis-within-Gibbs
10.4.3 State-Dependent Proposal Scalings
10.4.4 Limit Theorem
10.5 FAQ
10.6 Conclusion
10.7 A tutorial on adaptive MCMC
11 Hamiltonian Monte Carlo
11.0.1 Properties of Hamiltonian Dynamics
11.0.2 Conservation of the Hamiltonian
11.0.3 Volume preservation
11.0.4 Symplecticness
11.1 Discretizing Hamilton's Equations: The Leapfrog Method
11.1.1 Modification of Euler's Method
11.1.2 The Leapfrog Method
11.1.3 Local and Global Error of Discretization Methods
11.2 MCMC from Hamiltonian Dynamics
11.2.1 Probability and the Hamiltonian: Canonical Distributions
11.2.2 The Hamiltonian Monte Carlo Algorithm
11.2.3 Illustrations of HMC and Its Benefits
11.2.4 The Benefit of Avoiding Random Walks
11.2.5 Sampling from a 100-Dimensional Distribution
11.3 HMC in Practice and Theory
11.3.1 Effect of Linear Transformations
11.3.2 Tuning HMC
11.3.3 Combining HMC with Other MCMC Updates
11.3.4 Scaling with Dimensionality
11.3.5 HMC for Hierarchical Models
11.4 Extensions of and Variations on HMC
11.4.1 Discretization by Splitting: Handling Constraints and Other Applications
12 Bayes variable selection
12.1 Prior Specification
12.2 Summarizing the Posterior Distribution and Model-Averaged Inference
12.3 Numerical Methods
12.3.1 Empirical Bayes by Marginal Maximum Likelihood
12.4 Bayesian asymptotic analysis
12.5 Bayes factors
12.5.1 Surprising uses of the marginal density
12.5.2 A few things that may warrant further study
13 Advanced R
13.0.1 Vectors
13.0.2 Types and tests:
13.0.3 Coercion
13.1 Data.frame
13.1.1 Ordering (integer subsetting)
13.1.2 Calling a function given a list of arguments
14 Numeric Derivatives
15 Imputation
16 Simulation approaches for computing the marginal likelihood
16.1 Laplace-Metropolis approximation
16.2 Laplace-Metropolis approximation
16.3 Chib's estimator from Gibbs sampling
16.4 Example: Seemingly unrelated regression model with informative prior
16.5 Bridge sampling methods
16.6 The Savage-Dickey density ratio approach
17 R web scraping
18 Guide to Scientific Computing in C++
18.1 Basics
18.2 Basics in C++
18.3 Redirecting Console Output to a File
18.3.1 Reading from the Command Line
18.4 Pointers
18.5 Functions
18.5.1 Use of Pointers as Function Arguments
18.6 Classes
18.6.1 Header Files
18.7 Using Makefiles to Compile Multiple Files
18.8 Class inheritance
18.8.1 Run-Time Polymorphism in Derived Classes
18.9 Templates
18.9.1 Brief Survey of the Standard Template Library
18.10 Classes for linear algebra
19 Rcpp
19.1 Documentation mapping R operations to the corresponding Armadillo operations:
19.2 Rcpp package
20 Statistical Computing
20.1 Generating Multivariate Normal samples
21 R tricks
22 Statistical terms
23 Proofs and Calculations
23.0.1 Coordinate Descent Algorithm for Lasso
References
Chapter 2: Introduction
These are notes from my book reading, along with useful tricks collected along the way from books and code.