A Mini paper reading record
Prerequisites
reading review
0.1 Reviews 1 in April, 2019
Introduction
1 Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
1.1 Abstract
1.2 Introduction
1.3 Penalized Least Squares and Variable Selection
1.3.1 Smoothly Clipped Absolute Deviation Penalty
1.3.2 Performance of Thresholding Rules
1.4 Variable selection via penalized likelihood
1.4.1 Penalized Least Squares and Likelihood
1.4.2 Sampling properties and oracle properties
1.4.3 A New Unified Algorithm
2 Random effects selection in generalized linear mixed models via shrinkage penalty function
2.1 Abstract
2.2 Introduction
2.3 Methodology and Algorithm
2.3.1 Algorithms
3 Copula for discrete LDA
3.1 Joint Regression Analysis of Correlated Data Using Gaussian Copulas
4 Estimation of random-effects model for longitudinal data with nonignorable missingness using Gibbs sampling
4.1 Abstract
4.2 Introduction
4.3 Proposed model
4.3.1 Modelling time-varying coefficients
4.3.2 Bayesian estimation using the Gibbs sampler
5 Sparse inverse covariance estimation with the graphical lasso
5.1 Summary
5.2 Introduction
5.3 The Proposed Model
6 A Bayesian Lasso via reversible-jump MCMC
6.1 First impression: a clickbait title
6.2 Abstract
6.3 Introduction
6.3.1 Sparse linear models
6.3.2 Related work and our contribution
6.3.3 Notations
6.4 A fully Bayesian Lasso model
6.4.1 Prior specification
6.5 Bayesian computation
6.5.1 Design of model transition
6.5.2 A usual Metropolis-Hastings update for unchanged model dimension
6.5.3 A birth-and-death strategy for changed model dimension
6.6 Simulation
7 The Bayesian Lasso
7.1 Abstract
7.2 Hierarchical Model and Gibbs Sampler
7.3 Choosing the Bayesian Lasso parameter
7.3.1 Empirical Bayes by Marginal Maximum Likelihood
7.3.2 Hyperpriors for the Lasso Parameter
7.4 Extensions
7.4.1 Bridge Regression
7.4.2 The “Huberized Lasso”
8 Bayesian lasso regression
8.1 Summary
8.2 Introduction
8.3 The Lasso posterior distribution
8.3.1 Direct characterization
8.3.2 Posterior-based estimation and prediction
8.3.3 The univariate case
8.4 Posterior Inference via Gibbs sampling
8.4.1 The standard Gibbs sampler
8.4.2 The orthogonalized Gibbs sampler
8.4.3 Comparing samplers
8.5 Examples
8.5.1 Example 1: Prediction along the solution path
8.5.2 Example 2: Prediction when modelling \(\lambda\)
8.6 Discussion
9 Bayesian analysis of joint mean and covariance models for longitudinal data
9.1 Abstract
9.2 Introduction
9.3 Joint mean and covariance models
9.4 Bayesian analysis of JMCMs (jmcm)
9.4.1 Prior
9.4.2 Gibbs sampling and conditional distributions
10 Bayesian Joint Semiparametric Mean-Covariance Modelling for Longitudinal Data
10.1 Abstract
10.2 Introduction
10.3 Models and Bayesian Estimation Methods
10.3.1 Models
10.3.2 Smoothing Splines and Priors
10.4 MCMC Sampling
11 Bayesian Modeling of Joint Regressions for the Mean and Covariance Matrix
11.1 Abstract
11.2 Introduction
11.3 The model
11.4 Bayesian Methodology
11.5 Centering and Order Methods
12 Maximum Likelihood Estimation in Transformed Linear Regression with Nonnormal Errors
12.1 Abstract
12.2 Introduction
12.3 Parameter estimation and inference procedure
12.3.1 MLE
13 Homogeneity tests of covariance matrices with high-dimensional longitudinal data
13.1 Introduction & Background
13.2 Basic setting
13.3 Homogeneity tests of covariance matrices
14 Dirichlet-Laplace Priors for Optimal Shrinkage
14.1 Abstract
14.2 Introduction
14.3 A New Class of Shrinkage Priors
14.3.1 Bayesian Sparsity Priors in the Normal Means Problem
14.4 Global-Local Shrinkage Rules
14.5 Dirichlet-Kernel Priors
14.6 Posterior Computation
14.7 Concentration Properties of Dirichlet-Laplace Priors
15 Parsimonious Covariance Matrix Estimation for Longitudinal Data
15.1 Abstract
15.2 Introduction
15.3 The model and prior
15.3.1 Likelihood
15.3.2 Prior specification
15.4 Inference and Simulation Method
15.4.1 Bayesian inference
15.4.2 Markov Chain Monte Carlo Sampling
15.5 Simulation Analysis
15.6 Real data analysis
15.7 Conclusion
16 Modeling local dependence in latent vector autoregressive models
16.1 Brief summary of the English abstract
17 BayesVarSel: Bayesian Testing, Variable Selection and Model Averaging in Linear Models Using R
17.1 Model averaged estimations and predictions
17.1.1 Estimation
17.1.2 Prediction
18 Criteria for Bayesian Model Choice With Application to Variable Selection
18.1 Abstract
18.2 Introduction
18.2.1 Background
18.2.2 Notation
18.3 Criteria for objective model selection priors
18.3.1 Basic criteria
18.3.2 Consistency criteria
18.3.3 Predictive matching criteria
18.3.4 Invariance criteria
18.4 Objective prior distributions for variable selection in normal linear models
18.4.1 Introduction
18.4.2 Proposed prior (the “robust prior”)
18.4.3 Choosing the hyperparameters for \(p_i^R(g)\)
18.5 Summary
19 Objective Bayesian Methods for Model Selection: Introduction and Comparison
19.1 Bayes Factors
19.1.1 Basic Framework
19.1.2 Motivation for the Bayesian Approach to Model Selection
19.1.3 Utility Functions and Prediction
19.1.4 Motivation for Objective Bayesian Model Selection
19.1.5 Difficulties in Objective Bayesian Model Selection
19.1.6 Preview
19.2 Objective Bayesian Model Selection Methods, with Illustrations in the Linear Model
19.2.1 Conventional Prior Approach
19.2.2 Intrinsic Bayes Factor (IBF) Approach
19.2.3 The Fractional Bayes Factor (FBF) Approach
20 A Review of Bayesian Variable Selection Methods: What, How and Which
20.1 Abstract
20.2 Introduction
20.2.1 Notation
20.2.2 Concepts and Properties
20.2.3 Variable Selection Methods
21 Marginal Likelihood From the Gibbs Output
21.1 Abstract
21.2 Introduction
21.3 Notation
21.4 Approach
22 Approximate Bayesian Inference with the Weighted Likelihood Bootstrap
22.1 Abstract
22.2 Introduction
22.3 The Method
22.4 Implementation and Examples
22.4.1 Iteratively Reweighted Least Squares
22.5 Using Samples From the Posterior To Evaluate the Marginal Likelihood
23 Approximate Bayesian Inference with the Weighted Likelihood Bootstrap
24 Density estimation in R
24.1 Abstract
24.2 Motivation
24.3 Theoretical approaches
24.3.1 Histogram
24.4 Kernel density estimation
24.4.1 Penalized likelihood approach
24.5 R packages for density estimation
24.5.1 Histogram
24.5.2 Kernel density estimation
24.5.3 Penalized approaches
24.5.4 Taut strings approach
24.5.5 Other packages
24.6 Density estimation computation speed
24.7 Accuracy of density estimates
24.8 Speed vs. accuracy
24.9 Conclusion
25 Integrated Nested Laplace Approximations (INLA)
25.1 Abstract
25.2 The INLA computing scheme
26 Parsimonious Covariance Matrix Estimation for Longitudinal Data
References
A paper reading record
Introduction
There is not much original content here; it is mostly excerpted from books and papers and lightly reorganized.