as_csmooth_em.Rd

Description

Create a coordinate-specific SmoothEM object (csmooth_em) with diagonal
covariance and a separable random-walk (RW) prior along the K (component)
dimension. Each coordinate \(j\) has its own penalty strength \(\lambda_j\).
Supported covariance models (diagonal only):
"homoskedastic": \(\sigma^2_j\) shared across clusters (but varies across coordinates).
"heteroskedastic": \(\sigma^2_{j,k}\) varies across both coordinates and clusters.
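The two storage layouts can be sketched as follows (the dimensions and values here are made up for illustration):

```r
d <- 3  # number of coordinates
K <- 4  # number of clusters/components

# "homoskedastic": one variance per coordinate, shared across the K clusters
sigma2_homo <- c(0.5, 1.2, 0.8)            # length-d numeric vector
stopifnot(length(sigma2_homo) == d)

# "heteroskedastic": one variance per (coordinate, cluster) pair
sigma2_het <- matrix(1, nrow = d, ncol = K)  # d-by-K numeric matrix
stopifnot(all(dim(sigma2_het) == c(d, K)))
```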
Usage

as_csmooth_em(
params,
gamma = NULL,
data = NULL,
Q_K,
lambda_vec,
rw_q = 2,
ridge = 0,
modelName = c("homoskedastic", "heteroskedastic"),
relative_lambda = TRUE,
nugget = 0,
eigen_tol = NULL,
meta = NULL
)

Arguments

params
List with fields:
pi: length-K mixing proportions.
mu: list of length K; each element is a length-d mean vector.
sigma2: either
length-d numeric vector (homoskedastic), or
d-by-K numeric matrix (heteroskedastic).
gamma
(Optional) n-by-K responsibility matrix.

data
(Optional) n-by-d data matrix.

Q_K
K-by-K base precision matrix along the components (RW prior along K).
It should be built with lambda = 1; the per-coordinate penalties are
supplied via lambda_vec.
lambda_vec
Length-d nonnegative vector of per-coordinate penalties \(\lambda_j\).

rw_q
RW order along K (e.g. 1 or 2); used as the rank deficiency in the
generalized log-determinant.

ridge
Ridge added when building Q_K (stored for provenance).

modelName
One of "homoskedastic" or "heteroskedastic".

relative_lambda
Logical; if TRUE, the prior for coordinate j is scaled by \(1/\sigma_j^2\)
(homoskedastic) or \(1/\bar\sigma_j^2\) (heteroskedastic; see Details).

nugget
Nonnegative jitter added to the variances after updates.

eigen_tol
Optional tolerance for the generalized log-determinant.

meta
Optional list of metadata.
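As a sketch of how the pieces fit together, the snippet below builds a second-difference (RW2) base precision matrix Q_K with lambda = 1, assembles a homoskedastic params list, and shows the shape of a call to the constructor. The call itself is hypothetical and commented out, since it requires the package to be loaded; everything else runs in base R.

```r
K <- 5  # number of components
d <- 3  # number of coordinates

# RW2 base precision along K: Q_K = D2' D2, built with lambda = 1.
# diff() on a matrix takes row differences, so D2 is the (K-2)-by-K
# second-difference operator and Q_K has rank deficiency rw_q = 2.
D2  <- diff(diag(K), differences = 2)
Q_K <- crossprod(D2)                     # K-by-K, row sums are zero

params <- list(
  pi     = rep(1 / K, K),                              # mixing proportions
  mu     = replicate(K, numeric(d), simplify = FALSE), # K length-d mean vectors
  sigma2 = rep(1, d)                                   # homoskedastic: length-d vector
)

# Hypothetical call (argument shapes only):
# obj <- as_csmooth_em(params, Q_K = Q_K, lambda_vec = rep(10, d), rw_q = 2)
```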
Value

An object of class csmooth_em.
Details

For modelName = "heteroskedastic" with relative_lambda = TRUE, the
reference scale for the prior on coordinate j is the mixture-weighted
variance \(\bar\sigma_j^2 = \sum_k \pi_k \sigma^2_{j,k}\). This is a
pragmatic analogue of the EEI-style scaling.
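The reference scale above is an ordinary matrix-vector product of the d-by-K variance matrix with the mixing proportions, as in this small worked example (values made up):

```r
pi_k   <- c(0.2, 0.3, 0.5)          # K = 3 mixing proportions
sigma2 <- rbind(c(1, 2, 4),         # d = 2 coordinates, sigma2[j, k]
                c(3, 3, 3))

# bar{sigma}_j^2 = sum_k pi_k * sigma2[j, k], computed for all j at once
sigma_bar2 <- as.vector(sigma2 %*% pi_k)   # length-d vector
sigma_bar2
#> [1] 2.8 3.0
```

The prior for coordinate j is then scaled by 1 / sigma_bar2[j].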