Last updated: 2025-08-29
Checks: 7 passed, 0 failed
Knit directory: Dendrogram_project_in_R/
This reproducible R Markdown analysis was created with workflowr (version 1.7.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20250829) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 201a83b. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .DS_Store
Untracked files:
Untracked: code/dendrogram.R
Untracked: comparison_weak_diff_nonseparate_n_500.rds
Untracked: data/comparison_weak_diff_nonseparate_n_500.rds
Untracked: data/comparison_weak_diff_nonseparate_n_5000.rds
Untracked: data/comparison_weak_diff_nonseparate_n_50000.rds
Untracked: data/comparison_weak_equal_separate_n_1000.rds
Untracked: data/comparison_weak_equal_separate_n_500.rds
Untracked: data/comparison_weak_equal_separate_n_5000.rds
Untracked: data/comparison_weak_equal_separate_n_50000.rds
Unstaged changes:
Modified: analysis/_site.yml
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/time_running_mclust_compare.Rmd) and HTML (docs/time_running_mclust_compare.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 201a83b | Ldo3 | 2025-08-29 | wflow_publish(c("index.Rmd", "time_running_mclust_compare.Rmd")) |
In this experiment, we compare the running time of the Dendrogram post-processing method (applied to overfitted mixture models) with the traditional approach of fitting mixture models with varying numbers of components using the Mclust package in R. The data were generated under the two scenarios discussed in the previous section: weakly identifiable, equal-size, equally spaced clusters, and weakly identifiable, different-size, overlapping clusters. In the traditional approach, we ran Mclust with the number of components varying from \(1\) to \(10\), allowing a full covariance matrix (modelNames = "VVV"), and chose the model with the lowest BIC score. In our proposed approach, we ran Mclust once with \(10\) components and then applied the Dendrogram algorithm to the fitted model. For the output recorded at each level of the Dendrogram, we computed the associated BIC score and chose the model with the smallest BIC value. The following two tables show the results of 20 replications for each of the sample sizes \(n = 500, 5000, 50000\). The Dendrogram approach clearly saves computation time, since the model only needs to be fitted once, and it still achieves a good model relative to the MLE output, as judged by the reported Wasserstein distances and BIC scores.
Note: This site contains all the code we used for the experiment and can be used to reproduce the results. The output is saved as .rds files in the data folder. The Dendrogram routine dsc_location_scale_gaussian_R and the BIC helper bic_gmm are not defined on this page; they presumably live in code/dendrogram.R, which appears as untracked in the Git status above.
# Generate samples from a Gaussian mixture
gen_mix_Gauss <- function(n, p, mus_true, Sigmas_true) {
k0 <- length(p)
d <- length(mus_true[[1]])
y <- sample(1:k0, size = n, replace = TRUE, prob = p) # cluster labels
X <- matrix(0, nrow = n, ncol = d)
for (i in 1:k0) {
idx <- which(y == i)
if (length(idx) > 0) {
X[idx, ] <- rmvnorm(length(idx), mean = mus_true[[i]], sigma = Sigmas_true[[i]])
}
}
return(list(X = X, y = y))
}
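To illustrate the generator, here is a minimal usage sketch; all values are arbitrary (rmvnorm comes from the mvtnorm package, which is loaded below).
# Sanity check for the generator: two equally weighted, well-separated
# 2D components. All values here are arbitrary illustrations.
library(mvtnorm)
toy <- gen_mix_Gauss(
  n = 100,
  p = c(0.5, 0.5),
  mus_true = list(c(0, 0), c(5, 5)),
  Sigmas_true = list(diag(2), diag(2))
)
table(toy$y)   # component counts, roughly 50/50
dim(toy$X)     # 100 x 2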
# Average log-likelihood for a Gaussian mixture
average_llh <- function(X, p, mu, Sigma) {
  n <- nrow(X)
  k <- length(p)
  # n x k matrix of per-component log-densities
  logpdfs <- sapply(1:k, function(i) {
    dmvnorm(X, mean = mu[[i]], sigma = Sigma[[i]], log = TRUE)
  })
  # Add the log-weights column-wise, then log-sum-exp across components;
  # note that log(p) + logpdfs would recycle log(p) down the rows instead
  llh_bar <- mean(matrixStats::rowLogSumExps(sweep(logpdfs, 2, log(p), "+")))
  return(llh_bar)
}
library(mvtnorm)
gmm_loglik <- function(X, pis, mus, Sigmas) {
n <- nrow(X)
k <- length(pis)
# Matrix: n x k of component densities
dens <- sapply(1:k, function(j) {
dmvnorm(X, mean = mus[j, ], sigma = Sigmas[[j]], log = FALSE)
})
# Mixture likelihood per point
mixture_prob <- dens %*% pis
# Total log-likelihood
sum(log(mixture_prob + 1e-12))
}
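As a consistency check, the two log-likelihood helpers should agree up to the factor \(n\) and the tiny numerical stabilizer in gmm_loglik. Note the interface difference: average_llh takes the means as a list, while gmm_loglik takes a matrix with one row per component. The test values below are arbitrary.
# Consistency check: n * average_llh should match gmm_loglik
# (up to the 1e-12 stabilizer). Arbitrary two-component test values.
set.seed(3)
Xll <- matrix(rnorm(60), nrow = 30, ncol = 2)
p2 <- c(0.3, 0.7)
mu_list <- list(c(0, 0), c(1, 1))
Sig_list <- list(diag(2), diag(2))
all.equal(
  nrow(Xll) * average_llh(Xll, p2, mu_list, Sig_list),
  gmm_loglik(Xll, p2, do.call(rbind, mu_list), Sig_list)
)  # TRUE up to numerical tolerance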
omega <- function(n) log(n)
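The BIC helper bic_gmm called throughout this page is not defined here (it presumably lives in code/dendrogram.R, per the note above). For readability, here is a minimal sketch consistent with how it is called below, using omega(n) as the penalty weight; the exact parameter count it uses is our assumption.
# Hedged sketch of bic_gmm; the actual definition is in code/dendrogram.R.
# Assumes llh is the TOTAL log-likelihood and counts the free parameters of
# a k-component Gaussian mixture in d dimensions: (k - 1) mixing weights,
# k*d means, and k*d*(d+1)/2 entries per full covariance matrix.
bic_gmm <- function(llh, n, k, d, cov = "full") {
  n_params <- (k - 1) + k * d +
    if (cov == "full") k * d * (d + 1) / 2 else k * d
  -2 * llh + omega(n) * n_params   # omega(n) = log(n): the usual BIC penalty
}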
# Class: Fixed covariance Gaussian mixture
FixedCovMixture <- setRefClass(
"FixedCovMixture",
fields = list(
n_components = "numeric",
cov = "matrix",
max_iter = "numeric",
random_state = "numeric",
tol = "numeric",
p_ = "numeric",
mean_ = "matrix",
n_iter_ = "numeric"
),
methods = list(
initialize = function(n_components, cov, max_iter = 100, random_state = numeric(0), tol = 1e-4) {
n_components <<- n_components
cov <<- cov
max_iter <<- max_iter
random_state <<- random_state
tol <<- tol
p_ <<- rep(1 / n_components, n_components)
},
fit = function(X) {
if (length(random_state) > 0) set.seed(random_state)
n_obs <- nrow(X)
n_features <- ncol(X)
# initialize centers
mean_ <<- X[sample(1:n_obs, size = n_components), , drop = FALSE]
for (i in 1:max_iter) {
res <- updated_centers(X)
new_p <- res$p
new_centers <- res$centers
if (sum(abs(new_centers - mean_)) < tol) {
break
} else {
mean_ <<- new_centers
p_ <<- new_p
}
}
n_iter_ <<- i
},
updated_centers = function(X) {
  cluster_posterior <- predict_proba(X)
  # M-step: normalize posterior mass within each component (columns),
  # so that new_centers are weighted means rather than weighted sums
  # (the rows of the posterior already sum to 1)
  weight <- sweep(cluster_posterior, 2, colSums(cluster_posterior), "/")
  new_p <- colSums(cluster_posterior) / sum(cluster_posterior)
  new_centers <- t(weight) %*% X
  return(list(p = new_p, centers = new_centers))
},
predict_proba = function(X) {
likelihood <- sapply(1:n_components, function(i) {
dmvnorm(X, mean = mean_[i, ], sigma = cov)
})
weights <- p_ * t(likelihood)
cluster_posterior <- t(weights) / colSums(weights)
return(cluster_posterior)
},
predict = function(X) {
post <- predict_proba(X)
return(max.col(post, ties.method = "first"))
}
)
)
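The FixedCovMixture class is not used in the timing comparison below; as a quick illustration, here is a minimal usage sketch with arbitrary values (rmvnorm comes from mvtnorm, loaded above).
# Usage sketch: two well-separated 2D components with a shared identity
# covariance. All values here are arbitrary illustrations.
set.seed(1)
Xtoy <- rbind(
  rmvnorm(100, mean = c(0, 0), sigma = diag(2)),
  rmvnorm(100, mean = c(4, 4), sigma = diag(2))
)
mix <- FixedCovMixture$new(n_components = 2, cov = diag(2), random_state = 1)
mix$fit(Xtoy)
mix$mean_                  # estimated centers
table(mix$predict(Xtoy))   # cluster sizes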
fast_dmvnorm <- function(X, mean, sigma) {
d <- ncol(X)
n <- nrow(X)
# Cholesky decomposition
cholS <- chol(sigma)
logdet <- 2 * sum(log(diag(cholS)))
# Centered data
Xc <- sweep(X, 2, mean, "-")
# Solve using triangular system (avoids matrix inverse!)
Q <- forwardsolve(t(cholS), t(Xc)) # d x n
quad <- colSums(Q^2) # quadratic form
const <- -0.5 * d * log(2 * pi)
dens <- exp(const - 0.5 * logdet - 0.5 * quad)
return(dens)
}
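A quick agreement check against mvtnorm::dmvnorm (the test values are arbitrary):
# fast_dmvnorm should match dmvnorm up to numerical tolerance.
set.seed(2)
Xchk <- matrix(rnorm(20), nrow = 10, ncol = 2)
Schk <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
all.equal(
  fast_dmvnorm(Xchk, mean = c(1, -1), sigma = Schk),
  dmvnorm(Xchk, mean = c(1, -1), sigma = Schk)
)  # TRUE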
gmm_loglik_fast <- function(X, pis, mus, Sigmas) {
n <- nrow(X)
k <- length(pis)
# Compute n x k density matrix
dens <- sapply(1:k, function(j) {
fast_dmvnorm(X, mean = mus[j, ], sigma = Sigmas[[j]])
})
# Weighted mixture likelihood
mixture_prob <- dens %*% pis
# Total log-likelihood
sum(log(mixture_prob + 1e-12))
}
library(transport)
# Compute the Wasserstein distance between two Gaussian mixtures, treating
# each mixture as a discrete measure on its component means
# (covariances are ignored)
wasserstein_mixture <- function(p_true, mus_true, p_hat, mus_hat, p = 2) {
  # p_true:   vector of true mixing proportions (sums to 1)
  # mus_true: matrix of true means (rows = components, cols = dimensions)
  # p_hat:    inferred mixing proportions (sums to 1)
  # mus_hat:  inferred means (rows = components, cols = dimensions)
  # p:        Wasserstein order (default = 2)
  # Make sure the weights sum to 1
  p_true <- p_true / sum(p_true)
  p_hat <- p_hat / sum(p_hat)
  # Cost matrix: pairwise L^p distances between true and inferred means
  cost <- as.matrix(dist(rbind(mus_true, mus_hat), method = "euclidean"))^p
  cost <- cost[1:nrow(mus_true), (nrow(mus_true) + 1):(nrow(mus_true) + nrow(mus_hat))]
  # Solve the optimal transport problem; the plan is a data frame with
  # columns from, to, and mass
  ot_plan <- transport(p_true, p_hat, costm = cost, method = "networkflow")
  # Rebuild the plan as a matrix with the same dimensions as cost
  # (sizing it by max(from)/max(to) could fail if a component receives no mass)
  mat <- matrix(0, nrow = nrow(mus_true), ncol = nrow(mus_hat))
  for (i in 1:nrow(ot_plan)) {
    mat[ot_plan$from[i], ot_plan$to[i]] <- ot_plan$mass[i]
  }
  # Wasserstein distance
  Wp <- (sum(mat * cost))^(1 / p)
  return(Wp)
}
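As a sanity check, the distance between a mixture and itself should be zero (values below are arbitrary):
# A mixture should be at Wasserstein distance 0 from itself.
mu_chk <- rbind(c(0, 0), c(3, 3))
wasserstein_mixture(c(0.4, 0.6), mu_chk, c(0.4, 0.6), mu_chk)  # 0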
The true model has 4 components in 6-dimensional space.
set.seed(0)
# 4 components, equal weights
ps_true <- rep(1/4, 4)
ps.true = ps_true
# Means: 4 vectors in 6D
mus_true <- list()
for (i in 1:4) {
mu <- rep(0, 6)
mu[i] <- 3
mus_true[[i]] <- mu
}
mus.true = do.call(rbind, mus_true)
# Identity covariance for each component
Sigmas_true <- replicate(4, diag(6), simplify = FALSE)
# Sample n=500 points
n <- 500
gen <- gen_mix_Gauss(n, ps_true, mus_true, Sigmas_true)
X <- gen$X # data matrix (500 × 6)
y <- gen$y # cluster labels (1–4)
W1_rep = c()
BICs = c()
DBICs = c()
times = c()
W1d_rep = c()
timeds= c()
n <- 500
ps_true <- rep(1/4, 4)
ps.true = ps_true
table_record = tibble(W1_rep = c(),
W1d_rep = c(),
BICs = c(),
DBICs = c(),
times = c(),
timeds= c(),
N0_comps = c(),
N0_comps_d = c())
# Means: 4 vectors in 6D
mus_true <- list()
for (i in 1:4) {
mu <- rep(0, 6)
mu[i] <- 3
mus_true[[i]] <- mu
}
mus.true = do.call(rbind, mus_true)
# Identity covariance for each component
Sigmas_true <- replicate(4, diag(6), simplify = FALSE)
for (rep in 1:20)
{
print(paste("The replicate ",rep))
gen <- gen_mix_Gauss(n, ps_true, mus_true, Sigmas_true)
X <- gen$X # data matrix (500 × 6)
y <- gen$y # cluster labels (1–4)
tic()
fit <- Mclust(X, G = 1:10, modelNames = "VVV" )
# summary(fit)
print(paste("BIC", min(-fit$BIC) ))
t <- toc(log = TRUE, quiet = TRUE) # stop timer, don’t print
ts <- t$toc - t$tic
P_hat = fit$parameters$pro
Mus_hat = fit$parameters$mean
Mus_hat = lapply(1:ncol(Mus_hat), function(j) Mus_hat[, j])
Mus_hat <- do.call(rbind, Mus_hat)
W1 <- wasserstein_mixture(ps.true, mus.true, P_hat, Mus_hat)
#### Dendrogram
tic()
# --- 1. Fit mixture model in 6D ---
fit2 <- Mclust(X, G = 10, modelNames = "VVV")
# summary(fit)
# Estimated parameters
p_hat = fit2$parameters$pro
mus_hat = fit2$parameters$mean # means of clusters
mus_hat =lapply(1:ncol(mus_hat), function(j) mus_hat[, j])
mus_hat <- do.call(rbind, mus_hat)
sigma_hat <- fit2$parameters$variance$sigma # covariance matrices
# fit$classification[1:10] # cluster labels for first 10 points
Sigma_list <- lapply(1:dim(sigma_hat)[3], function(i) sigma_hat[,,i])
dsc_result <- dsc_location_scale_gaussian_R(pi = p_hat, mus = mus_hat, Sigmas = Sigma_list)
td <- toc(log = TRUE, quiet = TRUE) # stop timer, don’t print
tds <- td$toc - td$tic
results <- vapply(seq_along(Sigma_list), function(i) {
ps_merge <- dsc_result$pi[[i]]
mus_merge <- dsc_result$mus[[i]]
Sigmas_merge <- dsc_result$Sigmas[[i]]
llh <- gmm_loglik(X, ps_merge, mus_merge, Sigmas_merge)
bic <- bic_gmm(llh, n = n, k = length(ps_merge), d = 6, cov = "full")
c(llh = llh, bic = bic)
}, numeric(2))
# llhs <- results["llh", ]
dbics <- results["bic", ]
ind_out = which.min(dbics)
print(paste("DBIC", min(dbics)))
ps_merge <- dsc_result$pi[[ind_out]]
list_group_merge <- dsc_result$l[[ind_out]]
mus_merge <- dsc_result$mus[[ind_out]]
Sigmas_merge <- dsc_result$Sigmas[[ind_out]]
K_list <- dsc_result$K
d_list <- dsc_result$d
W1d <- wasserstein_mixture(ps.true, mus.true, ps_merge, mus_merge)
tr = tibble(W1_rep = W1,
W1d_rep = W1d,
BICs = min(-fit$BIC),
DBICs = min(dbics),
times = ts,
timeds= tds,
N0_comps = length(P_hat),
N0_comps_d = length(ps_merge))
table_record = bind_rows(table_record, tr)
}
[1] "The replicate 1"
[1] "BIC 10365.3669590681"
[1] "DBIC 10413.097437157"
[1] "The replicate 2"
[1] "BIC 10281.0498656991"
[1] "DBIC 10333.7001698371"
[1] "The replicate 3"
[1] "BIC 10361.1336797541"
[1] "DBIC 10363.5452182583"
[1] "The replicate 4"
[1] "BIC 10312.4294769872"
[1] "DBIC 10333.665726707"
[1] "The replicate 5"
[1] "BIC 10194.8265898834"
[1] "DBIC 10210.0348941035"
[1] "The replicate 6"
[1] "BIC 10375.5488572597"
[1] "DBIC 10371.1437538359"
[1] "The replicate 7"
[1] "BIC 10373.3491515159"
[1] "DBIC 10402.2700710546"
[1] "The replicate 8"
[1] "BIC 10399.0576405478"
[1] "DBIC 10382.6694961063"
[1] "The replicate 9"
[1] "BIC 10236.3853474968"
[1] "DBIC 10216.7224646882"
[1] "The replicate 10"
[1] "BIC 10395.144345086"
[1] "DBIC 10428.2925197005"
[1] "The replicate 11"
[1] "BIC 10493.6771709449"
[1] "DBIC 10499.2484153577"
[1] "The replicate 12"
[1] "BIC 10277.0296353253"
[1] "DBIC 10324.7239803074"
[1] "The replicate 13"
[1] "BIC 10236.3578137963"
[1] "DBIC 10246.6880286289"
[1] "The replicate 14"
[1] "BIC 10232.1449702883"
[1] "DBIC 10242.7208878659"
[1] "The replicate 15"
[1] "BIC 10256.9609592775"
[1] "DBIC 10281.1423051682"
[1] "The replicate 16"
[1] "BIC 10173.9412235341"
[1] "DBIC 10200.9731211174"
[1] "The replicate 17"
[1] "BIC 10209.6176114794"
[1] "DBIC 10234.5946606171"
[1] "The replicate 18"
[1] "BIC 10167.1003028829"
[1] "DBIC 10201.5371861854"
[1] "The replicate 19"
[1] "BIC 10309.7504657148"
[1] "DBIC 10347.0360240598"
[1] "The replicate 20"
[1] "BIC 10362.3160491947"
[1] "DBIC 10370.675524493"
# saveRDS(table_record, paste("comparison_weak_equal_separate_n_",n,".rds", sep = ""))
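Before saving, a quick way to summarize the replicates (a sketch; table_record holds one row per replicate, and dplyr is attached via tidyverse, as sessionInfo below shows):
# Average timing, Wasserstein distance, and BIC across the 20 replicates.
table_record %>%
  summarise(across(c(times, timeds, W1_rep, W1d_rep, BICs, DBICs), mean))
# How often each number of components was selected by the two approaches:
count(table_record, N0_comps, N0_comps_d)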
# Re-run the Dendrogram pipeline 50 times on the same data (the last
# replicate generated above) to collect additional timing measurements;
# this pass uses gmm_loglik_fast for the per-level BIC computation
for (rep in 1:50)
{
tic()
# --- 1. Fit mixture model in 6D ---
fit <- Mclust(X, G = 10, modelNames = "VVV")
# summary(fit)
# Estimated parameters
p_hat = fit$parameters$pro
mus_hat = fit$parameters$mean # means of clusters
mus_hat =lapply(1:ncol(mus_hat), function(j) mus_hat[, j])
mus_hat <- do.call(rbind, mus_hat)
sigma_hat <- fit$parameters$variance$sigma # covariance matrices
# fit$classification[1:10] # cluster labels for first 10 points
Sigma_list <- lapply(1:dim(sigma_hat)[3], function(i) sigma_hat[,,i])
dsc_result <- dsc_location_scale_gaussian_R(pi = p_hat, mus = mus_hat, Sigmas = Sigma_list)
td <- toc(log = TRUE, quiet = TRUE) # stop timer, don’t print
timeds[rep] <- td$toc - td$tic
results <- vapply(seq_along(Sigma_list), function(i) {
ps_merge <- dsc_result$pi[[i]]
mus_merge <- dsc_result$mus[[i]]
Sigmas_merge <- dsc_result$Sigmas[[i]]
llh <- gmm_loglik_fast(X, ps_merge, mus_merge, Sigmas_merge)
bic <- bic_gmm(llh, n = n, k = length(ps_merge), d = 6, cov = "full")
c(llh = llh, bic = bic)
}, numeric(2))
# llhs <- results["llh", ]
DBICs <- results["bic", ]
ind_out = which.min(DBICs)
ps_merge <- dsc_result$pi[[ind_out]]
list_group_merge <- dsc_result$l[[ind_out]]
mus_merge <- dsc_result$mus[[ind_out]]
Sigmas_merge <- dsc_result$Sigmas[[ind_out]]
K_list <- dsc_result$K
d_list <- dsc_result$d
W1d_rep[rep] <- wasserstein_mixture(ps.true, mus.true, ps_merge, mus_merge)
}
W1_rep = c()
BICs = c()
DBICs = c()
times = c()
W1d_rep = c()
timeds= c()
n <- 500
# Unequal mixing weights: one small component (0.05), the other three equal
p_rest = (1-0.05)/3
ps_true <- replicate(4, p_rest)
ps_true[1] = 0.05
ps.true = ps_true
table_record = tibble(W1_rep = c(),
W1d_rep = c(),
BICs = c(),
DBICs = c(),
times = c(),
timeds= c(),
N0_comps = c(),
N0_comps_d = c())
# Means: 4 vectors in 6D
mus_true <- list()
for (i in 1:4) {
mu <- rep(0, 6)
mu[i] <- 3
mus_true[[i]] <- mu
}
# Shrink the third and fourth means toward the origin so those clusters overlap
mus_true[[3]] = mus_true[[3]]/3
mus_true[[4]] = mus_true[[4]]/3
mus.true = do.call(rbind, mus_true)
# Identity covariance for each component
Sigmas_true <- replicate(4, diag(6), simplify = FALSE)
for (rep in 1:20)
{
print(paste("The replicate ",rep))
gen <- gen_mix_Gauss(n, ps_true, mus_true, Sigmas_true)
X <- gen$X # data matrix (500 × 6)
y <- gen$y # cluster labels (1–4)
tic()
fit <- Mclust(X, G = 1:10, modelNames = "VVV" )
# summary(fit)
print(paste("BIC", min(-fit$BIC) ))
t <- toc(log = TRUE, quiet = TRUE) # stop timer, don’t print
ts <- t$toc - t$tic
P_hat = fit$parameters$pro
Mus_hat = fit$parameters$mean
Mus_hat = lapply(1:ncol(Mus_hat), function(j) Mus_hat[, j])
Mus_hat <- do.call(rbind, Mus_hat)
W1 <- wasserstein_mixture(ps.true, mus.true, P_hat, Mus_hat)
#### Dendrogram
tic()
# --- 1. Fit mixture model in 6D ---
fit2 <- Mclust(X, G = 10, modelNames = "VVV")
# summary(fit)
# Estimated parameters
p_hat = fit2$parameters$pro
mus_hat = fit2$parameters$mean # means of clusters
mus_hat =lapply(1:ncol(mus_hat), function(j) mus_hat[, j])
mus_hat <- do.call(rbind, mus_hat)
sigma_hat <- fit2$parameters$variance$sigma # covariance matrices
# fit$classification[1:10] # cluster labels for first 10 points
Sigma_list <- lapply(1:dim(sigma_hat)[3], function(i) sigma_hat[,,i])
dsc_result <- dsc_location_scale_gaussian_R(pi = p_hat, mus = mus_hat, Sigmas = Sigma_list)
td <- toc(log = TRUE, quiet = TRUE) # stop timer, don’t print
tds <- td$toc - td$tic
results <- vapply(seq_along(Sigma_list), function(i) {
ps_merge <- dsc_result$pi[[i]]
mus_merge <- dsc_result$mus[[i]]
Sigmas_merge <- dsc_result$Sigmas[[i]]
llh <- gmm_loglik(X, ps_merge, mus_merge, Sigmas_merge)
bic <- bic_gmm(llh, n = n, k = length(ps_merge), d = 6, cov = "full")
c(llh = llh, bic = bic)
}, numeric(2))
# llhs <- results["llh", ]
dbics <- results["bic", ]
ind_out = which.min(dbics)
print(paste("DBIC", min(dbics)))
ps_merge <- dsc_result$pi[[ind_out]]
list_group_merge <- dsc_result$l[[ind_out]]
mus_merge <- dsc_result$mus[[ind_out]]
Sigmas_merge <- dsc_result$Sigmas[[ind_out]]
K_list <- dsc_result$K
d_list <- dsc_result$d
W1d <- wasserstein_mixture(ps.true, mus.true, ps_merge, mus_merge)
tr = tibble(W1_rep = W1,
W1d_rep = W1d,
BICs = min(-fit$BIC),
DBICs = min(dbics),
times = ts,
timeds= tds,
N0_comps = length(P_hat),
N0_comps_d = length(ps_merge))
table_record = bind_rows(table_record, tr)
}
[1] "The replicate 1"
[1] "BIC 9587.67994869208"
[1] "DBIC 9587.67055613819"
[1] "The replicate 2"
[1] "BIC 9552.80471461943"
[1] "DBIC 9552.80377487441"
[1] "The replicate 3"
[1] "BIC 9495.60907075738"
[1] "DBIC 9495.60859684627"
[1] "The replicate 4"
[1] "BIC 9515.13124563094"
[1] "DBIC 9515.13085629009"
[1] "The replicate 5"
[1] "BIC 9658.97783315212"
[1] "DBIC 9658.96854497761"
[1] "The replicate 6"
[1] "BIC 9504.20781612024"
[1] "DBIC 9504.20775575501"
[1] "The replicate 7"
[1] "BIC 9636.57253274462"
[1] "DBIC 9636.57236218899"
[1] "The replicate 8"
[1] "BIC 9576.74428986305"
[1] "DBIC 9576.74406803459"
[1] "The replicate 9"
[1] "BIC 9392.32767379677"
[1] "DBIC 9392.32727227006"
[1] "The replicate 10"
[1] "BIC 9517.2796993098"
[1] "DBIC 9517.27042781164"
[1] "The replicate 11"
[1] "BIC 9411.97628401469"
[1] "DBIC 9411.97602498665"
[1] "The replicate 12"
[1] "BIC 9505.01357388764"
[1] "DBIC 9505.01312143638"
[1] "The replicate 13"
[1] "BIC 9630.61536476083"
[1] "DBIC 9630.61529211501"
[1] "The replicate 14"
[1] "BIC 9410.40365700684"
[1] "DBIC 9410.40284377255"
[1] "The replicate 15"
[1] "BIC 9396.78762519171"
[1] "DBIC 9396.78732062523"
[1] "The replicate 16"
[1] "BIC 9550.62126311383"
[1] "DBIC 9550.62108215508"
[1] "The replicate 17"
[1] "BIC 9554.481988093"
[1] "DBIC 9554.48163500315"
[1] "The replicate 18"
[1] "BIC 9395.93368852472"
[1] "DBIC 9395.93351285635"
[1] "The replicate 19"
[1] "BIC 9627.96224412455"
[1] "DBIC 9627.96210313418"
[1] "The replicate 20"
[1] "BIC 9570.57798722047"
[1] "DBIC 9570.57788802519"
saveRDS(table_record, paste("comparison_weak_diff_nonseparate_n_",n,".rds", sep=""))
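The saved results can be reloaded later; the data/ prefix below assumes the .rds file was moved into the data folder, as described in the note at the top of this page.
# Reload the saved results for the n = 500 overlapping-clusters scenario.
# The data/ path assumes the .rds file was moved into the data folder.
table_record <- readRDS("data/comparison_weak_diff_nonseparate_n_500.rds")
head(table_record)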
sessionInfo()
R version 4.5.1 (2025-06-13)
Platform: aarch64-apple-darwin20
Running under: macOS Ventura 13.1
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.1
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
time zone: America/Chicago
tzcode source: internal
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] transport_0.15-4 mvtnorm_1.3-3 lubridate_1.9.4 forcats_1.0.0
[5] stringr_1.5.1 dplyr_1.1.4 purrr_1.1.0 readr_2.1.5
[9] tidyr_1.3.1 tibble_3.3.0 tidyverse_2.0.0 tictoc_1.2.1
[13] cluster_2.1.8.1 ggplot2_3.5.2 mclust_6.1.1 matrixStats_1.5.0
[17] MASS_7.3-65 here_1.0.1
loaded via a namespace (and not attached):
[1] sass_0.4.10 generics_0.1.4 stringi_1.8.7 hms_1.1.3
[5] digest_0.6.37 magrittr_2.0.3 timechange_0.3.0 evaluate_1.0.4
[9] grid_4.5.1 RColorBrewer_1.1-3 fastmap_1.2.0 rprojroot_2.1.1
[13] workflowr_1.7.2 jsonlite_2.0.0 whisker_0.4.1 promises_1.3.3
[17] scales_1.4.0 codetools_0.2-20 jquerylib_0.1.4 cli_3.6.5
[21] rlang_1.1.6 withr_3.0.2 cachem_1.1.0 yaml_2.3.10
[25] tools_4.5.1 tzdb_0.5.0 httpuv_1.6.16 vctrs_0.6.5
[29] R6_2.6.1 lifecycle_1.0.4 git2r_0.36.2 fs_1.6.6
[33] pkgconfig_2.0.3 pillar_1.11.0 bslib_0.9.0 later_1.4.3
[37] gtable_0.3.6 data.table_1.17.8 glue_1.8.0 Rcpp_1.1.0
[41] xfun_0.53 tidyselect_1.2.1 rstudioapi_0.17.1 knitr_1.50
[45] farver_2.1.2 htmltools_0.5.8.1 rmarkdown_2.29 compiler_4.5.1