add subsampling support for external backends #1870

Open
vandenman wants to merge 1 commit into paul-buerkner:master from vandenman:pdmps
Conversation

@vandenman

Hi! I'm working on PDMPSamplersR, an R interface to the Julia package PDMPSamplers.jl. We were briefly in touch via email, and I finally found the time to create a clean-ish PR.

PDMPs (Piecewise Deterministic Markov Processes) are continuous-time MCMC algorithms that remain exact when the gradient is replaced by an unbiased estimator. A common way to obtain such an estimator is to take random subsamples of the full dataset. This is particularly applicable to GLMs, and thus to brms, because we can replace y and X by a random subsample and everything still works.
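To make the unbiasedness concrete, here is a self-contained R sketch (all names and data are illustrative, not part of the PR): a subsampled log-likelihood gradient, rescaled by N/m, has the full-data gradient as its expectation.

```r
set.seed(1)
N <- 500; m <- 50
X <- cbind(1, rnorm(N))                       # intercept + one covariate
y <- rbinom(N, 1, plogis(X %*% c(-0.3, 0.8))) # simulated Bernoulli outcomes

# Full-data gradient of the Bernoulli log-likelihood w.r.t. beta
full_grad <- function(beta) {
  drop(crossprod(X, y - plogis(X %*% beta)))
}

# Unbiased estimator: gradient on a random subsample, rescaled by N / m
sub_grad <- function(beta) {
  idx <- sample.int(N, m)
  Xs <- X[idx, , drop = FALSE]
  drop((N / m) * crossprod(Xs, y[idx] - plogis(Xs %*% beta)))
}

# Averaging many subsampled gradients recovers the full gradient
beta0 <- c(0, 0)
est <- rowMeans(replicate(5000, sub_grad(beta0)))
```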

From a user perspective, this PR adds a subsampling() function and a subsample argument to stancode(). For a mixed-effects model like y ~ x1 + x2 + (1 | group) + (1 | subgroup), the external backend (i.e., PDMPSamplersR) calls:

sub <- brms::subsampling(
  size_fn  = "pdmp_get_subsample_size",
  index_fn = "pdmp_get_subsample_index",
  wrap     = list(Y = "get_subsampled_Y_int", Xc = "get_subsampled_Xc")
)
scode <- brms::stancode(formula, data, family, subsample = sub)

and the generated model block changes as follows:

 model {
   if (!prior_only) {
-    vector[N] mu = rep_vector(0.0, N);
+    vector[pdmp_get_subsample_size()] mu = rep_vector(0.0, pdmp_get_subsample_size());
     mu += Intercept;
-    for (n in 1:N) {
+    for (n in 1:pdmp_get_subsample_size()) {
+      int nn = pdmp_get_subsample_index(n);
-      mu[n] += r_1_1[J_1[n]] * Z_1_1[n] + r_2_1[J_2[n]] * Z_2_1[n];
+      mu[n] += r_1_1[J_1[nn]] * Z_1_1[nn] + r_2_1[J_2[nn]] * Z_2_1[nn];
     }
-    target += bernoulli_logit_glm_lpmf(Y | Xc, mu, b);
+    target += bernoulli_logit_glm_lpmf(get_subsampled_Y_int(Y) | get_subsampled_Xc(Xc), mu, b);
   }
 }

The external C++ functions (pdmp_get_subsample_size, pdmp_get_subsample_index, get_subsampled_Y_int, get_subsampled_Xc) are provided by the backend via stanvars and compiled into the Stan model (that part worked surprisingly easily; all the necessary machinery was already in brms). While sampling, the Julia code calls these C++ functions to resample the active indices. At first I passed new data to a compiled Stan model via BridgeStan, which would not require changes to brms, but that turned out to be too slow to be useful, hence this PR. I've seen ideas for subsampling with Stan before on GitHub (e.g., stan-dev/gmo#3), so I tried to write this in a way that could also benefit others.
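For illustration, a backend could declare these functions in the `functions` block via a stanvar like the one below. This is a sketch: the exact signatures are assumptions, and supplying the C++ definitions themselves depends on the backend's compilation setup (e.g. CmdStan's `--allow-undefined` plus a user header).

```r
library(brms)

# Forward declarations for external C++ functions; Stan accepts a
# signature followed by ";" when compiled with external C++ support.
# Signatures shown here are illustrative, not taken from the PR.
external_decls <- stanvar(
  scode = "
    int pdmp_get_subsample_size();
    int pdmp_get_subsample_index(int n);
    array[] int get_subsampled_Y_int(data array[] int Y);
    matrix get_subsampled_Xc(data matrix Xc);
  ",
  block = "functions"
)
```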

Here is a small benchmark on a Bernoulli logistic regression (N=500, excluding compilation time):

Fixed effects (y ~ x1 + ... + x10, m=50):

| method            | wall time (s) | min ESS | grad evals | min ESS/s | min ESS/grad |
|-------------------|---------------|---------|------------|-----------|--------------|
| Stan / NUTS       | 0.27          | 1674    | 6772       | 6246      | 0.247        |
| PDMP (full)       | 10.0          | 12178   | 262715     | 1215      | 0.046        |
| PDMP + sub (m=50) | 2.24          | 13856   | 48051      | 6172      | 0.288        |

Mixed effects (y ~ x1 + x2 + (1 | group) + (1 | subgroup), 20 groups, 5 subgroups, m=50):

| method            | wall time (s) | min ESS | grad evals | min ESS/s | min ESS/grad |
|-------------------|---------------|---------|------------|-----------|--------------|
| Stan / NUTS       | 3.38          | 473     | 26578      | 140       | 0.018        |
| PDMP (full)       | 43.7          | 659     | 482273     | 15        | 0.001        |
| PDMP + sub (m=50) | 7.24          | 4903    | 30922      | 677       | 0.159        |

The gradient evaluations for the subsampled methods are scaled by m/N to reflect the per-observation cost. Furthermore, the ESS for the PDMPs is computed with a continuous-time algorithm rather than the one rstan uses, so take these numbers with a grain of salt. I think they look promising, but I'm not 100% sure yet that it's really a fair comparison.

Reprex code

Note: to run this you will need to install Julia as well.

library(PDMPSamplersR) # vandenman/PDMPSamplersR#brms_interface
library(brms) # vandenman/brms@pdmps

cache_dir <- file.path(tempdir(), "brms_pr_reprex_cache")
dir.create(cache_dir, showWarnings = FALSE, recursive = TRUE)

cache_brmsfit <- function(label, expr) {
  path <- file.path(cache_dir, paste0(label, ".rds"))
  if (file.exists(path)) {
    cli::cli_inform("Loading cached {label}")
    return(readRDS(path))
  }
  fit <- expr
  saveRDS(fit, path)
  fit
}

set.seed(42)

# Bernoulli logistic regression: y ~ x1 + ... + x10, N = 500
N <- 500L
p <- 10
X <- matrix(rnorm(N * p), nrow = N)
colnames(X) <- paste0("x", seq_len(p))
beta_true <- c(0.8, -1.2, 0.5, rep(0, p - 3))
intercept_true <- -0.3
eta <- intercept_true + X %*% beta_true
y <- rbinom(N, 1, plogis(eta))
df <- data.frame(y = y, X)

formula <- as.formula(paste("y ~", paste0("x", seq_len(p), collapse = " + ")))
family <- bernoulli()
T_sim <- 10000
m <- 50L

# ── Compilation phase (not timed) ─────────────────────────────────────────
cli::cli_h2("Compiling Stan model (brms)")
options(mc.cores = 1)
fit_stan_warmup <- cache_brmsfit("glm_stan", brm(
  formula, data = df, family = family,
  chains = 1, iter = 10, warmup = 5,
  cores = 1, seed = 1, refresh = 0, silent = 2
))

cli::cli_h2("Compiling PDMP models (BridgeStan + Julia)")
fit_pdmp_warmup <- brm_pdmp(formula, data = df, family = family,
                            flow = "AdaptiveBoomerang",
                            T = 100, n_chains = 1L,
                            show_progress = FALSE)

fit_sub_warmup <- brm_pdmp(formula, data = df, family = family,
                           flow = "AdaptiveBoomerang",
                           adaptive_scheme = "diagonal",
                           T = 100, n_chains = 1L,
                           subsample_size = m, resample_dt = 5,
                           n_anchor_updates = 20L,
                           use_anchor_bank = TRUE,
                           hvp_mode = "none",
                           use_fd_hvp = TRUE,
                           show_progress = FALSE)
rm(fit_pdmp_warmup, fit_sub_warmup)

# ── Sampling phase ────────────────────────────────────────────────────────

# --- 1. Stan / NUTS ---
cli::cli_h3("Stan / NUTS")
fit_stan <- brm(formula, data = df, family = family,
                fit = fit_stan_warmup,
                chains = 1, iter = 2000, warmup = 1000,
                cores = 1, seed = 1, refresh = 0, silent = 2)
time_stan <- sum(rstan::get_elapsed_time(fit_stan$fit))

# --- 2. PDMP (full data) ---
cli::cli_h3("PDMP (full data)")
fit_pdmp <- brm_pdmp(formula, data = df, family = family,
                     flow = "AdaptiveBoomerang",
                     T = T_sim, n_chains = 1L,
                     show_progress = FALSE)
time_pdmp <- as.numeric(attr(fit_pdmp, "pdmp_stats")$elapsed_time)

# --- 3. PDMP + subsampling ---
cli::cli_h3("PDMP + subsampling")
fit_sub <- brm_pdmp(formula, data = df, family = family,
                    flow = "AdaptiveBoomerang",
                    adaptive_scheme = "diagonal",
                    T = T_sim, n_chains = 1L,
                    subsample_size = m, resample_dt = 5,
                    n_anchor_updates = 100L,
                    use_anchor_bank = TRUE,
                    hvp_mode = "none",
                    use_fd_hvp = TRUE,
                    show_progress = FALSE)
time_sub <- as.numeric(attr(fit_sub, "pdmp_stats")$elapsed_time)

# ── Results ───────────────────────────────────────────────────────────────
stan_grad_evals <- sum(sapply(
  rstan::get_sampler_params(fit_stan$fit, inc_warmup = FALSE),
  function(x) sum(x[, "n_leapfrog__"])
))
pdmp_grad_evals <- {
  ps <- attr(fit_pdmp, "pdmp_stats")
  ps$gradient_calls + ps$hessian_calls
}
sub_grad_evals <- {
  ps <- attr(fit_sub, "pdmp_stats")
  (ps$gradient_calls + ps$hessian_calls) * m / N
}

extract_stats <- function(fit, elapsed, grad_evals, fit_ref = NULL) {
  s <- summary(fit)$fixed
  ess_bulk <- s[, "Bulk_ESS"]

  ps <- attr(fit, "pdmp_stats")
  has_ct_ess <- !is.null(ps) && !is.null(ps$ct_ess_min) && is.finite(ps$ct_ess_min)
  ct_ess_min <- if (has_ct_ess) as.numeric(ps$ct_ess_min) else NA_real_
  ct_ess_med <- if (!is.null(ps) && !is.null(ps$ct_ess_median) && is.finite(ps$ct_ess_median)) {
    as.numeric(ps$ct_ess_median)
  } else if (!is.null(ps) && !is.null(ps$ct_ess_med) && is.finite(ps$ct_ess_med)) {
    as.numeric(ps$ct_ess_med)
  } else {
    NA_real_
  }

  min_ess <- if (has_ct_ess) ct_ess_min else min(ess_bulk)
  med_ess <- if (has_ct_ess) ct_ess_med else median(ess_bulk)

  out <- list(
    min_ess = min_ess,
    med_ess = med_ess,
    min_ess_s = if (!is.na(min_ess)) min_ess / elapsed else NA_real_,
    med_ess_s = if (!is.na(med_ess)) med_ess / elapsed else NA_real_,
    elapsed = elapsed,
    grad_evals = grad_evals,
    min_ess_grad = if (!is.na(min_ess)) min_ess / grad_evals else NA_real_,
    med_ess_grad = if (!is.na(med_ess)) med_ess / grad_evals else NA_real_
  )

  if (!is.null(fit_ref)) {
    fe_ref <- brms::fixef(fit_ref)
    fe <- brms::fixef(fit)

    common_terms <- intersect(rownames(fe), rownames(fe_ref))
    if (length(common_terms) > 0) {
      z <- (fe[common_terms, "Estimate"] - fe_ref[common_terms, "Estimate"]) /
        fe_ref[common_terms, "Est.Error"]
      out$max_abs_z <- max(abs(z), na.rm = TRUE)
    } else {
      out$max_abs_z <- NA_real_
    }
  } else {
    out$max_abs_z <- NA_real_
  }
  out
}

z_diagnostics <- function(fit, fit_ref, label, top_n = 5L) {
  fe_ref <- brms::fixef(fit_ref)
  fe <- brms::fixef(fit)
  common_terms <- intersect(rownames(fe), rownames(fe_ref))

  if (length(common_terms) == 0L) {
    return(tibble::tibble(method = label, term = character(), z = numeric(),
                          estimate = numeric(), reference = numeric(), ref_se = numeric()))
  }

  z <- (fe[common_terms, "Estimate"] - fe_ref[common_terms, "Estimate"]) /
    fe_ref[common_terms, "Est.Error"]

  out <- tibble::tibble(
    method = label,
    term = common_terms,
    z = as.numeric(z),
    estimate = as.numeric(fe[common_terms, "Estimate"]),
    reference = as.numeric(fe_ref[common_terms, "Estimate"]),
    ref_se = as.numeric(fe_ref[common_terms, "Est.Error"])
  )

  out[order(abs(out$z), decreasing = TRUE), , drop = FALSE][seq_len(min(top_n, nrow(out))), , drop = FALSE]
}

stats_stan <- extract_stats(fit_stan, time_stan, stan_grad_evals)
stats_pdmp <- extract_stats(fit_pdmp, time_pdmp, pdmp_grad_evals, fit_ref = fit_stan)
stats_sub <- extract_stats(fit_sub, time_sub, sub_grad_evals, fit_ref = fit_stan)

results_tbl <- tibble::tibble(
  method          = c("Stan / NUTS", "PDMP (full)", "AB diag sub+bank+FD (m=50)"),
  wall_time_s     = c(time_stan, time_pdmp, time_sub),
  max_abs_z       = c(stats_stan$max_abs_z, stats_pdmp$max_abs_z, stats_sub$max_abs_z),
  min_ESS         = c(stats_stan$min_ess, stats_pdmp$min_ess, stats_sub$min_ess),
  med_ESS         = c(stats_stan$med_ess, stats_pdmp$med_ess, stats_sub$med_ess),
  grad_evals      = c(stats_stan$grad_evals, stats_pdmp$grad_evals, stats_sub$grad_evals),
  `min_ESS/s`     = c(stats_stan$min_ess_s, stats_pdmp$min_ess_s, stats_sub$min_ess_s),
  `med_ESS/s`     = c(stats_stan$med_ess_s, stats_pdmp$med_ess_s, stats_sub$med_ess_s),
  `min_ESS/grad`  = c(stats_stan$min_ess_grad, stats_pdmp$min_ess_grad, stats_sub$min_ess_grad),
  `med_ESS/grad`  = c(stats_stan$med_ess_grad, stats_pdmp$med_ess_grad, stats_sub$med_ess_grad)
)

keep_cols <- vapply(results_tbl, function(col) !any(is.na(col)), logical(1))
results_tbl_no_na <- results_tbl[, keep_cols, drop = FALSE]

print(results_tbl_no_na, n = Inf, width = Inf)

z_diag_tbl <- rbind(
  z_diagnostics(fit_pdmp, fit_stan, "PDMP (full)", top_n = 8L),
  z_diagnostics(fit_sub, fit_stan, "AB diag sub+bank+FD (m=50)", top_n = 8L)
)

cli::cli_h3("Top |z| terms vs Stan")
print(z_diag_tbl, n = Inf, width = Inf)

# ══════════════════════════════════════════════════════════════════════════
# Part 2: Mixed-effects logistic regression
# ══════════════════════════════════════════════════════════════════════════
cli::cli_h1("Mixed-effects logistic regression")

set.seed(123)
N_mix <- 500L
n_groups <- 20L
n_subgroups <- 5L

group <- sample(seq_len(n_groups), N_mix, replace = TRUE)
subgroup <- sample(seq_len(n_subgroups), N_mix, replace = TRUE)
x1 <- rnorm(N_mix)
x2 <- rnorm(N_mix)

beta0_mix <- -0.5
beta_mix <- c(0.8, -0.6)
sd_group <- 0.7
sd_subgroup <- 0.4
re_group <- rnorm(n_groups, sd = sd_group)
re_subgroup <- rnorm(n_subgroups, sd = sd_subgroup)

eta_mix <- beta0_mix + beta_mix[1] * x1 + beta_mix[2] * x2 +
  re_group[group] + re_subgroup[subgroup]
y_mix <- rbinom(N_mix, 1, plogis(eta_mix))

df_mix <- data.frame(y = y_mix, x1 = x1, x2 = x2,
                     group = factor(group), subgroup = factor(subgroup))

formula_mix <- y ~ x1 + x2 + (1 | group) + (1 | subgroup)
m_mix <- 50L

# ── Compilation phase ─────────────────────────────────────────────────────
cli::cli_h2("Compiling mixed Stan model")
fit_mix_stan_warmup <- cache_brmsfit("mix_stan", brm(
  formula_mix, data = df_mix, family = family,
  chains = 1, iter = 10, warmup = 5,
  cores = 1, seed = 1, refresh = 0, silent = 2
))

cli::cli_h2("Compiling mixed PDMP models")
fit_mix_pdmp_warmup <- brm_pdmp(formula_mix, data = df_mix, family = family,
                                flow = "AdaptiveBoomerang",
                                T = 100, n_chains = 1L,
                                show_progress = FALSE)

fit_mix_sub_warmup <- brm_pdmp(formula_mix, data = df_mix, family = family,
                               flow = "AdaptiveBoomerang",
                               T = 100, n_chains = 1L,
                               subsample_size = m_mix,
                               show_progress = FALSE)
rm(fit_mix_pdmp_warmup, fit_mix_sub_warmup)

# ── Sampling phase ────────────────────────────────────────────────────────

# --- 1. Stan / NUTS (mixed) ---
cli::cli_h3("Stan / NUTS (mixed)")
fit_mix_stan <- brm(formula_mix, data = df_mix, family = family,
                    fit = fit_mix_stan_warmup,
                    chains = 1, iter = 2000, warmup = 1000,
                    cores = 1, seed = 1, refresh = 0, silent = 2)
time_mix_stan <- sum(rstan::get_elapsed_time(fit_mix_stan$fit))

# --- 2. PDMP full (mixed) ---
cli::cli_h3("PDMP full (mixed)")
fit_mix_pdmp <- brm_pdmp(formula_mix, data = df_mix, family = family,
                         flow = "AdaptiveBoomerang",
                         T = T_sim, n_chains = 1L,
                         show_progress = FALSE)
time_mix_pdmp <- as.numeric(attr(fit_mix_pdmp, "pdmp_stats")$elapsed_time)

# --- 3. PDMP + subsampling (mixed) ---
cli::cli_h3("PDMP + subsampling (mixed)")
fit_mix_sub <- brm_pdmp(formula_mix, data = df_mix, family = family,
                        flow = "AdaptiveBoomerang",
                        T = T_sim, n_chains = 1L,
                        subsample_size = m_mix,
                        show_progress = FALSE)
time_mix_sub <- as.numeric(attr(fit_mix_sub, "pdmp_stats")$elapsed_time)

# ── Mixed-model results ──────────────────────────────────────────────────
mix_stan_grad <- sum(sapply(
  rstan::get_sampler_params(fit_mix_stan$fit, inc_warmup = FALSE),
  function(x) sum(x[, "n_leapfrog__"])
))
mix_pdmp_grad <- {
  ps <- attr(fit_mix_pdmp, "pdmp_stats")
  ps$gradient_calls + ps$hessian_calls
}
mix_sub_grad <- {
  ps <- attr(fit_mix_sub, "pdmp_stats")
  (ps$gradient_calls + ps$hessian_calls) * m_mix / N_mix
}

# Summarize random-effect accuracy vs Stan reference
re_summary <- function(fit, fit_ref, label) {
  re <- brms::ranef(fit)
  re_ref <- brms::ranef(fit_ref)

  rows <- lapply(names(re_ref), function(grp) {
    if (is.null(re[[grp]])) return(NULL)
    est <- re[[grp]][, , "Intercept", drop = TRUE]
    ref <- re_ref[[grp]][, , "Intercept", drop = TRUE]
    common <- intersect(rownames(est), rownames(ref))
    if (length(common) == 0L) return(NULL)
    e <- est[common, "Estimate"]
    r <- ref[common, "Estimate"]
    se <- ref[common, "Est.Error"]
    z <- (e - r) / se
    tibble::tibble(
      method = label,
      grouping = grp,
      n_levels = length(common),
      rmse_vs_ref = sqrt(mean((e - r)^2)),
      cor_vs_ref = cor(e, r),
      mean_abs_z = mean(abs(z)),
      max_abs_z = max(abs(z))
    )
  })
  do.call(rbind, rows)
}

stats_mix_stan <- extract_stats(fit_mix_stan, time_mix_stan, mix_stan_grad)
stats_mix_pdmp <- extract_stats(fit_mix_pdmp, time_mix_pdmp, mix_pdmp_grad,
                                fit_ref = fit_mix_stan)
stats_mix_sub <- extract_stats(fit_mix_sub, time_mix_sub, mix_sub_grad,
                               fit_ref = fit_mix_stan)

mix_results_tbl <- tibble::tibble(
  method          = c("Stan / NUTS", "PDMP (full)", "PDMP + sub (m=50)"),
  wall_time_s     = c(time_mix_stan, time_mix_pdmp, time_mix_sub),
  max_abs_z_fe    = c(stats_mix_stan$max_abs_z, stats_mix_pdmp$max_abs_z,
                      stats_mix_sub$max_abs_z),
  min_ESS         = c(stats_mix_stan$min_ess, stats_mix_pdmp$min_ess,
                      stats_mix_sub$min_ess),
  med_ESS         = c(stats_mix_stan$med_ess, stats_mix_pdmp$med_ess,
                      stats_mix_sub$med_ess),
  grad_evals      = c(stats_mix_stan$grad_evals, stats_mix_pdmp$grad_evals,
                       stats_mix_sub$grad_evals),
  `min_ESS/s`     = c(stats_mix_stan$min_ess_s, stats_mix_pdmp$min_ess_s,
                       stats_mix_sub$min_ess_s),
  `med_ESS/s`     = c(stats_mix_stan$med_ess_s, stats_mix_pdmp$med_ess_s,
                       stats_mix_sub$med_ess_s),
  `min_ESS/grad`  = c(stats_mix_stan$min_ess_grad, stats_mix_pdmp$min_ess_grad,
                       stats_mix_sub$min_ess_grad),
  `med_ESS/grad`  = c(stats_mix_stan$med_ess_grad, stats_mix_pdmp$med_ess_grad,
                       stats_mix_sub$med_ess_grad)
)

keep_mix <- vapply(mix_results_tbl, function(col) !any(is.na(col)), logical(1))

cli::cli_h2("Mixed-effects: timing & ESS")
print(mix_results_tbl[, keep_mix, drop = FALSE], n = Inf, width = Inf)

cli::cli_h2("Mixed-effects: fixed-effect z-scores vs Stan")
mix_z_fe <- rbind(
  z_diagnostics(fit_mix_pdmp, fit_mix_stan, "PDMP (full)", top_n = 5L),
  z_diagnostics(fit_mix_sub, fit_mix_stan, "PDMP + sub (m=50)", top_n = 5L)
)
print(mix_z_fe, n = Inf, width = Inf)

cli::cli_h2("Mixed-effects: random-effect accuracy vs Stan")
mix_re_tbl <- rbind(
  re_summary(fit_mix_pdmp, fit_mix_stan, "PDMP (full)"),
  re_summary(fit_mix_sub, fit_mix_stan, "PDMP + sub (m=50)")
)
print(mix_re_tbl, n = Inf, width = Inf)

Further ideas

Not part of this PR, but could reduce coupling between brms and external backends:

  • A public constructor for backend-produced fits. Currently, PDMPSamplersR creates an empty brms object and, once sampling finishes, puts the samples back via empty_fit$fit <- stanfit; brms::rename_pars(empty_fit). Some kind of helper function could make that smoother.
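A minimal sketch of what such a helper might look like (the function name and interface are invented for illustration; this is not brms API):

```r
# Hypothetical constructor for backend-produced fits; the name and
# signature are invented. It simply wraps the two steps described above.
as_brmsfit_external <- function(empty_fit, stanfit) {
  stopifnot(inherits(empty_fit, "brmsfit"))
  empty_fit$fit <- stanfit
  brms::rename_pars(empty_fit)
}
```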

Let me know what you think!

@paul-buerkner
Owner

Thank you for opening this PR and for your interest in extending brms!

How general would this feature be? You implemented it as an example, but is the goal to generalize it to all models that support it? Some kind of subsetting feature is already implemented in brms for within-chain parallelization (aka threading). Perhaps some code can be reused.

Personally, after seeing the PR and the interfaces, I am a bit hesitant to have this natively in brms because it would add quite a bit of maintenance burden. How complicated would it be to build a thin brms wrapper package instead that supports this algorithm for a selected subset of models (depending on your time to support more models)?

@vandenman
Author

How general would this feature be? You implemented as an example but the goal would be to generalize it to all models that support it?

The subsampling is easiest to do for (generalized) linear models. I'm planning to also apply this to more advanced models that brms supports (for example, the mixtures here: https://easystats.github.io/modelbased/articles/practical_growthmixture.html), but I still have to look into that.

Personally, after seeing the PR and the interfaces, I am a bit hesistant to have this natively in brms because it would add quite a bit of maintenence burden. How complicated would it be to build a thin brms wrapper package instead that supports this algorithm for a selected subset of models (depending on your time to support more models)?

I completely understand, and perhaps there is a slight misunderstanding here, because I did not intend for brms to take on a dependency on either of the packages I used above. The purpose of this PR is for brms to generate the Stan code required for subsampling; the rest is handled by the software I've already written. This should minimize any additional maintenance burden for brms itself. In fact, because the PDMP stuff is quite novel and still a topic of active research, I think it's better to split this into separate packages.

The way this currently works, and how I envision users using it, is that they call PDMPSamplersR::brm_pdmp. This mimics brms::brm in most arguments and internally calls brms::stancode, brms::standata, and brms::brm(..., empty = TRUE) to create the relevant Stan models. So PDMPSamplersR takes a dependency on brms (as a Suggests), not the other way around.
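Roughly, the wrapper flow can be sketched as follows (argument names and structure are illustrative, not the actual PDMPSamplersR code; only stancode(), standata(), and brm(empty = TRUE) are real brms entry points):

```r
# Rough sketch of the wrapper flow; the Julia sampling step is elided.
brm_pdmp_sketch <- function(formula, data, family, subsample = NULL, ...) {
  scode <- brms::stancode(formula, data = data, family = family,
                          subsample = subsample)
  sdata <- brms::standata(formula, data = data, family = family)
  empty <- brms::brm(formula, data = data, family = family, empty = TRUE)
  # ... run the Julia sampler on scode/sdata, then write the draws
  #     back into `empty` ...
  empty
}
```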

I'll look into reusing some more of the existing code to minimize the changes this week. In case you believe this would still be too much of a maintenance burden regardless, I completely understand and feel free to close the PR.

@paul-buerkner
Owner

paul-buerkner commented Apr 1, 2026

I see, makes sense. Still, I would not like brms-main to have partial support for a feature for a small subset of models without this functionality being used directly in brms (but only for an outside package). For the time being, would it make sense to let this run on a brms branch? There you could play around and let users know they need to install this branch. It could either be on your fork of brms or on the official repo (here) but on a separate branch. If you prefer the latter, I can make a new branch here and you can then make the PR against that.
