CRAN Package Check Results for Package modelbased

Last updated on 2025-03-11 17:50:51 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.9.0        5.80  120.04  125.84  ERROR
r-devel-linux-x86_64-debian-gcc    0.9.0        4.03   79.64   83.67  ERROR
r-devel-linux-x86_64-fedora-clang  0.10.0                     330.00  ERROR
r-devel-linux-x86_64-fedora-gcc    0.10.0                     329.56  ERROR
r-devel-macos-arm64                0.10.0                      76.00  OK
r-devel-macos-x86_64               0.10.0                     138.00  OK
r-devel-windows-x86_64             0.9.0        6.00  111.00  117.00  ERROR
r-patched-linux-x86_64             0.9.0        6.37  109.64  116.01  ERROR
r-release-linux-x86_64             0.9.0        5.42  108.20  113.62  ERROR
r-release-macos-arm64              0.10.0                      77.00  OK
r-release-macos-x86_64             0.10.0                     151.00  OK
r-release-windows-x86_64           0.10.0      11.00  166.00  177.00  OK
r-oldrel-macos-arm64               0.10.0                      76.00  OK
r-oldrel-macos-x86_64              0.10.0                     138.00  OK
r-oldrel-windows-x86_64            0.9.0        8.00   49.00   57.00  OK      --no-examples --no-tests --no-vignettes
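The Flags column shows that the r-oldrel-windows-x86_64 flavor runs a reduced check that skips examples, tests, and vignettes. A local invocation with the same flags would look like this (the tarball name is an assumption; substitute your own build):

```
# Reduced check matching the r-oldrel-windows-x86_64 flags
R CMD check --no-examples --no-tests --no-vignettes modelbased_0.9.0.tar.gz
```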

Check Details

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [55s/28s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1',
      'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1',
      'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
      'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1',
      'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1',
      'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
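All three 0.9.0 failures are dimension mismatches: a data grid expected to have 10 rows comes back with 3. The following is a minimal sketch of the kind of assertion involved, not the package's actual test code; the model, the grid variable, and the expected row count are assumptions for illustration:

```r
library(testthat)
library(modelbased)

# Hypothetical reproduction: predict on a 10-point data grid.
model <- lm(mpg ~ wt, data = mtcars)
grid <- insight::get_datagrid(model, by = "wt", length = 10)
estim <- estimate_expectation(model, data = grid)

# The CRAN logs show 3 rows where 10 were expected,
# which makes a check like this fail:
expect_identical(nrow(estim), 10L)
```

A mismatch of this kind usually means the data grid is being built differently on the check machine (e.g., a changed default in a dependency) rather than the predictions themselves being wrong.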

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [35s/19s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1',
      'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1',
      'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
      'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1',
      'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1',
      'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.10.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [106s/155s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 1 | WARN 12 | SKIP 39 | PASS 189 ]

    ══ Skipped tests (39) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:58:3'
    • On CRAN (31): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1',
      'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1',
      'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1',
      'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1',
      'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1',
      'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1',
      'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
      'test-estimate_means_dotargs.R:1:1',
      'test-estimate_means_marginalization.R:1:1',
      'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:97:1',
      'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1',
      'test-glmmTMB.R:1:1', 'test-keep_iterations.R:1:1', 'test-mice.R:1:1',
      'test-ordinal.R:1:1', 'test-plot-grouplevel.R:1:1',
      'test-predict-dpar.R:1:1', 'test-standardize.R:1:1',
      'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:16:3',
      'test-vcov.R:1:1', 'test-zeroinfl.R:1:1'
    • On Linux (6): 'test-plot-facet.R:1:1', 'test-plot-flexible_numeric.R:1:1',
      'test-plot-ordinal.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1',
      'test-scoping_issues.R:1:1'
    • utils::packageVersion("insight") <= "1.1.0" is TRUE (1): 'test-estimate_grouplevel.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_predicted.R:206:3'): estimate_expectation - predicting RE works ──
    out$Predicted (`actual`) not equal to c(...) (`expected`).
    `actual`:   12.2617 12.0693 11.1560 11.6318 11.1657 10.3811 11.1074 11.0749
    `expected`: 12.2064 12.0631 11.2071 11.6286 11.2327 10.5839 11.2085 11.1229

    [ FAIL 1 | WARN 12 | SKIP 39 | PASS 189 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
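The 0.10.0 failures on the Fedora flavors are small numeric discrepancies in random-effect predictions (largest difference about 0.2), which on CRAN typically come from platform or BLAS differences rather than a logic error. In testthat, such comparisons are usually guarded with an explicit tolerance; the sketch below uses the values from the log above, and the chosen tolerance is an illustrative assumption, not the package's actual setting:

```r
library(testthat)

# Values copied from the CRAN check log above.
actual   <- c(12.2617, 12.0693, 11.1560, 11.6318, 11.1657, 10.3811, 11.1074, 11.0749)
expected <- c(12.2064, 12.0631, 11.2071, 11.6286, 11.2327, 10.5839, 11.2085, 11.1229)

# expect_equal(actual, expected)  # fails with the default tolerance, as on CRAN

# Allowing for platform-level numeric noise:
expect_equal(actual, expected, tolerance = 0.05)
```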

Version: 0.10.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [95s/72s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 1 | WARN 11 | SKIP 39 | PASS 189 ]

    ══ Skipped tests (39) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:58:3'
    • On CRAN (31): 'test-backtransform_invlink.R:1:1', 'test-betareg.R:1:1',
      'test-bias_correction.R:1:1', 'test-brms-marginaleffects.R:1:1',
      'test-brms.R:1:1', 'test-estimate_contrasts-average.R:1:1',
      'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_effectsize.R:1:1',
      'test-estimate_contrasts_methods.R:1:1', 'test-estimate_filter.R:1:1',
      'test-estimate_means-average.R:1:1', 'test-estimate_means.R:1:1',
      'test-estimate_means_ci.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
      'test-estimate_means_dotargs.R:1:1',
      'test-estimate_means_marginalization.R:1:1',
      'test-estimate_means_mixed.R:1:1', 'test-estimate_slopes.R:97:1',
      'test-g_computation.R:1:1', 'test-get_marginaltrends.R:1:1',
      'test-glmmTMB.R:1:1', 'test-keep_iterations.R:1:1', 'test-mice.R:1:1',
      'test-ordinal.R:1:1', 'test-plot-grouplevel.R:1:1',
      'test-predict-dpar.R:1:1', 'test-standardize.R:1:1',
      'test-summary_estimate_slopes.R:3:1', 'test-transform_response.R:16:3',
      'test-vcov.R:1:1', 'test-zeroinfl.R:1:1'
    • On Linux (6): 'test-plot-facet.R:1:1', 'test-plot-flexible_numeric.R:1:1',
      'test-plot-ordinal.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1',
      'test-scoping_issues.R:1:1'
    • utils::packageVersion("insight") <= "1.1.0" is TRUE (1): 'test-estimate_grouplevel.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_predicted.R:206:3'): estimate_expectation - predicting RE works ──
    out$Predicted (`actual`) not equal to c(...) (`expected`).
    `actual`:   12.2617 12.0693 11.1560 11.6318 11.1657 10.3811 11.1074 11.0749
    `expected`: 12.2064 12.0631 11.2071 11.6286 11.2327 10.5839 11.2085 11.1229

    [ FAIL 1 | WARN 11 | SKIP 39 | PASS 189 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 0.9.0
Check: tests
Result: ERROR
  Running 'testthat.R' [28s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]

    ══ Skipped tests (23) ══════════════════════════════════════════════════════════
    • On CRAN (23): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1',
      'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1',
      'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
      'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1',
      'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1',
      'test-plot-facet.R:7:1', 'test-plot.R:7:1', 'test-predict-dpar.R:1:1',
      'test-print.R:14:3', 'test-print.R:26:3', 'test-print.R:37:3',
      'test-print.R:50:3', 'test-print.R:65:3', 'test-print.R:78:5',
      'test-print.R:92:3', 'test-print.R:106:3', 'test-vcov.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 23 | PASS 168 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-windows-x86_64

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [49s/26s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1',
      'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1',
      'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
      'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1',
      'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1',
      'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-patched-linux-x86_64

Version: 0.9.0
Check: tests
Result: ERROR
  Running ‘testthat.R’ [47s/25s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > #
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    > library(testthat)
    > library(modelbased)
    >
    > test_check("modelbased")
    Starting 2 test processes
    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]

    ══ Skipped tests (17) ══════════════════════════════════════════════════════════
    • .Platform$OS.type == "windows" is not TRUE (1): 'test-estimate_predicted.R:56:3'
    • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1',
      'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1',
      'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
      'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1',
      'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1',
      'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
    • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
    dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
    `actual`:    3  5
    `expected`: 10  5
    ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0
    ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
    dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
    `actual`:    3.0  6.0
    `expected`: 10.0  6.0

    [ FAIL 3 | WARN 0 | SKIP 17 | PASS 166 ]
    Error: Test failures
    Execution halted
Flavor: r-release-linux-x86_64