CRAN Package Check Results for Package performance

Last updated on 2025-10-02 01:50:06 CEST.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.15.1      20.29  243.21  263.50  OK
r-devel-linux-x86_64-debian-gcc    0.15.1      13.28  158.18  171.46  ERROR
r-devel-linux-x86_64-fedora-clang  0.15.1                     388.29  ERROR
r-devel-linux-x86_64-fedora-gcc    0.15.1                     395.38  ERROR
r-devel-windows-x86_64             0.15.1      23.00  225.00  248.00  OK
r-patched-linux-x86_64             0.15.1      24.17  225.14  249.31  ERROR
r-release-linux-x86_64             0.15.1      19.11  229.45  248.56  OK
r-release-macos-arm64              0.15.1                      90.00  OK
r-release-macos-x86_64             0.15.1                     172.00  OK
r-release-windows-x86_64           0.15.1      20.00  223.00  243.00  OK
r-oldrel-macos-arm64               0.15.1                     101.00  OK
r-oldrel-macos-x86_64              0.15.1                     126.00  OK
r-oldrel-windows-x86_64            0.15.1      30.00  291.00  321.00  OK

Check Details

Version: 0.15.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [50s/27s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(performance)
    >
    > test_check("performance")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]

    ══ Skipped tests (40) ══════════════════════════════════════════════════════════
    • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3',
      'test-binned_residuals.R:137:3', 'test-binned_residuals.R:164:3',
      'test-check_dag.R:1:1', 'test-check_distribution.R:35:3',
      'test-check_collinearity.R:181:3', 'test-check_collinearity.R:218:3',
      'test-check_model.R:1:1', 'test-check_itemscale.R:31:3',
      'test-check_itemscale.R:103:3', 'test-check_predictions.R:2:1',
      'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3',
      'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3',
      'test-check_zeroinflation.R:112:3', 'test-compare_performance.R:21:3',
      'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-item_omega.R:10:3',
      'test-item_omega.R:31:3', 'test-mclogit.R:53:3',
      'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1',
      'test-check_outliers.R:110:3', 'test-model_performance.merMod.R:2:3',
      'test-model_performance.merMod.R:25:3', 'test-model_performance.psych.R:1:1',
      'test-model_performance.rma.R:33:3', 'test-performance_reliability.R:23:3',
      'test-pkg-ivreg.R:7:3', 'test-r2_nagelkerke.R:22:3', 'test-r2_nakagawa.R:20:1',
      'test-rmse.R:35:3', 'test-test_likelihoodratio.R:55:1'
    • On Linux (3): 'test-nestedLogit.R:1:1', 'test-r2_bayes.R:1:1',
      'test-test_wald.R:1:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-check_outliers.R:304:3'): pareto which ───────────────────────
    which(check_outliers(model, method = "pareto", threshold = list(pareto = 0.5))) (`actual`) not identical to 17L (`expected`).

      `actual`: 17 18
    `expected`: 17

    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.15.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [114s/59s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(performance)
    >
    > test_check("performance")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]

    ══ Skipped tests (40) ══════════════════════════════════════════════════════════
    • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3',
      'test-binned_residuals.R:137:3', 'test-binned_residuals.R:164:3',
      'test-check_dag.R:1:1', 'test-check_distribution.R:35:3',
      'test-check_collinearity.R:181:3', 'test-check_collinearity.R:218:3',
      'test-check_model.R:1:1', 'test-check_itemscale.R:31:3',
      'test-check_itemscale.R:103:3', 'test-check_predictions.R:2:1',
      'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3',
      'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3',
      'test-check_zeroinflation.R:112:3', 'test-compare_performance.R:21:3',
      'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-item_omega.R:10:3',
      'test-item_omega.R:31:3', 'test-mclogit.R:53:3', 'test-check_outliers.R:110:3',
      'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1',
      'test-model_performance.merMod.R:2:3', 'test-model_performance.merMod.R:25:3',
      'test-model_performance.psych.R:1:1', 'test-model_performance.rma.R:33:3',
      'test-performance_reliability.R:23:3', 'test-pkg-ivreg.R:7:3',
      'test-r2_nagelkerke.R:22:3', 'test-r2_nakagawa.R:20:1', 'test-rmse.R:35:3',
      'test-test_likelihoodratio.R:55:1'
    • On Linux (3): 'test-nestedLogit.R:1:1', 'test-r2_bayes.R:1:1',
      'test-test_wald.R:1:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-check_outliers.R:304:3'): pareto which ───────────────────────
    which(check_outliers(model, method = "pareto", threshold = list(pareto = 0.5))) (`actual`) not identical to 17L (`expected`).

      `actual`: 17 18
    `expected`: 17

    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.15.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [123s/81s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(performance)
    >
    > test_check("performance")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]

    ══ Skipped tests (40) ══════════════════════════════════════════════════════════
    • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3',
      'test-binned_residuals.R:137:3', 'test-binned_residuals.R:164:3',
      'test-check_dag.R:1:1', 'test-check_distribution.R:35:3',
      'test-check_collinearity.R:181:3', 'test-check_collinearity.R:218:3',
      'test-check_itemscale.R:31:3', 'test-check_itemscale.R:103:3',
      'test-check_model.R:1:1', 'test-check_predictions.R:2:1',
      'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3',
      'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3',
      'test-check_zeroinflation.R:112:3', 'test-compare_performance.R:21:3',
      'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-item_omega.R:10:3',
      'test-item_omega.R:31:3', 'test-mclogit.R:53:3',
      'test-model_performance.bayesian.R:1:1', 'test-check_outliers.R:110:3',
      'test-model_performance.lavaan.R:1:1', 'test-model_performance.merMod.R:2:3',
      'test-model_performance.merMod.R:25:3', 'test-model_performance.psych.R:1:1',
      'test-model_performance.rma.R:33:3', 'test-performance_reliability.R:23:3',
      'test-pkg-ivreg.R:7:3', 'test-r2_nagelkerke.R:22:3', 'test-r2_nakagawa.R:20:1',
      'test-rmse.R:35:3', 'test-test_likelihoodratio.R:55:1'
    • On Linux (3): 'test-nestedLogit.R:1:1', 'test-r2_bayes.R:1:1',
      'test-test_wald.R:1:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-check_outliers.R:304:3'): pareto which ───────────────────────
    which(check_outliers(model, method = "pareto", threshold = list(pareto = 0.5))) (`actual`) not identical to 17L (`expected`).

      `actual`: 17 18
    `expected`: 17

    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]
    Error: Test failures
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 0.15.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [75s/39s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(performance)
    >
    > test_check("performance")
    Starting 2 test processes
    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]

    ══ Skipped tests (40) ══════════════════════════════════════════════════════════
    • On CRAN (36): 'test-bootstrapped_icc_ci.R:2:3', 'test-bootstrapped_icc_ci.R:44:3',
      'test-binned_residuals.R:137:3', 'test-binned_residuals.R:164:3',
      'test-check_dag.R:1:1', 'test-check_distribution.R:35:3',
      'test-check_collinearity.R:181:3', 'test-check_collinearity.R:218:3',
      'test-check_model.R:1:1', 'test-check_itemscale.R:31:3',
      'test-check_itemscale.R:103:3', 'test-check_predictions.R:2:1',
      'test-check_residuals.R:2:3', 'test-check_singularity.R:2:3',
      'test-check_singularity.R:30:3', 'test-check_zeroinflation.R:73:3',
      'test-check_zeroinflation.R:112:3', 'test-compare_performance.R:21:3',
      'test-helpers.R:1:1', 'test-icc.R:2:1', 'test-item_omega.R:10:3',
      'test-item_omega.R:31:3', 'test-mclogit.R:53:3', 'test-check_outliers.R:110:3',
      'test-model_performance.bayesian.R:1:1', 'test-model_performance.lavaan.R:1:1',
      'test-model_performance.merMod.R:2:3', 'test-model_performance.merMod.R:25:3',
      'test-model_performance.psych.R:1:1', 'test-model_performance.rma.R:33:3',
      'test-performance_reliability.R:23:3', 'test-pkg-ivreg.R:7:3',
      'test-r2_nagelkerke.R:22:3', 'test-r2_nakagawa.R:20:1', 'test-rmse.R:35:3',
      'test-test_likelihoodratio.R:55:1'
    • On Linux (3): 'test-nestedLogit.R:1:1', 'test-r2_bayes.R:1:1',
      'test-test_wald.R:1:1'
    • getRversion() > "4.4.0" is TRUE (1): 'test-check_outliers.R:258:3'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-check_outliers.R:304:3'): pareto which ───────────────────────
    which(check_outliers(model, method = "pareto", threshold = list(pareto = 0.5))) (`actual`) not identical to 17L (`expected`).

      `actual`: 17 18
    `expected`: 17

    [ FAIL 1 | WARN 0 | SKIP 40 | PASS 406 ]
    Error: Test failures
    Execution halted
Flavor: r-patched-linux-x86_64