@RaphaelS1
Last active May 9, 2023 21:32
## select 3-fold cross-validation as the resampling strategy
resampling <- rsmp("cv", folds = 3)
## add KM and CPH
learners <- c(learners, lrns(c("surv.kaplan", "surv.coxph")))
design <- benchmark_grid(tasks, learners, resampling)
bm <- benchmark(design)
## Aggregate with Harrell's C and Integrated Graf Score
msrs <- msrs(c("surv.cindex", "surv.graf"))
bm$aggregate(msrs)[, c(3, 4, 7, 8)]
@millionj

millionj commented Oct 8, 2021

Hello, when I try to create the benchmark it shows an error: "Error in as.Distribution.matrix(cdf, fun = "cdf", decorators = c("CoreStatistics", :
'obj' must have column names"

Do you know why? Thanks!!!

@RaphaelS1
Author

Hey, I think you might need to upgrade all packages; it sounds like you have a version of distr6 that is not compatible with older versions of mlr3.
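Something like the following should do it (a minimal sketch; it assumes you want the current CRAN releases plus the development version of mlr3proba from GitHub, so adjust to your own setup):

## refresh CRAN packages such as distr6
update.packages(ask = FALSE)
## development version of mlr3proba
remotes::install_github("mlr-org/mlr3proba")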

@pcstudy2019

Hello, sorry to bother you. I have updated the distr6 package (1.6.2) and mlr3 package (0.13.0) to the latest versions, but the above problem still exists ("Error in as.Distribution.matrix(cdf, fun = "cdf", decorators = c("CoreStatistics", : 'obj' must have column names").
Do you know how to solve it? Thanks!!!

@RaphaelS1
Author

Did you update mlr3proba? remotes::install_github('mlr-org/mlr3proba')

@pcstudy2019

Thank you very much for your reply. I have used remotes::install_github('mlr-org/mlr3proba') to update mlr3proba; the mlr3proba version is 0.4.2.9000. When I run the code, the error "Error in as.Distribution.matrix(cdf, fun = "cdf", decorators = c("CoreStatistics", : 'obj' must have column names This happened PipeOp surv.loghaz.tuned's $train()" still exists. Do you know how to solve it? Thanks!!!

@VonKlotMD

I am having the same problem... benchmark runs just fine and then stops out of nowhere:

INFO [11:33:26.523] [mlr3] Running benchmark with 3 resampling iterations
INFO [11:33:26.529] [mlr3] Applying learner 'surv.coxtime' on task 'roche' (iter 1/3)
INFO [11:33:26.880] [mlr3] Applying learner 'surv.coxtime' on task 'roche' (iter 2/3)
INFO [11:33:27.216] [mlr3] Applying learner 'surv.coxtime' on task 'roche' (iter 3/3)
INFO [11:33:27.561] [mlr3] Finished benchmark
INFO [11:33:27.699] [bbotk] Result of batch 50:
INFO [11:33:27.702] [bbotk] dropout weight_decay learning_rate nodes k surv.harrell_c runtime_learners uhash
INFO [11:33:27.702] [bbotk] 0.4091729 0.3907034 0.1214102 31 4 0.5924093 1 2e8e5bba-abd6-4bb2-a76a-d2955a713d2b
INFO [11:33:27.709] [bbotk] Evaluating 1 configuration(s)
INFO [11:33:27.745] [mlr3] Running benchmark with 3 resampling iterations
INFO [11:33:27.752] [mlr3] Applying learner 'surv.coxtime' on task 'roche' (iter 3/3)
INFO [11:33:28.098] [mlr3] Applying learner 'surv.coxtime' on task 'roche' (iter 1/3)
INFO [11:33:28.477] [mlr3] Applying learner 'surv.coxtime' on task 'roche' (iter 2/3)
INFO [11:33:28.805] [mlr3] Finished benchmark
INFO [11:33:28.932] [bbotk] Result of batch 51:
INFO [11:33:28.935] [bbotk] dropout weight_decay learning_rate nodes k surv.harrell_c runtime_learners uhash
INFO [11:33:28.935] [bbotk] 0.2904862 0.4363978 0.6555903 23 3 0.4744703 1.01 3ed123be-9cfb-488c-9223-1f93871f283f
INFO [11:33:28.941] [bbotk] Evaluating 1 configuration(s)
INFO [11:33:28.978] [mlr3] Running benchmark with 3 resampling iterations
INFO [11:33:28.984] [mlr3] Applying learner 'surv.coxtime' on task 'roche' (iter 2/3)
Error in as.Distribution.matrix(cdf, fun = "cdf", decorators = c("CoreStatistics", :
'obj' must have column names
This happened PipeOp surv.coxtime.tuned's $train()

@VonKlotMD

VonKlotMD commented Dec 30, 2021

Hello, sorry for the lack of a prior greeting/introduction :) I am currently working on an ML solution for a survival analysis on medical data. I am currently exploring mlr3 and am very excited about it. I based my analysis on your examples and on some reading I did in the mlr3book.
The above-mentioned error also occurs when I execute resample instead of benchmark:

rr <- lapply(learners,
             function(x) resample(task, x, resampling))

any ideas?

@RaphaelS1
Author

Hi @Toxgondii and @pcstudy2019

Try running remotes::install_github('mlr-org/mlr3extralearners#147'), then restart your session and try again

@pcstudy2019

Thank you very much for your reply. I have used remotes::install_github('mlr-org/mlr3extralearners#147') and remotes::install_github('mlr-org/mlr3pipelines') to update the packages, and the error ''obj' must have column names' disappeared, but a new error appeared:
'Error in FUN(newX[, i], ...) :
identical(order(.x), seq(ncol(x))) is not TRUE
This happened PipeOp surv.coxtime.tuned's $predict()'
Can you help me with this? Thank you very much!!!

@RaphaelS1
Author

Ah, you might now need remotes::install_github('RaphaelS1/survivalmodels')

@pcstudy2019

Thank you very much for your reply. I used remotes::install_github('RaphaelS1/survivalmodels'), but there is still 'Error in FUN(newX[, i], ...) :
identical(order(.x), seq(ncol(x))) is not TRUE / This happened PipeOp surv.loghaz.tuned's $train()'.
I do not know why. Thank you very much!!!

@TKPath

TKPath commented Jan 7, 2022

Hi Raphael, I'm encountering the same error message when following your very helpful walk-through on Towards Data Science.

I've updated the packages suggested (through remotes::install_github('mlr-org/mlr3extralearners#147') and remotes::install_github('mlr-org/mlr3proba')) and the error persists - see below (with some of the info lines removed to save space). This occurs on both my Windows 10 laptop and a Linux desktop. The other thing to add is that the error occurs after a variable number of iterations/learner applications.

If you have any suggestions I'd be very grateful! Cheers.

bm <- benchmark(design)
INFO [10:15:53.154] [mlr3] Running benchmark with 42 resampling iterations
INFO [10:15:53.160] [mlr3] Applying learner 'encode.scale.surv.loghaz.tuned' on task 'whas' (iter 2/3)
INFO [10:15:53.429] [bbotk] Starting to optimize 5 parameter(s) with '<OptimizerRandomSearch>' and '<TerminatorEvals> [n_evals=2, k=0]'
INFO [10:15:53.457] [bbotk] Evaluating 1 configuration(s)
INFO [10:15:53.501] [mlr3] Running benchmark with 1 resampling iterations
INFO [10:15:53.509] [mlr3] Applying learner 'surv.loghaz' on task 'whas' (iter 1/1)
INFO [10:15:54.146] [mlr3] Finished benchmark
INFO [10:15:54.334] [bbotk] Result of batch 1:
INFO [10:15:54.338] [bbotk] dropout weight_decay learning_rate nodes k surv.cindex runtime_learners uhash
INFO [10:15:54.338] [bbotk] 0.6049717 0.2523501 0.261086 16 3 0.5455201 0.611 acac11c1-df8d-440b-b8ac-bbf39d5d2f65
INFO [10:15:54.348] [bbotk] Evaluating 1 configuration(s)
INFO [10:15:54.399] [mlr3] Running benchmark with 1 resampling iterations
INFO [10:15:54.406] [mlr3] Applying learner 'surv.loghaz' on task 'whas' (iter 1/1)
INFO [10:15:55.151] [mlr3] Finished benchmark

lines removed

INFO [10:16:13.204] [bbotk] Result:
INFO [10:16:13.207] [bbotk] dropout weight_decay learning_rate nodes k learner_param_vals x_domain surv.cindex
INFO [10:16:13.207] [bbotk] 0.2107685 0.4213124 0.5045286 23 3 <list[8]> <list[4]> 0.76
INFO [10:16:13.880] [mlr3] Applying learner 'encode.scale.surv.loghaz.tuned' on task 'rats' (iter 1/3)
INFO [10:16:14.024] [bbotk] Starting to optimize 5 parameter(s) with '<OptimizerRandomSearch>' and '<TerminatorEvals> [n_evals=2, k=0]'
INFO [10:16:14.078] [bbotk] Evaluating 1 configuration(s)
INFO [10:16:14.157] [mlr3] Running benchmark with 1 resampling iterations
INFO [10:16:14.167] [mlr3] Applying learner 'surv.loghaz' on task 'rats' (iter 1/1)
Error in as.Distribution.matrix(cdf, fun = "cdf", decorators = c("CoreStatistics", :
'obj' must have column names
This happened PipeOp surv.loghaz.tuned's $train()

@RaphaelS1
Author

Thanks! I know what causes the error, but annoyingly I need to fix it on a computer I don't have access to right now. I will try to fix it on Friday if not before. Sorry about this!

But one fix for now is simply to install all packages at the versions listed in the tutorial (a sketch using remotes::install_version follows the list):

  • mlr3benchmark (v0.1.2)
  • mlr3extralearners (v0.3.5)
  • mlr3pipelines (v0.3.4)
  • mlr3proba (v0.3.2)
  • mlr3tuning (v0.8.0)
  • survivalmodels (v0.1.7)

As well as distr6 (v1.5.6)
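
A minimal sketch of pinning those versions, assuming they are still available from CRAN or its archive (GitHub-only versions would instead need remotes::install_github with a release tag):

pkgs <- c(
  mlr3benchmark     = "0.1.2",
  mlr3extralearners = "0.3.5",
  mlr3pipelines     = "0.3.4",
  mlr3proba         = "0.3.2",
  mlr3tuning        = "0.8.0",
  survivalmodels    = "0.1.7",
  distr6            = "1.5.6"
)
## install each package at the pinned version
for (p in names(pkgs)) {
  remotes::install_version(p, version = pkgs[[p]])
}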

@JianGuoZhou3

Hi @RaphaelS1, thanks for your help. Based on your suggestion I kept all packages at the matching versions.

library(mlr3benchmark)

## create mlr3benchmark object
bma <- as.BenchmarkAggr(bm, 
                        measures = msrs(c("surv.cindex", "surv.graf")))

## run global Friedman test
bma$friedman_test()

This worked. But the later step gives an error:

## load ggplot2 for autoplots
library(ggplot2)

## critical difference diagrams for IGS
autoplot(bma, meas = "graf", type = "cd", ratio = 1/3, p.value = 0.1)
Error in .__BenchmarkAggr__friedman_posthoc(self = self, private = private, : Package PMCMR required for post-hoc Friedman tests.
Traceback:

1. autoplot(bma, meas = "graf", type = "cd", ratio = 1/3, p.value = 0.1)
2. autoplot.BenchmarkAggr(bma, meas = "graf", type = "cd", ratio = 1/3, 
 .     p.value = 0.1)
3. .plot_critdiff_1(obj, meas, p.value, minimize, test, baseline, 
 .     ratio)
4. obj$.__enclos_env__$private$.crit_differences(meas, minimize, 
 .     p.value, baseline, test)
5. .__BenchmarkAggr__.crit_differences(self = self, private = private, 
 .     super = super, meas = meas, minimize = minimize, p.value = p.value, 
 .     baseline = baseline, test = test)
6. tryCatch(self$friedman_posthoc(meas, p.value), warning = function(w) stopf("Global Friedman test non-significant (p > %s), try type = 'mean' instead.", 
 .     p.value))
7. tryCatchList(expr, classes, parentenv, handlers)
8. tryCatchOne(expr, names, parentenv, handlers[[1L]])
9. doTryCatch(return(expr), name, parentenv, handler)
10. self$friedman_posthoc(meas, p.value)
11. .__BenchmarkAggr__friedman_posthoc(self = self, private = private, 
  .     super = super, meas = meas, p.value = p.value)
12. stop("Package PMCMR required for post-hoc Friedman tests.")

Please help me check this error. Best.

@RaphaelS1
Author

RaphaelS1 commented Jan 22, 2022

Hi @jianguozhouzunyimedicaluniversity, as the error says, "Package PMCMR required for post-hoc Friedman tests." You need to install the missing package:

install.packages("PMCMR")

@JianGuoZhou3

Hi @RaphaelS1, thanks for your help.
I installed the PMCMR package,
but I still have a new error. I guess we need to confirm the correct version.

Error: 'posthoc.friedman.nemenyi.test.default' is defunct.
Use 'PMCMRplus::frdAllPairsNemenyiTest' instead.
See help("Defunct") and help("PMCMR-defunct").
Traceback:

1. autoplot(bma, meas = "graf", type = "cd", ratio = 1/3, p.value = 0.1)
2. autoplot.BenchmarkAggr(bma, meas = "graf", type = "cd", ratio = 1/3, 
 .     p.value = 0.1)
3. .plot_critdiff_1(obj, meas, p.value, minimize, test, baseline, 
 .     ratio)
4. obj$.__enclos_env__$private$.crit_differences(meas, minimize, 
 .     p.value, baseline, test)
5. .__BenchmarkAggr__.crit_differences(self = self, private = private, 
 .     super = super, meas = meas, minimize = minimize, p.value = p.value, 
 .     baseline = baseline, test = test)
6. tryCatch(self$friedman_posthoc(meas, p.value), warning = function(w) stopf("Global Friedman test non-significant (p > %s), try type = 'mean' instead.", 
 .     p.value))
7. tryCatchList(expr, classes, parentenv, handlers)
8. tryCatchOne(expr, names, parentenv, handlers[[1L]])
9. doTryCatch(return(expr), name, parentenv, handler)
10. self$friedman_posthoc(meas, p.value)
11. .__BenchmarkAggr__friedman_posthoc(self = self, private = private, 
  .     super = super, meas = meas, p.value = p.value)
12. PMCMR::posthoc.friedman.nemenyi.test(form, data = private$.dt)
13. posthoc.friedman.nemenyi.test.formula(form, data = private$.dt)
14. do.call("posthoc.friedman.nemenyi.test", as.list(mf))
15. posthoc.friedman.nemenyi.test(c(0.242844438035742, 0.398032546165714, 
  . 0.218240499788987, 0.312985378868976, 0.325096957210135, 0.237175674655561, 
  . 0.201202907836198, 0.0562578614970671, 0.405899374957451, 0.0568933056522044, 
  . 0.164499978522519, 0.351042564190932, 0.0579915799203154, 0.0532148865857897
  . ), structure(c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 1L, 2L, 3L, 4L, 5L, 
  . 6L, 7L), .Label = c("encode.scale.surv.coxtime.tuned", "encode.scale.surv.deephit.tuned", 
  . "encode.scale.surv.deepsurv.tuned", "encode.scale.surv.loghaz.tuned", 
  . "encode.scale.surv.pchazard.tuned", "kaplan", "coxph"), class = "factor"), 
  .     structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 
  .     2L, 2L), .Label = c("whas", "rats"), class = "factor"))
16. posthoc.friedman.nemenyi.test.default(c(0.242844438035742, 0.398032546165714, 
  . 0.218240499788987, 0.312985378868976, 0.325096957210135, 0.237175674655561, 
  . 0.201202907836198, 0.0562578614970671, 0.405899374957451, 0.0568933056522044, 
  . 0.164499978522519, 0.351042564190932, 0.0579915799203154, 0.0532148865857897
  . ), structure(c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 1L, 2L, 3L, 4L, 5L, 
  . 6L, 7L), .Label = c("encode.scale.surv.coxtime.tuned", "encode.scale.surv.deephit.tuned", 
  . "encode.scale.surv.deepsurv.tuned", "encode.scale.surv.loghaz.tuned", 
  . "encode.scale.surv.pchazard.tuned", "kaplan", "coxph"), class = "factor"), 
  .     structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 
  .     2L, 2L), .Label = c("whas", "rats"), class = "factor"))
17. .Defunct(new = "PMCMRplus::frdAllPairsNemenyiTest", package = "PMCMR")

@TKPath

TKPath commented Jan 25, 2022

Hi @RaphaelS1,
I've only just been able to revisit this and can confirm that installing the older versions of the packages you indicated (mlr3benchmark (v0.1.2), mlr3extralearners (v0.3.5), mlr3pipelines (v0.3.4), mlr3proba (v0.3.2), mlr3tuning (v0.8.0), survivalmodels (v0.1.7), distr6 (v1.5.6)) also works for me and allows the walkthrough to run. Thanks for your help here and for the package itself!

(I can also reproduce the error that @jianguozhouzunyimedicaluniversity has indicated -

autoplot(bma, meas = "graf", type = "cd", ratio = 1/3, p.value = 0.1)
Error: 'posthoc.friedman.nemenyi.test.default' is defunct.
Use 'PMCMRplus::frdAllPairsNemenyiTest' instead. )

@JianGuoZhou3

Dear all, @RaphaelS1
I have another error...

## select holdout as the resampling strategy
resampling <- rsmp("cv", folds = 3)

## add KM and CPH
learners <- c(learners, lrns(c("surv.kaplan", "surv.coxph")))
design <- benchmark_grid(tasks, learners, resampling)
bm <- benchmark(design)
INFO  [22:14:42.058] [mlr3] Running benchmark with 42 resampling iterations 
INFO  [22:14:42.066] [mlr3] Applying learner 'encode.scale.surv.deepsurv.tuned' on task 'whas' (iter 2/3) 
INFO  [22:14:42.206] [bbotk] Starting to optimize 5 parameter(s) with '<OptimizerRandomSearch>' and '<TerminatorEvals> [n_evals=2, k=0]' 
INFO  [22:14:42.236] [bbotk] Evaluating 1 configuration(s) 
INFO  [22:14:42.281] [mlr3] Running benchmark with 1 resampling iterations 
INFO  [22:14:42.290] [mlr3] Applying learner 'surv.deepsurv' on task 'whas' (iter 1/1) 
Error in if (grepl("\\.$", v)) v <- paste0(v, "9000"): argument is of length zero
Traceback:

1. benchmark(design)
2. future.apply::future_mapply(workhorse, task = grid$task, learner = grid$learner, 
 .     resampling = grid$resampling, iteration = grid$iteration, 
 .     mode = grid$mode, MoreArgs = list(store_models = store_models, 
 .         lgr_threshold = lgr_threshold, pb = pb), SIMPLIFY = FALSE, 
 .     USE.NAMES = FALSE, future.globals = FALSE, future.scheduling = structure(TRUE, 
 .         ordering = "random"), future.packages = "mlr3", future.seed = TRUE, 
 .     future.stdout = future_stdout())
3. future_xapply(FUN = FUN, nX = nX, chunk_args = dots, MoreArgs = MoreArgs, 
 .     get_chunk = function(X, chunk) lapply(X, FUN = `[`, chunk), 
 .     expr = expr, envir = envir, future.envir = future.envir, 
 .     future.globals = future.globals, future.packages = future.packages, 
 .     future.scheduling = future.scheduling, future.chunk.size = future.chunk.size, 
 .     future.stdout = future.stdout, future.conditions = future.conditions, 
 .     future.seed = future.seed, future.lazy = future.lazy, future.label = future.label, 
 .     fcn_name = fcn_name, args_name = args_name, debug = debug)
4. value(fs)
5. value.list(fs)
6. resolve(y, result = TRUE, stdout = stdout, signal = signal, force = TRUE)
7. resolve.list(y, result = TRUE, stdout = stdout, signal = signal, 
 .     force = TRUE)
8. signalConditionsASAP(obj, resignal = FALSE, pos = ii)
9. signalConditions(obj, exclude = getOption("future.relay.immediate", 
 .     "immediateCondition"), resignal = resignal, ...)

@JianGuoZhou3


R version 4.0.2 (2020-06-22)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: CentOS Linux 7 (Core)

Matrix products: default
BLAS:   /usr/local/lib64/R/lib/libRblas.so
LAPACK: /usr/local/lib64/R/lib/libRlapack.so

locale:
[1] C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] devtools_2.4.3          usethis_2.1.5           mlr3pipelines_0.3.4    
[4] mlr3extralearners_0.3.5 mlr3tuning_0.8.0        paradox_0.7.1          
[7] mlr3proba_0.4.0         mlr3_0.13.1             survivalmodels_0.1.7   

loaded via a namespace (and not attached):
 [1] pkgload_1.2.4        jsonlite_1.7.2       splines_4.0.2       
 [4] here_1.0.1           assertthat_0.2.1     lgr_0.4.3           
 [7] set6_0.2.4           remotes_2.4.2        mlr3misc_0.10.0     
[10] sessioninfo_1.2.2    globals_0.14.0       bbotk_0.5.0         
[13] pillar_1.6.4         backports_1.4.1      lattice_0.20-45     
[16] glue_1.6.0           reticulate_1.22-9000 uuid_1.0-3          
[19] digest_0.6.29        checkmate_2.0.0      colorspace_2.0-2    
[22] htmltools_0.5.2      Matrix_1.4-0         pkgconfig_2.0.3     
[25] listenv_0.8.0        purrr_0.3.4          scales_1.1.1        
[28] processx_3.5.2       tibble_3.1.6         generics_0.1.1      
[31] ooplah_0.2.0         distr6_1.5.6         ggplot2_3.3.5       
[34] ellipsis_0.3.2       cachem_1.0.6         withr_2.4.3         
[37] repr_1.1.4           cli_3.1.1            survival_3.1-12     
[40] magrittr_2.0.1       crayon_1.4.2         ps_1.6.0            
[43] memoise_2.0.1        evaluate_0.14        fs_1.5.2            
[46] future_1.23.0        fansi_1.0.2          parallelly_1.30.0   
[49] pkgbuild_1.3.1       palmerpenguins_0.1.0 prettyunits_1.1.1   
[52] tools_4.0.2          data.table_1.14.2    lifecycle_1.0.1     
[55] munsell_0.5.0        callr_3.7.0          compiler_4.0.2      
[58] rlang_0.4.12         grid_4.0.2           pbdZMQ_0.3-6        
[61] IRkernel_1.3         rappdirs_0.3.3       R62S3_1.4.1         
[64] base64enc_0.1-3      testthat_3.1.1       gtable_0.3.0        
[67] codetools_0.2-18     curl_4.3.2           DBI_1.1.2           
[70] R6_2.5.1             dplyr_1.0.7          fastmap_1.1.0       
[73] future.apply_1.8.1   utf8_1.2.2           rprojroot_2.0.2     
[76] desc_1.4.0           parallel_4.0.2       IRdisplay_1.1       
[79] Rcpp_1.0.8           vctrs_0.3.8          png_0.1-7           
[82] tidyselect_1.1.1    

@yuanyuan102

Dear all, @pcstudy2019 @RaphaelS1 ,
I updated all related packages but still got the "Error in FUN(newX[, i], ...) :
identical(order(.x), seq(ncol(x))) is not TRUE" error. Then I installed the older versions based on the Towards Data Science tutorial, and got the error message "Error in .return_fail(msg = msg, error_on_fail) :
Dependency of 'cdf len x' failed.". Could you please help me? Thanks very much!!!

@RaphaelS1
Author

Finally fixed! Please install the latest versions of mlr3extralearners, mlr3proba and survivalmodels from GitHub and latest versions of distr6 and param6 from CRAN:

install.packages(c("param6", "distr6"))
remotes::install_github("RaphaelS1/survivalmodels")
remotes::install_github("mlr-org/mlr3proba")
remotes::install_github("mlr-org/mlr3extralearners")

@yuanyuan102 @jianguozhouzunyimedicaluniversity @TKPath @pcstudy2019 @Toxgondii @millionj

@JianGuoZhou3

JianGuoZhou3 commented Feb 19, 2022

@RaphaelS1 Thank you very much, now it works on my MacBook.

@yuanyuan102

@RaphaelS1 Thanks so much! The previous error disappeared after I updated the packages. But I just got a new error, as shown here: "
UserWarning: Got event/censoring at start time. Should be removed! It is set s.t. it has no contribution to loss.
warnings.warn("""Got event/censoring at start time. Should be removed! It is set s.t. it has no contribution to loss.""")
INFO [02:01:53.669] [mlr3] Applying learner 'surv.pchazard' on task 'right_censored' (iter 3/3)
Error in FUN(newX[, i], ...) :
Survival probabilities must be (non-strictly) decreasing"
Could you please help me with that or shed some light on it? Thanks in advance!

@RaphaelS1
Author

Are you using your own data? It is saying that at t=0 there is an event (death or censoring). You should manually set that to a different time (e.g. 0.001) or remove the observation
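
Both options in a minimal sketch (assuming a data.frame dat with hypothetical columns time and status; adapt the names to your own data):

## option 1: drop observations whose event/censoring time is zero
dat <- dat[dat$time > 0, ]

## option 2: nudge zero times to a small positive value instead
dat$time[dat$time == 0] <- 0.001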

@yuanyuan102

Hi @RaphaelS1, yes I am using my own data. I checked my data and all my times are > 0. I wonder if you have any suggestions about the error message "Survival probabilities must be (non-strictly) decreasing" above. Thanks in advance!

@RaphaelS1
Author

RaphaelS1 commented Feb 28, 2022

Ah, apologies, I saw the warning, not the error. The error is saying that the predictions from your model are not valid survival probabilities. Unfortunately, without access to your data I can't tell if the error is in my software, your code, or your data. I suggest you open an issue in the survivalmodels repo so we can figure it out properly.

@lxtpvt

lxtpvt commented May 6, 2023

Hi Raphael, first, thanks for your work. It's just what I needed. However, I'm encountering the following error. Do you know the reason? Thanks!

INFO [10:53:55.224] [bbotk] Evaluating 1 configuration(s)
INFO [10:53:55.254] [mlr3] Running benchmark with 1 resampling iterations
INFO [10:53:55.259] [mlr3] Applying learner 'surv.pchazard' on task 'lung' (iter 1/1)
INFO [10:53:55.300] [mlr3] Applying learner 'surv.kaplan' on task 'lung' (iter 1/3)
INFO [10:53:55.314] [mlr3] Applying learner 'surv.kaplan' on task 'lung' (iter 2/3)
INFO [10:53:55.328] [mlr3] Applying learner 'surv.kaplan' on task 'lung' (iter 3/3)
INFO [10:53:55.342] [mlr3] Applying learner 'surv.coxph' on task 'lung' (iter 1/3)
INFO [10:53:55.374] [mlr3] Applying learner 'surv.coxph' on task 'lung' (iter 2/3)
INFO [10:53:55.397] [mlr3] Applying learner 'surv.coxph' on task 'lung' (iter 3/3)
Error in py_call_impl(callable, dots$args, dots$keywords) :
RuntimeError: CUDA error: unspecified launch failure
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

@RaphaelS1
Author

Hey @lxtpvt, this is an error on the Python side. I think possibly you might just need to restart your session and the error will clear itself. Does this issue help? There are lots of other similar ones associated with pytorch if you Google "RuntimeError: CUDA error: unspecified launch failure"

@lxtpvt

lxtpvt commented May 8, 2023

Thanks Raphael, I fixed this problem by upgrading the "mlr3proba" package. However, there is another error from the "coxtime" model, as follows.

Error in py_call_impl(callable, dots$args, dots$keywords) :
AttributeError: 'Series' object has no attribute 'iteritems'

This happened PipeOp surv.coxtime.tuned's $train()

When I remove the "coxtime" model from the learners list, everything is OK.

@RaphaelS1
Author

Great! Strange, I've never seen that error before... It looks like it's in the underlying {pycox} implementation. Maybe try reinstalling pycox?
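
A minimal sketch of reinstalling it from R (this assumes reticulate is pointing at the Python environment that survivalmodels uses; adjust the environment as needed):

## reinstall the pycox Python package via pip
reticulate::py_install("pycox", pip = TRUE)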
