[Biomod-commits] model evaluation clarification

Robin Engler robin.engler at gmail.com
Wed Aug 17 23:52:05 CEST 2011

Hi Brenna,

> I understand that the "Evaluation" parameter referred to is the
> Cross.validation score.  I am assuming the "Calibration" parameter refers to
> the total.score.

Yes, that is correct.
Let's assume that you chose to split your data 30/70 percent (30% for
evaluation, 70% for calibration) and that you repeat the procedure 10 times.

"Cross.validation" = AUC/TSS computed on the 30% of the data that was
not used for model calibration, averaged over the 10 replicates.
"total.score" = AUC/TSS computed on the same 100% of the data that was
used for the final model calibration. This value is not independent and
should never be used to evaluate a model, unless you simply want to
explore your data without doing any projections with it.
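To make the distinction concrete, here is a minimal sketch of the two scores in Python, using scikit-learn's logistic regression and AUC in place of BIOMOD's internals (an assumption purely for illustration; BIOMOD itself is an R package and does not work this way under the hood):

```python
# Sketch of "Cross.validation" vs "total.score" on a toy dataset.
# scikit-learn stands in for BIOMOD here (illustration only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy presence/absence data.
X, y = make_classification(n_samples=500, random_state=0)

# "Cross.validation": AUC on the 30% held out from calibration,
# averaged over 10 repeated random splits.
cv_aucs = []
for rep in range(10):
    X_cal, X_eval, y_cal, y_eval = train_test_split(
        X, y, test_size=0.30, random_state=rep)
    model = LogisticRegression(max_iter=1000).fit(X_cal, y_cal)
    cv_aucs.append(roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1]))
cross_validation = float(np.mean(cv_aucs))

# "total.score": AUC computed on the same 100% of the data that the
# final model was calibrated on -- not an independent evaluation.
final = LogisticRegression(max_iter=1000).fit(X, y)
total_score = roc_auc_score(y, final.predict_proba(X)[:, 1])

print(cross_validation, total_score)
```

The total.score is typically optimistic because the model is scored on the very data it was fitted to, which is why only the cross-validation score should be trusted for evaluation.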

Not sure that this answers your questions (haven't read the Coetzee et
al paper, sorry), but hope it helps nevertheless.

Robin Engler
Spatial Ecology Group
University of Lausanne

> However, if so, I am confused by that, since my
> understanding is that the total score is based on either the "final model"
> (in this case, since it is not a rep) or the combination of the calibration
> & evaluation data sets (in the case of reps)...both of which use or include
> 100% of the data.
> Similarly, in a 2009 GEB article by Coetzee et al. using BIOMOD, they
> provided a Table (1) showing AUC and TSS statistics (mean, min and max) for
> "Calibration", "Evaluation" and "Original (Calibration + Evaluation)" for
> their models.  If someone could clarify how those categories correspond with
> the evaluation output in BIOMOD that would be very helpful.
> Thanks!
> Brenna Forester
> _______________________________________________
> Biomod-commits mailing list
> Biomod-commits at lists.r-forge.r-project.org
> https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/biomod-commits
