[Biomod-commits] model evaluation clarification

Brenna Forester brenna.forester at duke.edu
Wed Aug 17 18:50:12 CEST 2011


Hello all -

On page 46 of the BIOMOD manual it says:

-----------------------
To display the predictive accuracy by Roc of the GLM for the second species modelled

> Evaluation.results.Roc$Sp277_PA1["GLM",]

    Cross.validation indepdt.data total.score Cutoff   Sensitivity  Specificity
GLM 0.996            0.889        0.997       547.452  96.852       96.906

As you can see the GLM has a high predictive accuracy on this particular species. The fairly
small decrease of accuracy from the Calibration to the Evaluation is an indication that the model
does not tend to overfit the data.
-----------------------

I understand that the "Evaluation" value referred to is the Cross.validation score, and I am assuming that "Calibration" refers to the total.score.  If so, I am confused: my understanding is that the total score is based either on the final model (in this case, since there are no reps) or on the combination of the calibration and evaluation data sets (when reps are used), both of which use or include 100% of the data.
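To make concrete what I am assuming each score means, here is a minimal base-R sketch (not BIOMOD code; the AUC helper, the simulated data, and the 70/30 split are just placeholders) of a calibration score, an evaluation score on held-out data, and a "total" score from a model that sees 100% of the data. If my reading of the manual is wrong, this is probably where I am going astray:

set.seed(1)
n <- 500
x <- rnorm(n)
y <- rbinom(n, 1, plogis(0.8 * x))          # simulated presence/absence response
dat <- data.frame(x = x, y = y)

## Simple AUC: probability that a random presence scores above a random absence
auc <- function(obs, pred) {
  r <- rank(pred)
  n1 <- sum(obs == 1); n0 <- sum(obs == 0)
  (sum(r[obs == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

## 70/30 calibration / evaluation split
cal.idx <- sample(seq_len(n), size = 0.7 * n)
cal <- dat[cal.idx, ]
eva <- dat[-cal.idx, ]

fit <- glm(y ~ x, family = binomial, data = cal)

auc.calibration <- auc(cal$y, predict(fit, cal, type = "response"))  # scored on fitted data
auc.evaluation  <- auc(eva$y, predict(fit, eva, type = "response"))  # scored on held-out data

## "Total" as I read the manual: the final model fit to and scored on all of the data
fit.all   <- glm(y ~ x, family = binomial, data = dat)
auc.total <- auc(dat$y, predict(fit.all, dat, type = "response"))

c(calibration = auc.calibration, evaluation = auc.evaluation, total = auc.total)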

Similarly, the 2009 GEB article by Coetzee et al., which used BIOMOD, provides a table (Table 1) showing AUC and TSS statistics (mean, min and max) for "Calibration", "Evaluation" and "Original (Calibration + Evaluation)" for their models.  If someone could clarify how those categories correspond to the evaluation output in BIOMOD, that would be very helpful.

Thanks!
Brenna Forester