[Biomod-commits] trade-offs between optimizing SDMs vs ensembles
jason b mackenzie
jasonbmackenzie at gmail.com
Sat Feb 11 03:11:57 CET 2012
Yesterday Wilfried attached a reference in press (Barbet et al. 2012, link below) that offers some very useful guidance for getting the best performance out of individual SDM techniques (e.g. GBM, MARS, GAM, FDA, etc.) by varying the pseudo-absence data (e.g. numbers, selection strategies, stratification) and run repetitions. Similarly, for any given technique, individual species may often require further optimization of the pseudo-absence data, based on the numbers and/or biases (e.g. spatial/climatic) associated with their presence data.
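As a rough sketch of the "vary the pseudo-absence data" idea (random selection with several run repetitions), here is a minimal Python example; the grid of candidate cells, the presence sample, and all sizes are purely hypothetical, not anything from BIOMOD itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical background of candidate grid cells (IDs 0..9999).
# Cells holding presences are excluded before drawing pseudo-absences.
background = np.arange(10_000)
presences = rng.choice(background, size=200, replace=False)
candidates = np.setdiff1d(background, presences)

def draw_pseudo_absences(candidates, n_pa, n_reps, rng):
    """Return n_reps independent random pseudo-absence samples of size n_pa."""
    return [rng.choice(candidates, size=n_pa, replace=False)
            for _ in range(n_reps)]

# Five run repetitions, each with 1000 randomly selected pseudo-absences;
# varying n_pa and n_reps is one axis of the optimization discussed above.
pa_sets = draw_pseudo_absences(candidates, n_pa=1000, n_reps=5, rng=rng)
print(len(pa_sets), len(pa_sets[0]))  # → 5 1000
```

Each repetition would then be paired with the presence data to fit one model run, and evaluation scores averaged across repetitions.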
My question for the list is about how to balance trade-offs when the goal is to produce ensemble forecasts. By varying the starting data (e.g. pseudo-absences) for each SDM technique, the individual methods are expected to perform better. Unfortunately, if I understand correctly, the cost may be that it's no longer warranted to directly compare evaluation scores (e.g. TSS values) to identify the best-performing techniques, because they use different starting data.
The authors suggest that one solution is to create ensembles by first selecting the top-performing method within each set of starting data (e.g. GBM vs. RF, because these are optimized similarly), then combining the top performers from each set of starting conditions with equal weighting. One of the features I was most excited about in BIOMOD was the ability to produce ensembles with proportional weighting, where more value is assigned to the top-performing methods. Now it seems there is a tension between optimizing SDM methods and optimizing ensemble forecasts, and I'm not sure which is more important. I want to follow the guidance above and avoid including poorly optimized SDMs in my ensembles, but it also feels unsatisfying to give the same weight to different methods in an ensemble so long as they clear some more-or-less arbitrary threshold. For now, I will go with the recommended approach of optimizing the SDMs first, then building ensembles with equal weighting.
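The difference between equal and score-proportional weighting can be sketched as follows; the model names, suitability predictions, and TSS scores are purely illustrative assumptions, not output from any real model run:

```python
import numpy as np

# Hypothetical habitat-suitability predictions from three methods for the
# same three cells, plus illustrative TSS evaluation scores per method.
predictions = {
    "GBM":  np.array([0.8, 0.2, 0.6]),
    "GAM":  np.array([0.7, 0.3, 0.5]),
    "MARS": np.array([0.4, 0.6, 0.9]),
}
tss = {"GBM": 0.75, "GAM": 0.70, "MARS": 0.40}

def ensemble(predictions, weights):
    """Weighted mean of model predictions; weights are normalised to sum to 1."""
    w = np.array([weights[m] for m in predictions])
    w = w / w.sum()
    return w @ np.stack(list(predictions.values()))

# Equal weighting: every method above the inclusion threshold counts the same.
equal = ensemble(predictions, {m: 1.0 for m in predictions})

# Proportional weighting: better-evaluated methods pull the ensemble
# toward their predictions.
proportional = ensemble(predictions, tss)
```

Here the proportional ensemble leans toward GBM and GAM (higher TSS) and away from MARS, whereas the equal-weight ensemble is a plain mean, which is the trade-off described above.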
Out of curiosity, does anyone have alternative suggestions for how I might still distinguish the top-performing SDM methods, each using its own optimal starting data, so that I can put more emphasis on the best models in my ensembles?
On Feb 10, 2012, at 6:11 PM, Wilfried Thuiller wrote:
> Dear Jason,
> I would recommend using "random" pseudo-absence selection instead of SRE or circle, which tend to overfit the data somewhat.
> I take the liberty of sending you this paper, in press with Methods in Ecology and Evolution, which discusses the different strategies for selecting pseudo-absences:
> Hope it helps,