Postprint version. Published in Ecography, Volume 36, Issue 9, September 1, 2013, pages 1017-1031.
The definitive version is available at https://doi.org/10.1111/j.1600-0587.2013.00107.x.
Species distribution models (SDMs) are commonly applied to predict species’ responses to anticipated global change, but the lack of data from future time periods precludes direct assessment of their reliability. Instead, performance against test data from the same era is assumed to correlate with accuracy in the future. Moreover, high-confidence absence data are required for testing model accuracy but are often unavailable, since a species may be present yet undetected. Here we evaluate the performance of eight SDMs trained with historic (1900–1939) or modern (1970–2009) climate data and occurrence records for 18 mammalian species. Models were projected to the same or the opposing time period and evaluated with data obtained from surveys conducted by Joseph Grinnell and his colleagues in the Sierra Nevada of California from 1900 to 1939 and from modern resurveys from 2003 to 2011. Occupancy modeling was used to confidently assign absences at test sites where species were undetected. SDMs were evaluated using species’ presences combined with this high-confidence absence (HCA) set, a low-confidence set in which non-detections were assumed to indicate absence (LCA), and randomly located ‘pseudoabsences’ (PSA). Model performance increased significantly with the quality of absences (mean AUC ± SE: 0.76 ± 0.01 for PSA, 0.79 ± 0.01 for LCA, and 0.81 ± 0.01 for HCA), and apparent differences between SDMs declined as the quality of test absences increased. Models projected across time performed as well as those projected within the same time period when assessed with threshold-independent metrics. However, the accuracy of presence and absence predictions sometimes declined in cross-era projections. Although most variation in performance occurred among species, autecological traits were only weakly correlated with model accuracy.
Our study indicates that a) the quality of evaluation data affects assessments of model performance; b) within-era performance correlates positively but unreliably with cross-era performance; and c) SDMs can be projected across time, though such projections warrant caution.
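The effect described above, in which lower-quality test absences depress and blur apparent model performance, can be illustrated with a small sketch. The code below is not the authors' analysis; the suitability scores, sample sizes, and contamination rate are hypothetical, chosen only to mimic the three absence sets (HCA, LCA, PSA): low-confidence absences include some undetected presences, and pseudoabsences are random background points.

```python
# Illustrative sketch (hypothetical data, not the study's models or scores):
# how the quality of the test absence set changes the AUC an SDM receives.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical habitat-suitability scores from a fitted SDM
n_pres, n_abs = 50, 50
presence_scores = rng.beta(4, 2, n_pres)  # scores tend high at occupied sites
absence_scores = rng.beta(2, 4, n_abs)    # scores tend low at truly absent sites

def auc(pres, abs_):
    """AUC of suitability scores against a given absence set."""
    y_true = np.concatenate([np.ones_like(pres), np.zeros_like(abs_)])
    y_score = np.concatenate([pres, abs_])
    return roc_auc_score(y_true, y_score)

# HCA: high-confidence absences (truly absent sites only)
auc_hca = auc(presence_scores, absence_scores)

# LCA: low-confidence absences -- some 'absences' are really undetected
# presences, so their suitability scores look presence-like and AUC drops
contaminated = np.concatenate([absence_scores[:40], presence_scores[:10]])
auc_lca = auc(presence_scores, contaminated)

# PSA: randomly located pseudoabsences, uninformative about true absence
psa_scores = rng.uniform(0, 1, n_abs)
auc_psa = auc(presence_scores, psa_scores)

print(f"AUC  HCA={auc_hca:.2f}  LCA={auc_lca:.2f}  PSA={auc_psa:.2f}")
```

Under these assumptions the high-confidence set yields the highest AUC, mirroring the PSA < LCA < HCA ordering reported in the abstract: the better the absences, the more credit an accurate model receives.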