Predicting into unknown space? Estimating the area of applicability of spatial prediction models

- Hanna Meyer
  Institute of Landscape Ecology, Westfälische Wilhelms-Universität Münster, Münster, Germany

- Edzer Pebesma
  Institute for Geoinformatics, Westfälische Wilhelms-Universität Münster, Münster, Germany
Description

Abstract

1. Machine learning algorithms have become very popular for spatial mapping of the environment due to their ability to fit nonlinear and complex relationships. However, this ability comes with the disadvantage that they can only be applied to new data if these are similar to the training data. Since spatial mapping requires predictions into new geographic space, which in many cases goes along with new predictor properties, a method is required to assess the area to which a prediction model can be reliably applied.

2. Here, we suggest a methodology that delineates the 'area of applicability' (AOA), which we define as the area where the model was enabled to learn about relationships based on the training data, and where the estimated cross-validation performance holds. We first propose a 'dissimilarity index' (DI) based on the minimum distance to the training data in the multidimensional predictor space, with predictors weighted by their respective importance in the model. The AOA is then derived by applying a threshold: the (outlier-removed) maximum DI of the training data, derived via cross-validation. We further use the relationship between the DI and the cross-validation performance to map the estimated performance of predictions. We illustrate the approach in a simulated case study chosen to mimic ecological realities and test its credibility using a large set of simulated data.

3. The simulation studies showed that the prediction error within the AOA is comparable to the cross-validation error of the trained model, while the cross-validation error does not apply outside the AOA. This holds for models trained with randomly distributed training data, as well as when training data are clustered in space and spatial cross-validation is applied. Using the relationship between DI and cross-validation performance showed potential to limit predictions to the area where a user-defined performance applies.

4. We suggest adding the AOA computation to the modeller's standard toolkit and presenting predictions for the AOA only. We further suggest reporting a map of DI-dependent performance estimates alongside prediction maps, complementary to (cross-)validation performance measures and the common uncertainty estimates.
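The DI and AOA threshold described in the abstract can be sketched compactly. Below is a minimal Python/NumPy sketch under stated assumptions: predictors are standardized by training mean and standard deviation before importance weighting, the DI is normalized by the mean pairwise distance among training points, and the "outlier-removed maximum" uses the boxplot whisker rule. The authors' reference implementation is the `aoa()` function in the R package CAST; every name below (`dissimilarity_index`, `training_di`, `aoa_threshold`, the example weights and folds) is illustrative, not taken from the paper.

```python
import numpy as np

def _weighted_scaled(train, X, weights):
    """Standardize predictors by training mean/sd, then apply importance weights."""
    mu, sd = train.mean(axis=0), train.std(axis=0)
    return (X - mu) / sd * weights

def dissimilarity_index(train, new, weights):
    """DI of new points: minimum weighted distance to the training data,
    normalized by the mean pairwise distance among training points."""
    tw = _weighted_scaled(train, train, weights)
    nw = _weighted_scaled(train, new, weights)
    pair = np.linalg.norm(tw[:, None, :] - tw[None, :, :], axis=2)
    d_bar = pair[np.triu_indices_from(pair, k=1)].mean()  # mean pairwise distance
    d_min = np.linalg.norm(nw[:, None, :] - tw[None, :, :], axis=2).min(axis=1)
    return d_min / d_bar

def training_di(train, weights, folds):
    """DI of each training point via cross-validation: minimum distance
    to training points outside its own CV fold."""
    tw = _weighted_scaled(train, train, weights)
    pair = np.linalg.norm(tw[:, None, :] - tw[None, :, :], axis=2)
    d_bar = pair[np.triu_indices_from(pair, k=1)].mean()
    di = np.array([pair[i, folds != folds[i]].min() for i in range(len(tw))])
    return di / d_bar

def aoa_threshold(di_train):
    """Outlier-removed maximum of the training DI (boxplot whisker rule,
    assumed here as the outlier-removal criterion)."""
    q1, q3 = np.percentile(di_train, [25, 75])
    return di_train[di_train <= q3 + 1.5 * (q3 - q1)].max()

# Hypothetical usage: prediction locations whose DI does not exceed the
# threshold fall inside the AOA.
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 3))      # 100 training samples, 3 predictors
new = rng.normal(size=(500, 3))        # predictor values at new locations
weights = np.array([0.6, 0.3, 0.1])    # hypothetical variable importances
folds = rng.integers(0, 5, size=100)   # random 5-fold CV assignment
threshold = aoa_threshold(training_di(train, weights, folds))
inside_aoa = dissimilarity_index(train, new, weights) <= threshold
```

Under these assumptions, masking predictions with `inside_aoa` yields the "predictions for the AOA only" that the abstract recommends reporting.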
Journal

- Methods in Ecology and Evolution, 12(9), 1620-1633, published 2021-07-26 by Wiley
Details
- CRID: 1360576120152916224
- ISSN: 2041-210X
- Data Source: Crossref