Accepted as a paper at ICLR 2023 Workshop on Machine Learning for Remote Sensing

EVALUATION CHALLENGES FOR GEOSPATIAL ML

Esther Rolf
Harvard University

ABSTRACT

As geospatial machine learning models and maps derived from their predictions are increasingly used for downstream analyses in science and policy, it is imperative to evaluate their accuracy and applicability. Geospatial machine learning has key distinctions from other learning paradigms, and as such, the correct way to measure performance of spatial machine learning outputs has been a topic of debate. In this paper, I delineate unique challenges of model evaluation for geospatial machine learning with global or remotely sensed datasets, culminating in concrete takeaways to improve evaluations of geospatial model performance.

1 MOTIVATION

Geospatial machine learning (ML), for example with remotely sensed data, is being used across consequential domains, including public health (Nilsen et al., 2021; Draidi Areed et al., 2022), conservation (Sofaer et al., 2019), food security (Nakalembe, 2018), and wealth estimation (Jean et al., 2016; Chi et al., 2022). By both their use and their very nature, geospatial predictions have a purpose beyond model benchmarking; mapped data are to be read, scrutinized, and acted upon. Thus, it is critical to rigorously and comprehensively evaluate how well a predicted map represents the state of the world it is meant to reflect, or how well a spatial ML model performs across the many conditions in which it might be used.

Unique structures in remotely sensed and geospatial data complicate or even invalidate use of traditional ML evaluation procedures. Partially as a result of misunderstandings of these complications, the stated performance of several geospatial models and predictive maps has come into question (Fourcade et al., 2018; Ploton et al., 2020). This in turn has sparked disagreement on what the “right” evaluation procedure is. With respect to a certain set of spatial evaluation methods (described in §4.1), one is jointly presented with the arguments that “spatial cross-validation is essential in preventing overoptimistic model performance” (Meyer et al., 2019) and that “spatial cross-validation methods have no theoretical underpinning and should not be used for assessing map accuracy” (Wadoux et al., 2021). That both statements can simultaneously hold reflects the importance of using a diverse set of evaluation methods tailored to the many ways in which a geospatial ML model might be used.

In this paper, I situate the challenges of geospatial model evaluation in the perspective of an ML researcher, synthesizing prior work across ecology, geology, statistics, and machine learning. I aim in part to disentangle key factors that complicate effective evaluation of model and map performance. First and foremost, evaluation procedures should be designed to measure as closely as possible the quantity or phenomena they are intended to assess (§2). After the relevant performance measures are established, considerations can be made about what is feasible with the available data (§3). With all of this in mind, possible evaluation procedures (§4) can be compared and tailored to the task at hand. Recognizing the interaction of these distinct but related steps exposes opportunities to improve geospatial performance assessment, both in individual studies and more broadly (§5).

2 MAP ACCURACY AND MODEL PERFORMANCE: CONTRASTING VIEWS

Estimating accuracy indices and corresponding uncertainties of geospatial predictions is essential to reporting geospatial ML performance (§2.1), especially when prediction maps will be used for downstream analyses or policy decisions. At the same time, the potential value of a geospatial ML model likely extends beyond that of a single mapped output (§2.2). Delineating the (possibly many) facets of desired model and map use is key to measuring geospatial ML performance (§2.3).

2.1 MAP ACCURACY AS A POPULATION PARAMETER TO BE ESTIMATED

Establishing notation we will use throughout, let $\hat{y}(\ell)$ denote a model's predicted value at location $\ell$, and $y(\ell)$ the reference, or “ground truth” value (which we assume can be measured). To calculate a map accuracy index as a population parameter for accuracy index $F$ is to calculate $A(D) = F(\{(\hat{y}(\ell), y(\ell))\}_{\ell \in D})$, where $D$ is the target population of map use (e.g. all (lat, lon) pairs in a global grid, or all administrative units in a set of countries). Examples of common $F$ include root mean squared error and area under the ROC curve, among many others (Maxwell et al., 2021).
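
To ground the notation, here is a minimal sketch in Python (my illustration, not code from the paper) of computing $A(D)$ directly, with RMSE as the index $F$, in the idealized case where reference values are available for every location in $D$. All variable names and the toy grid are hypothetical.

```python
import numpy as np

def rmse(y_hat: np.ndarray, y: np.ndarray) -> float:
    """Accuracy index F: root mean squared error over paired values."""
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))

# Toy stand-in for the target population D: every cell of a 1-degree global
# grid. In practice y_true is the reference map and y_pred the predicted map.
rng = np.random.default_rng(0)
y_true = rng.normal(size=(180, 360))                  # y(l) for each cell
y_pred = y_true + rng.normal(0.1, 0.3, y_true.shape)  # y_hat(l) for each cell

A_of_D = rmse(y_pred.ravel(), y_true.ravel())  # population parameter A(D)
print(f"A(D) as RMSE over the full grid: {A_of_D:.3f}")
```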

Typically, one only has a limited set of values $y$ for locations in an evaluation set $\ell \in S_{\mathrm{eval}}$ from which to compute a statistic $\hat{A}(S_{\mathrm{eval}})$ to estimate $A(D)$. Wadoux et al. (2021) discuss the value of using a design-independent probability sample for design-based estimation of $A$ (in contrast, model-based estimation makes statistical assumptions about the data (Brus, 2021)). Here a design-independent sample is one collected independently of the model training process. A probability sample is one for which every location in $D$ has a positive probability of appearing in $S_{\mathrm{eval}}$, and these probabilities are known for all $\ell \in S_{\mathrm{eval}}$ (see, e.g. Lohr (2021)). Wadoux et al. (2021) emphasize that when $S_{\mathrm{eval}}$ is a design-independent probability sample from population $D$, design-based inference can be used to estimate $A(D)$ with $\hat{A}(S_{\mathrm{eval}})$, regardless of the prediction model or distribution of training data.
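
As one concrete illustration of design-based estimation, the sketch below applies a Horvitz-Thompson-style estimator of population mean squared error using the known inclusion probabilities of a probability sample. This is a standard design-based estimator offered under stated assumptions; it is not the specific procedure of Wadoux et al. (2021), and all names are illustrative.

```python
import numpy as np

def design_based_mse(y_hat_s, y_s, pi_s, N):
    """Horvitz-Thompson estimate of population mean squared error.

    y_hat_s, y_s : predictions and reference values at sampled locations
    pi_s         : known inclusion probability of each sampled location (> 0)
    N            : number of locations in the target population D
    """
    errors = (y_hat_s - y_s) ** 2
    return float(np.sum(errors / pi_s) / N)

# Toy illustration: a simple random sample of n cells from a grid of N cells,
# so the inclusion probability is pi(l) = n / N for every sampled location.
rng = np.random.default_rng(1)
N, n = 64_800, 500
y = rng.normal(size=N)
y_hat = y + rng.normal(0.1, 0.3, N)
sample = rng.choice(N, size=n, replace=False)
pi = np.full(n, n / N)

mse_hat = design_based_mse(y_hat[sample], y[sample], pi, N)
print(f"Design-based estimate of A(D) as RMSE: {np.sqrt(mse_hat):.3f}")
```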

Computing statistically valid estimates of map accuracy indices is clearly a key component of reporting overall geospatial ML model performance. It is often important to understand how accuracy and uncertainty in predictions vary across sub-populations $D_{r_1}, D_{r_2}, \ldots \subset D$ (such as administrative regions or climate zones (Meyer & Pebesma, 2022)). If local accuracy indices $A(D_{r_1}), A(D_{r_2}), \ldots$ are low in certain sub-regions, this could expose concerns about fairness or model applicability.
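
A disaggregated report of this kind amounts to grouping the evaluation set by a region label, as in this brief hypothetical sketch (the helper names are mine, not the paper's):

```python
import numpy as np

def rmse(y_hat: np.ndarray, y: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))

def per_region_accuracy(y_hat, y, region):
    """Local accuracy index A(D_r) for each sub-population D_r,
    given a region label for every evaluated location."""
    return {r: rmse(y_hat[region == r], y[region == r])
            for r in np.unique(region)}

# Example use: flag sub-regions whose local index is far worse than global.
# local = per_region_accuracy(y_hat, y, region)
# flagged = {r: a for r, a in local.items() if a > 2 * rmse(y_hat, y)}
```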

2.2 MODEL PERFORMANCE EXTENDS BEYOND MAP ACCURACY

Increasingly, geospatial ML models are designed with the goal of being used outside of the regions where training labels are available. Models trained with globally available remotely sensed data might be used to “fill in” spatial gaps common to other data modalities (§3.2). The goals of spatial generalization, spatial extrapolation, or spatial domain adaptation can take different forms: e.g. applying a model trained with data from one region to a wholly new region, or using data from a few clusters or subregions to extend predictions across the entire region. When spatial generalizability is desired, performance should be assessed specifically with respect to this goal (§4).
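
One direct way to assess this goal is to hold out entire regions during validation. Below is a hedged sketch of a leave-one-region-out evaluation (one form of spatial cross-validation, per §4.1) on synthetic data; it illustrates the general idea and is not a procedure prescribed by the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic stand-ins: remotely sensed features, a target, and region labels.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = X[:, 0] + rng.normal(0, 0.1, 1000)
region = rng.integers(0, 5, 1000)

# Each fold trains on all but one region and tests on the held-out region,
# approximating prediction in an area with no training labels.
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=region):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    mse = mean_squared_error(y[test_idx], model.predict(X[test_idx]))
    scores.append(np.sqrt(mse))
print(f"Held-out-region RMSEs: {np.round(scores, 3)}")
```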

While spatial generalization is a key component of performance for many geospatial models, it too is just one facet of geospatial model performance. Proposed uses of geospatial ML models and their outputs include estimation of natural or causal parameters (Proctor et al., 2023), and reducing autocorrelation of prediction residuals in-sample (Song & Kim, 2022). Other important facets of geospatial ML performance are model interpretability (Brenning, 2022) and usability, including the resources required to train, deploy, and maintain models (Rolf et al., 2021).

2.3 CONTRASTING PERSPECTIVES ON PERFORMANCE ASSESSMENT

The differences between estimating map accuracy as a population parameter (§2.1) and assessing a model's performance in the conditions in which it is most likely to be used (§2.2) are central to one of the discrepancies introduced in §1. Meyer et al. (2019); Ploton et al. (2020); Meyer & Pebesma (2022) state concerns in light of numerous ecological studies applying non-spatial validation techniques with the explicit purpose of spatial generalization. They rightly caution that when data exhibit spatial correlation (§3.1), non-spatial validation methods will almost certainly over-estimate predictive performance in these use cases. Wadoux et al. (2021), in turn, argue that performance metrics from spatial validation methods will not necessarily tell you anything about $A$ as a population parameter.
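
The over-estimation concern can be made concrete with a small synthetic experiment: when features are spatially autocorrelated and the target depends on an unobserved cluster-level effect, random K-fold CV scores far exceed scores from CV that holds out whole spatial clusters. The sketch below is my illustration of that effect, not an experiment from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(3)
n_clusters, per_cluster = 20, 50
cluster = np.repeat(np.arange(n_clusters), per_cluster)
# Features are nearly constant within each spatial cluster (autocorrelation),
# while the target depends on an unobserved cluster-level effect, so the
# feature-target link learned in-sample does not transfer to unseen clusters.
X = rng.normal(size=(n_clusters, 5))[cluster] \
    + rng.normal(0, 0.05, (cluster.size, 5))
y = rng.normal(size=n_clusters)[cluster] + rng.normal(0, 0.3, cluster.size)

model = RandomForestRegressor(n_estimators=50, random_state=0)
r2_random = cross_val_score(model, X, y,
                            cv=KFold(5, shuffle=True, random_state=0))
r2_spatial = cross_val_score(model, X, y, cv=GroupKFold(5), groups=cluster)
print(f"random K-fold R^2 (optimistic): {r2_random.mean():.2f}")
print(f"grouped spatial CV R^2:         {r2_spatial.mean():.2f}")
```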

A second discrepancy between these two perspectives hinges on what data is assumed to be available (or collectable). While there are some major instances of probability samples being collected for evaluation of global-scale maps (Boschetti et al., 2016; Stehman et al., 2021), this is far from standard in geospatial ML studies (Maxwell et al., 2021). More often, datasets are created “by merging all data available from different sources” (Meyer & Pebesma, 2022). Whatever the intended use of a geospatial model, the availability of and structures within geospatial and remotely sensed data must be contended with in order to reliably evaluate any sort of performance.