Assess a semantic segmentation model using categorical raster grids (wall-to-wall reference data and predictions)

Usage

assessRaster(
  reference,
  predicted,
  multiclass = TRUE,
  mappings = levels(as.factor(reference)),
  decimals = 4
)

Arguments

reference

SpatRaster object of reference class codes/indices.

predicted

SpatRaster object of predicted class codes/indices.

multiclass

TRUE or FALSE. Use TRUE if more than two classes are differentiated; use FALSE if only two classes are differentiated, one positive and one background/negative. Default is TRUE.

mappings

Vector of class names. These must be in the same order as the class indices so that each name is matched to the correct category (see the sketch after this argument list). If no mappings are provided, the factor levels or class indices are used by default. For a binary classification, the first class is assumed to be "Background" and the second class "Positive".

decimals

Number of decimal places to return for assessment metrics. Default is 4.
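
For example, a minimal sketch of how mappings is matched positionally to the sorted cell values (the raster and the class names "Water", "Forest", and "Urban" are hypothetical):

# Cells coded 1, 2, and 3 are matched positionally to the class names,
# so mappings must follow the sorted order of the unique cell values.
ref <- terra::rast(matrix(sample(1:3, 100, replace = TRUE), nrow = 10, ncol = 10))
levels(as.factor(terra::values(ref))) # "1" "2" "3": the order mappings must follow
myMappings <- c("Water", "Forest", "Urban")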

Value

List object containing the resulting metrics and ancillary information.

Details

This function generates a set of summary assessment metrics when provided reference and predicted classes. Results are returned as a list object.

For a multiclass assessment, the list contains the class names ($Classes), the count of samples per class in the reference data ($referenceCounts), the count of samples per class in the predictions ($predictionCounts), a confusion matrix ($confusionMatrix), aggregated assessment metrics ($aggMetrics) (OA = overall accuracy, macroF1 = macro-averaged class-aggregated F1-score, macroPA = macro-averaged class-aggregated producer's accuracy or recall, and macroUA = macro-averaged class-aggregated user's accuracy or precision), class-level user's accuracies or precisions ($userAccuracies), class-level producer's accuracies or recalls ($producerAccuracies), and class-level F1-scores ($f1Scores).

For a binary case, the $Classes, $referenceCounts, $predictionCounts, and $confusionMatrix objects are also returned; however, the $aggMetrics object is replaced with $Mets, which stores the following metrics: overall accuracy, recall, precision, specificity, negative predictive value (NPV), and F1-score. For binary cases, the second class is assumed to be the positive case.
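
Individual components can be pulled from the returned list with standard list indexing. A minimal sketch, assuming metsOut holds the result of the multiclass call shown in the examples below:

metsOut$aggMetrics$OA             # overall accuracy
metsOut$confusionMatrix           # confusion matrix (predictions by reference)
metsOut$userAccuracies["Class A"] # user's accuracy (precision) for one class
metsOut$f1Scores                  # class-level F1-scores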

Examples

if(requireNamespace("terra", quietly = TRUE)){
require(torch)
require(terra)
#Multiclass example

#Generate example data as SpatRasters
ref <- terra::rast(matrix(sample(c(1, 2, 3), 625, replace=TRUE), nrow=25, ncol=25))
pred <- terra::rast(matrix(sample(c(1, 2, 3), 625, replace=TRUE), nrow=25, ncol=25))

#Calculate metrics
metsOut <- assessRaster(reference=ref,
                        predicted=pred,
                        multiclass=TRUE,
                        mappings=c("Class A", "Class B", "Class C"),
                        decimals=4)

print(metsOut)

#Binary example

#Generate example data as SpatRasters
ref <- terra::rast(matrix(sample(c(0, 1), 625, replace=TRUE), nrow=25, ncol=25))
pred <- terra::rast(matrix(sample(c(0, 1), 625, replace=TRUE), nrow=25, ncol=25))

#Calculate metrics
metsOut <- assessRaster(reference=ref,
                        predicted=pred,
                        multiclass=FALSE,
                        mappings=c("Background", "Positive"),
                        decimals=4)

print(metsOut)
}
#> Loading required package: torch
#> Loading required package: terra
#> terra 1.7.55
#> $Classes
#> [1] "Class A" "Class B" "Class C"
#> 
#> $referenceCounts
#> Class A Class B Class C 
#>     213     204     208 
#> 
#> $predictionCounts
#> Class A Class B Class C 
#>     221     202     202 
#> 
#> $confusionMatrix
#>          Reference
#> Predicted Class A Class B Class C
#>   Class A      79      66      76
#>   Class B      64      72      66
#>   Class C      70      66      66
#> 
#> $aggMetrics
#>       OA macroF1 macroPA macroUA
#> 1 0.3472   0.347   0.347  0.3469
#> 
#> $userAccuracies
#> Class A Class B Class C 
#>  0.3575  0.3564  0.3267 
#> 
#> $producerAccuracies
#> Class A Class B Class C 
#>  0.3709  0.3529  0.3173 
#> 
#> $f1Scores
#> Class A Class B Class C 
#>  0.3641  0.3547  0.3220 
#> 
#> $Classes
#> [1] "Background" "Positive"  
#> 
#> $referenceCounts
#> Negative Positive 
#>      326      299 
#> 
#> $predictionCounts
#> Negative Positive 
#>      302      323 
#> 
#> $ConfusionMatrix
#>           Reference
#> Predicted  Negative Positive
#>   Negative      150      152
#>   Positive      176      147
#> 
#> $Mets
#>              OA Recall Precision Specificity    NPV F1Score
#> Positive 0.4752 0.4916    0.4551      0.4601 0.4967  0.4727
#>
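
As a rough consistency check, the aggregated metrics can be recomputed from the returned confusion matrix using the standard definitions. A minimal sketch, assuming metsOut holds the multiclass result from the first example:

cm <- metsOut$confusionMatrix  # predictions in rows, reference in columns
oa <- sum(diag(cm)) / sum(cm)  # overall accuracy
ua <- diag(cm) / rowSums(cm)   # user's accuracy (precision) per class
pa <- diag(cm) / colSums(cm)   # producer's accuracy (recall) per class
f1 <- 2 * ua * pa / (ua + pa)  # class-level F1-score
c(OA = oa, macroF1 = mean(f1), macroPA = mean(pa), macroUA = mean(ua))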