Assess semantic segmentation model using point locations

Usage

assessPnts(
  reference,
  predicted,
  multiclass = TRUE,
  mappings = levels(as.factor(reference)),
  decimals = 4
)

Arguments

reference

Data frame column or vector of reference classes.

predicted

Data frame column or vector of predicted classes.

multiclass

TRUE or FALSE. Use TRUE if more than two classes are differentiated. Use FALSE if only two classes are differentiated, consisting of a positive class and a background/negative class. Default is TRUE.

mappings

Vector of class names. These must be in the same order as the factor levels so that each name is matched to the correct category; a short illustration follows the argument descriptions. If no mappings are provided, the factor levels are used by default. For a binary classification, the first class is assumed to be "Background" and the second class "Positive".

decimals

Number of decimal places to return for assessment metrics. Default is 4.
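
Because factor levels created from character data are sorted alphabetically in R, the mappings vector must be supplied in that same order. A minimal illustration of how the default levels are derived:

#Factor levels from character data are sorted alphabetically,
#so mappings must be supplied in the same order
ref <- c("Class B", "Class A", "Class C", "Class A")
levels(as.factor(ref))
#> [1] "Class A" "Class B" "Class C"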

Value

List object containing the resulting metrics and ancillary information.

Details

This function generates a set of summary assessment metrics when provided with reference and predicted classes. Results are returned as a list object.

For a multiclass assessment, the list contains the class names ($Classes), the count of samples per class in the reference data ($referenceCounts), the count of samples per class in the predictions ($predictionCounts), the confusion matrix ($confusionMatrix), aggregated assessment metrics ($aggMetrics) (OA = overall accuracy, macroF1 = macro-averaged class-aggregated F1-score, macroPA = macro-averaged class-aggregated producer's accuracy or recall, and macroUA = macro-averaged class-aggregated user's accuracy or precision), class-level user's accuracies or precisions ($userAccuracies), class-level producer's accuracies or recalls ($producerAccuracies), and class-level F1-scores ($f1Scores).

For a binary case, the $Classes, $referenceCounts, $predictionCounts, and $confusionMatrix objects are also returned; however, the $aggMetrics object is replaced with $Mets, which stores the following metrics: overall accuracy, recall, precision, specificity, negative predictive value (NPV), and F1-score. For binary cases, the second class is assumed to be the positive case.
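
Individual metrics can be extracted from the returned list with standard R indexing. A minimal sketch, assuming the metsOut object created in the multiclass example below and the component structure shown in its printed output:

#Extract the overall accuracy from the aggregated metrics data frame
metsOut$aggMetrics$OA

#Extract the user's accuracy (precision) for a single class
metsOut$userAccuracies["Class A"]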

Examples

#Multiclass example
#Generate example data as data frame of class predictions
#(sample() is not seeded, so your counts and metrics will differ from the output shown)
inDF <- data.frame(ref = sample(c("Class A", "Class B", "Class C"), 1000, replace=TRUE),
                   pred = sample(c("Class A", "Class B", "Class C"), 1000, replace=TRUE))

#Calculate metrics
metsOut <- assessPnts(reference=inDF$ref,
                      predicted=inDF$pred,
                      multiclass=TRUE,
                      mappings=c("Class A", "Class B", "Class C"),
                      decimals=4)

print(metsOut)
#> $Classes
#> [1] "Class A" "Class B" "Class C"
#> 
#> $referenceCounts
#> Class A Class B Class C 
#>     326     363     311 
#> 
#> $predictionCounts
#> Class A Class B Class C 
#>     347     345     308 
#> 
#> $confusionMatrix
#>          Reference
#> Predicted Class A Class B Class C
#>   Class A     103     139     105
#>   Class B     116     123     106
#>   Class C     107     101     100
#> 
#> $aggMetrics
#>      OA macroF1 macroPA macroUA
#> 1 0.326  0.3257  0.3254   0.326
#> 
#> $userAccuracies
#> Class A Class B Class C 
#>  0.2968  0.3565  0.3247 
#> 
#> $producerAccuracies
#> Class A Class B Class C 
#>  0.3160  0.3388  0.3215 
#> 
#> $f1Scores
#> Class A Class B Class C 
#>  0.3061  0.3475  0.3231 
#> 

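The class-level metrics above can be reproduced directly from the confusion matrix. A sketch using the counts printed above (predictions in rows, reference labels in columns); the rounded results match the reported accuracies:

#Rebuild the printed confusion matrix
cm <- matrix(c(103, 139, 105,
               116, 123, 106,
               107, 101, 100),
             nrow=3, byrow=TRUE,
             dimnames=list(Predicted=c("Class A", "Class B", "Class C"),
                           Reference=c("Class A", "Class B", "Class C")))

#Overall accuracy: correct predictions divided by total samples
sum(diag(cm))/sum(cm)            #0.326

#User's accuracies (precision): diagonal divided by row totals
round(diag(cm)/rowSums(cm), 4)   #0.2968 0.3565 0.3247

#Producer's accuracies (recall): diagonal divided by column totals
round(diag(cm)/colSums(cm), 4)   #0.3160 0.3388 0.3215
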
#Binary example

#Generate example data as data frame of class predictions
inDF <- data.frame(ref = sample(c("Background", "Positive"), 1000, replace=TRUE),
                   pred = sample(c("Background", "Positive"), 1000, replace=TRUE))

#Calculate metrics
metsOut <- assessPnts(reference=inDF$ref,
                      predicted=inDF$pred,
                      multiclass=FALSE,
                      mappings=c("Background", "Positive"),
                      decimals=4)

print(metsOut)
#> $Classes
#> [1] "Background" "Positive"  
#> 
#> $referenceCounts
#> Negative Positive 
#>      458      542 
#> 
#> $predictionCounts
#> Negative Positive 
#>      500      500 
#> 
#> $ConfusionMatrix
#>           Reference
#> Predicted  Negative Positive
#>   Negative      225      275
#>   Positive      233      267
#> 
#> $Mets
#>             OA Recall Precision Specificity  NPV F1Score
#> Positive 0.492 0.4926     0.534      0.4913 0.45  0.5125
#>
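
The binary metrics can likewise be checked by hand from the printed confusion matrix, treating the second (positive) class as the positive case:

#Counts taken from the printed binary confusion matrix
TP <- 267  #predicted Positive, reference Positive
FP <- 233  #predicted Positive, reference Negative
FN <- 275  #predicted Negative, reference Positive
TN <- 225  #predicted Negative, reference Negative

recall <- TP/(TP + FN)       #0.4926
precision <- TP/(TP + FP)    #0.534

round((TP + TN)/(TP + TN + FP + FN), 4)           #Overall accuracy: 0.492
round(TN/(TN + FP), 4)                            #Specificity: 0.4913
round(TN/(TN + FN), 4)                            #NPV: 0.45
round(2*precision*recall/(precision + recall), 4) #F1-score: 0.5125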