
2.4. Classification Accuracy Assessment
This protocol is extracted from the research article:
Land Use/Land Cover Change and Its Driving Forces in Shenkolla Watershed, South Central Ethiopia
The Scientific World Journal, Feb 18, 2021.

Procedure

To assess the accuracy of the classified images, 100 random sample points (50 for each land use) were created in ArcGIS 10.3 for the LU/LC maps of 1973, 1995, and 2017, respectively. Ground control points recorded with a hand-held GPS were used as reference data to evaluate the results. In addition, reference points collected from the 1973 topographic map, visual interpretation of the raw 1995 Landsat TM images, personal knowledge of the study area, and Google Earth images were used. Classification accuracy of the resulting LU/LC layers was assessed by comparing the sample LU/LC classes of the classified layer with those of the reference layer to identify agreements and disagreements; that is, the classified images were compared with the reference images by creating an error matrix. From this comparison, the proportion of correctly classified pixels was estimated. Error matrices were plotted as cross tabulations of the classified data versus the reference data and used to assess classification accuracy. The overall accuracy, user's and producer's accuracies, and the Kappa coefficient were then derived from the error matrices. Overall accuracy was calculated using equation (1), and the Kappa coefficient using equation (2):

A = (x / y) × 100    (1)

where A is the overall accuracy (%), x is the number of correct values on the diagonal of the matrix, and y is the total number of reference points.
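As a rough illustration (not part of the article), the error-matrix construction and overall-accuracy calculation described above can be sketched in Python; the class names and sample labels below are hypothetical:

```python
from collections import Counter

def error_matrix(reference, classified, classes):
    """Cross-tabulate classified labels (rows) against reference labels
    (columns), as in the protocol's error matrix."""
    counts = Counter(zip(classified, reference))
    return [[counts[(c, r)] for r in classes] for c in classes]

def overall_accuracy(matrix):
    """A = x / y: diagonal (correct) counts over total reference points."""
    x = sum(matrix[i][i] for i in range(len(matrix)))
    y = sum(sum(row) for row in matrix)
    return x / y

# Hypothetical validation points for two LU/LC classes
ref = ["forest", "crop", "forest", "crop", "crop", "forest"]
cls = ["forest", "crop", "crop", "crop", "forest", "forest"]
m = error_matrix(ref, cls, ["forest", "crop"])
print(m)                          # [[2, 1], [1, 2]]
print(overall_accuracy(m) * 100)  # 4 of 6 correct -> about 66.7%
```

In practice the reference labels would come from the GPS ground control points and the classified labels from the corresponding pixels of the LU/LC map.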

The Kappa coefficient is a measure of the overall agreement of a matrix. Unlike overall accuracy, it also takes the nondiagonal elements into account. The Kappa coefficient, which measures the difference between the actual agreement of the classified map with the reference data and the chance agreement of a random classifier, was calculated as follows:

K = (N Σᵢ x_ii − Σᵢ (x_i+ × x_+i)) / (N² − Σᵢ (x_i+ × x_+i))    (2)

where K is the Kappa coefficient, r is the number of rows in the matrix, x_ii is the number of observations in row i and column i, x_i+ and x_+i are the marginal totals of row i and column i, respectively, N is the total number of observations, and the sums run over i = 1, …, r.
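Continuing the illustrative sketch (again, not from the article), equation (2) maps directly onto the row and column totals of the error matrix; the matrix values below are hypothetical:

```python
def kappa(matrix):
    """K = (N * sum(x_ii) - sum(x_i+ * x_+i)) / (N^2 - sum(x_i+ * x_+i)),
    summing over the r rows/columns of the error matrix."""
    r = len(matrix)
    N = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(r))          # sum of x_ii
    row_tot = [sum(matrix[i]) for i in range(r)]        # x_i+
    col_tot = [sum(matrix[i][j] for i in range(r)) for j in range(r)]  # x_+i
    chance = sum(row_tot[i] * col_tot[i] for i in range(r))
    return (N * diag - chance) / (N * N - chance)

m = [[2, 1], [1, 2]]  # hypothetical 2-class error matrix
print(kappa(m))       # (6*4 - 18) / (36 - 18) = 1/3
```

A Kappa of 0 indicates agreement no better than chance, while 1 indicates perfect agreement between the classified map and the reference data.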

