
Leah Bennett

Comparison of 4- and 8-band World View-2 Images of Acme, WA Using Image Segmentation and Unsupervised Image Classification

ABSTRACT
The purpose of this study was to compare unsupervised image classifications of two images of Acme, WA. The images are from the World View-2 sensor; one has 8 bands while the other has 4. Each image was run through an image segmentation and an unsupervised classification, producing 33 and 39 spectral classes, respectively, which were then assigned to land cover types by visual interpretation. The final accuracy of the 8-band image was 49.63% and the final accuracy of the 4-band image was 63.21%. The results show a clear difference in classification accuracy based on the number of bands used, and suggest that using fewer bands may lead to higher accuracy.

METHODS
The initial image was taken by the World View-2 satellite on September 29th, 2010 and depicts the southern part of the South Fork of the Nooksack River and the town of Acme in Washington. The image is a TIFF file covering an area of about 4 km by 4 km at a resolution of 2 m. For the 8-band image I used all bands, while for the 4-band image I used only the blue, green, red, and near-infrared bands (bands 2, 3, 5, and 7).
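As a hedged illustration of the band-subsetting step (the lab itself used ENVI, which is not shown here), the 4-band image could be extracted from the 8-band GeoTIFF with rasterio; the filenames are hypothetical:

```python
# Sketch: extract the blue, green, red, and NIR1 bands (2, 3, 5, 7)
# from the 8-band World View-2 GeoTIFF. Filenames are hypothetical.
import rasterio

with rasterio.open("acme_wv2_8band.tif") as src:
    # rasterio band indexes are 1-based
    subset = src.read([2, 3, 5, 7])   # shape: (4, rows, cols)
    profile = src.profile
    profile.update(count=4)

with rasterio.open("acme_wv2_4band.tif", "w", **profile) as dst:
    dst.write(subset)
```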

The methods for the analysis followed Wallin (2016). Part one of the analysis consisted of preparing the TIFF image and creating the image segmentation. The TIFF had to be projected so that it could be read correctly, and then I performed two image segmentations, each with a minimum mapping unit of 25 pixels and the similarity set to 10: one using all 8 bands and another using just the 4 bands. I then ran an unsupervised image classification with 5 iterations on the segmentation of each image.
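The segmentation and classification were done in ENVI, whose algorithms are not reproduced here; the sketch below shows an analogous open-source segment-then-cluster pipeline, with SLIC superpixels standing in for ENVI's segmentation and k-means standing in for its unsupervised classifier. The parameter values are illustrative, not ENVI's settings:

```python
# Sketch of an analogous segment-then-cluster pipeline (not ENVI's
# algorithms). SLIC stands in for the segmentation; k-means stands in
# for the unsupervised classifier.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def classify(image, n_classes, n_iter=5):
    """image: (rows, cols, bands) float array; returns a class map."""
    # Segment into superpixels; n_segments/compactness are illustrative
    # stand-ins for ENVI's minimum-mapping-unit and similarity settings
    segments = slic(image, n_segments=2000, compactness=10.0,
                    channel_axis=-1, start_label=0)
    # Mean spectrum of each segment
    means = np.array([image[segments == s].mean(axis=0)
                      for s in range(segments.max() + 1)])
    # Cluster segment means into spectral classes; 5 iterations as in
    # the lab workflow
    km = KMeans(n_clusters=n_classes, max_iter=n_iter, n_init=10)
    labels = km.fit_predict(means)
    return labels[segments]   # map each pixel to its spectral class
```

Under these assumptions, the 8-band run would correspond to something like classify(image8, 33) and the 4-band run to classify(image4, 39).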



I edited both image headers so that the classification could be completed, and changed the projection of both to UTM zone 10N. I then assigned each spectral class in each image to an information class based on visual interpretation. For the 8-band image I used 9 information classes: farm buildings, road, crops, pasture, recent clear cuts, deciduous forest, conifer forest, water, and rock/soil/gravel. For the 4-band image I used only 7 information classes, excluding road and recent clear cuts from the list above. After assigning every spectral class to an information class, I used the test data from the reference dataset provided by Wallin (2016). From this I produced accuracy assessments for both the 8 and 4-band images. Finally, I combined the two forest information classes to try to increase the overall accuracy.
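The accuracy assessment itself is arithmetic on a confusion matrix of reference versus predicted labels at the test points. A minimal sketch, assuming integer class codes 0..n-1 (the function and variable names are my own, not from the lab handout):

```python
# Sketch: user's, producer's, and overall accuracy from test points.
import numpy as np
from sklearn.metrics import confusion_matrix

def accuracy_report(reference, predicted, class_names):
    # Rows are reference classes, columns are predicted classes
    cm = confusion_matrix(reference, predicted,
                          labels=list(range(len(class_names))))
    overall = np.trace(cm) / cm.sum()
    # User's accuracy: correct / all pixels mapped to the class
    # (its complement is commission error)
    users = np.diag(cm) / cm.sum(axis=0)
    # Producer's accuracy: correct / all reference pixels of the class
    # (its complement is omission error)
    producers = np.diag(cm) / cm.sum(axis=1)
    for name, u, p in zip(class_names, users, producers):
        print(f"{name}: user {u:.1%}, producer {p:.1%}")
    print(f"Overall accuracy: {overall:.2%}")
```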

RESULTS

The 8-band image contained 33 spectral classes, which were combined into 9 information classes (Figure 1). The initial accuracy from the classification of the 8-band image was only 49.63% (Table 1). When the forest classes were consolidated, the accuracy went down to 43.1%. The lowest producer accuracies in the analysis were for farm buildings and clear-cuts (0% for both). The highest accuracy was for pasture (100%), and the accuracy for crops was high as well (93.75%) (Table 1). After consolidating the deciduous and conifer forest classes, the combined forest accuracy was 35%, higher than the deciduous accuracy (10.52%) but lower than the conifer accuracy (41.93%) (Table 1).

Figure 1- This image shows the unsupervised classification of the 8-band image. It contained 9
information classes and combined 33 spectral classes.

Figure 2- This image shows a true-color depiction of the study area. The South Fork of the Nooksack River can be seen running from the left of the image to the bottom, and the town of Acme is on the left of the river.

Table 1- This table shows the 9 information classes and the user accuracy and producer accuracy for each in the 8-band image. User accuracy is the probability that a pixel classified on the map actually represents that class on the ground (its complement is commission error), while producer accuracy is the probability that a reference pixel of a class was correctly classified (its complement is omission error) (Campbell & Wynne, 2011). The overall accuracy was 49.63%.
Information Class User Accuracy (%) Producer Accuracy (%)
Farm Building 0 0
Road 11.1 11
Crops 40.5 93.75
Pasture 80 100
Clear-cuts 0 0
Deciduous Forest 66.66 10.52
Conifer Forest 65 41.93
Water 70 58.33
Rock/Soil/Gravel 15.62 23.80

Overall Accuracy = 49.63%

The 4-band image had 39 spectral classes that were combined into 7 information classes (Figure 3). The final accuracy from the classification of the 4-band image was 63.21% (Table 2). When the two forest categories were consolidated, this accuracy improved to 87.6%. Pasture had the highest producer accuracy (100%) and conifer forest had the lowest (16.66%). The rest of the categories all had fairly good accuracies, over 60% in each case (Table 2).

Figure 3- This image shows the unsupervised classification of the 4-band image. It contained 7 information classes and combined 39 spectral classes.

Table 2- This table shows the 7 information classes and the user accuracy and producer accuracy for each in the 4-band image. User accuracy is the probability that a pixel classified on the map actually represents that class on the ground (its complement is commission error), while producer accuracy is the probability that a reference pixel of a class was correctly classified (its complement is omission error) (Campbell & Wynne, 2011). The overall accuracy was 63.21%.
Information Class User Accuracy (%) Producer Accuracy (%)
Farm Building 100 62.5
Crops 100 68.75
Pasture 88.8 100
Deciduous Forest 35.41 89.47
Conifer Forest 71.42 16.66
Water 100 77.77
Rock/Soil/Gravel 66.6 85.71

Overall Accuracy = 63.21%

DISCUSSION
When comparing the two images, it is clear that the 4-band image had a much higher classification accuracy. One explanation could be that the 8-band image had fewer spectral classes and thus more room for misclassification. Another is that for the 4-band image I assigned the spectral classes to fewer information classes, leaving less room for misclassification. I used fewer classes because I realized, after the 8-band image had already been classified, that most of the roads in the area were probably dirt or gravel. I also dropped the clear-cut class after seeing in the 8-band results that clear cuts were being over-classified in areas where they were not actually present. That said, many areas, such as farm buildings and pasture, were misclassified because their spectral signatures were similar to those of other cover types such as soil and water. The low accuracies for these could also come from a low number of ground truth data points. Much of the misclassification in the 8-band image came from crops, where large amounts of forest were classified as crops (Figure 1). If I were to repeat this study, I would go back and classify the 8-band image again using the exact same number of information classes, applying the knowledge gained from the first classification, which in this study only benefited the 4-band image.

I thought that combining the two forest categories would help with any misclassification between them, because I personally find it very hard to visually distinguish the two in an air photo. Yet the outcome was not what I expected: combining the tree classes in the 8-band image actually lowered the accuracy, while in the 4-band image it increased the accuracy by over 20 percentage points.
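Consolidating the two forest categories amounts to remapping their label codes in both the reference and predicted arrays before rerunning the accuracy assessment. A small sketch; the label codes are hypothetical:

```python
# Sketch: merge conifer forest into the deciduous code before
# recomputing accuracy. Codes 5 and 6 are hypothetical examples.
import numpy as np

DECIDUOUS, CONIFER = 5, 6

def merge_forest(labels):
    merged = np.asarray(labels).copy()
    merged[merged == CONIFER] = DECIDUOUS   # single "forest" class
    return merged
```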

Overall, the classification was subjective: the original spectral classes were assigned based on visual interpretation, which leaves room for a large amount of human error. A bias also exists for people who have prior knowledge of the study area and landscape. The image segmentation was subjective as well, because the values I used for the similarity setting and minimum mapping unit were somewhat arbitrary. If I were to repeat the study, I would spend time trying different values and seeing how they affected the classification and the number of spectral classes produced.
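That sensitivity check could be scripted: sweep a few segmentation settings and count the segments each produces, using the SLIC stand-in from the methods sketch. The parameter values below are arbitrary examples, and image is assumed to be the array from that sketch:

```python
# Sketch: sweep illustrative segmentation settings and report how many
# segments each combination produces. Values are arbitrary examples;
# `image` is the (rows, cols, bands) array from the earlier sketch.
from skimage.segmentation import slic

for n_segments in (500, 1000, 2000, 4000):
    for compactness in (1.0, 10.0, 50.0):
        segs = slic(image, n_segments=n_segments, compactness=compactness,
                    channel_axis=-1, start_label=0)
        print(f"n_segments={n_segments}, compactness={compactness}: "
              f"{segs.max() + 1} segments")
```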

The two final images showed a clear difference between using images with different numbers of bands, and showed that the image with fewer bands produced higher accuracy in classifying land cover types. This contributes to a better understanding of the strengths and weaknesses of different sensor types in remote sensing work: World View-2 (8 bands) versus IKONOS-type sensors (4 bands).

WORKS CITED
Campbell, J., & Wynne, R. 2011. Introduction to Remote Sensing. The Guilford Press.

Wallin, D. 2016. Lab 3: Unsupervised Classification with ENVI. Retrieved from http://faculty.wwu.edu/wallin/envr442/ENVI/442_unsup_class_ENVI.html
