COMPARISON OF PIXEL AND OBJECT BASED CLASSIFICATION METHODS ON RAPIDEYE SATELLITE IMAGE

Turkish Journal of Forest Science

ABSTRACT: The aim of this study is to evaluate the classification performance of land use/land cover (LULC) classification methods by comparing the results of pixel- and object-based classification approaches on a RapidEye satellite image. Pixel-based classification was carried out in ERDAS Imagine 10.4 using the Maximum Likelihood supervised approach, whilst object-based classification was performed in eCognition Developer 64 using the nearest neighbour supervised classification method. A LULC map of eight classes was created with each method. While the accuracy of individual thematic LULC classes varied in both methods, the overall accuracy and kappa values of the LULC maps were 58.39% and 0.45 for the pixel-based method and 89.58% and 0.86 for the object-based method, respectively. Accuracy assessments and comparative results showed that object-based classification gives better results for thematic LULC classes as well as for the overall accuracy of LULC maps. Even though the pixel-based classification method mapped many thematic classes well, there were misclassifications between natural/semi-natural LULC classes. These results can be attributed to parameters set by users, such as the number of training areas. However, the capacity of the object-based classification method to include auxiliary data (e.g. DEM, NDVI) increases the accuracy of LULC maps derived from high-resolution satellite imagery.


INTRODUCTION
Up-to-date and accurate geospatial information on current and past natural resources is a necessity in landscape planning and management processes. In this context, land use/land cover (LULC) provides valuable information for resource managers and landscape planners who are concerned with the characteristics and change of landscapes. The development of remote sensing (RS) and geographic information systems (GIS) has provided a useful mechanism to delineate, assess, and monitor LULC (McRoberts, & Tomppo, 2007). In recent years, there has been increasing interest in detecting change in LULC since it is directly linked to the complex and dynamic processes of natural and ecological systems (Atak Kesgin, 2020). The classification and change detection of LULC is only possible with spatiotemporal data kept at regular intervals (Turner et al., 1990).
There are several methods to extract information on LULC, which have evolved from the basic visual interpretation of remotely sensed data into sophisticated techniques. Among these, pixel- and object-based classification methods are the most common for LULC classification. Broadly speaking, pixel-based classification algorithms analyse the spectral properties of every pixel within a satellite image, but they do not consider the spatial/contextual information of neighbouring pixels (Richards, 1999; Weih, & Riggan, 2010). This can result in a salt-and-pepper appearance caused by confused pixels in the classification results, particularly with high-resolution satellites (Gao, & Mas, 2008; Lechner et al., 2012). Object-based classification algorithms, on the other hand, take both the spectral and spatial/contextual properties of pixels (such as form and texture) into account, and use a segmentation process to group neighbouring pixels into meaningful areas (segments) (Blundell, & Opitz, 2006; Hay, & Castilla, 2006). The aim of this research is to compare the results of pixel- and object-based LULC classification methods on a high-resolution RapidEye satellite image. For this purpose, pixel-based classification was performed in ERDAS Imagine 10.4 using a supervised approach, while object-based classification was carried out in eCognition Developer 64. A LULC map of eight classes, namely artificial surfaces, agricultural areas, forests, maquis, pastures, roads, rivers and artificial water surfaces, was created with both classification methods, and accuracy assessments were completed in ArcGIS 10.5.1. Consequently, the accuracy of the thematic LULC classes and the overall accuracy and kappa values of the LULC maps for the pixel- and object-based classification methods were obtained and compared.

Data Source
The baseline data source for the LULC mapping process was a RapidEye satellite image dated 16 May 2017. Its sensor captures five multispectral bands: blue (440-510 nm), green (520-590 nm), red (600-700 nm), Red-Edge (690-730 nm) and Near-Infrared (760-850 nm), with a ground sampling distance of 6.5 m, enhanced to 5 m after additional image processing. In addition, a Digital Elevation Model (DEM, 30 m resolution), a soil map (major soil groups, MSG), the texture of each band, the Soil-Adjusted Vegetation Index (SAVI) and the Normalized Difference Vegetation Index (NDVI) were used as auxiliary data. The texture layers represent the texture properties of each band of the RapidEye image. The SAVI index was included in the layers to account for soil reflections (by correcting the influence of soil brightness). The NDVI-Re index was included to determine vegetation density; the Red-Edge band was used since it is more sensitive to the biophysical properties of plants (chlorophyll content, nitrogen content, leaf area index, etc.). The MSG data was added to the layers to capture the LULC classes in the study area according to their soil properties. While only the RapidEye satellite image was used for pixel-based classification, a set of layers created from these datasets was included in the object-based classification. The pixel-based classification was conducted in ERDAS Imagine 10.4, whereas eCognition Developer 64 was used for the object-based classification of the RapidEye image with the help of the auxiliary data.
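The two vegetation indices named above follow standard formulas: NDVI = (NIR - Red) / (NIR + Red), and SAVI multiplies the NDVI denominator-adjusted ratio by (1 + L), where L is a soil-brightness correction factor (L = 0.5 is the common default; the specific value used in the study is not stated). A minimal sketch of how these layers could be derived from the red and near-infrared bands:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L corrects for soil brightness."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return ((nir - red) / (nir + red + L)) * (1.0 + L)

# Toy reflectance values (0-1 scale) for a vegetated and a bare-soil pixel.
nir = np.array([0.55, 0.30])
red = np.array([0.08, 0.25])
print(ndvi(nir, red))  # high for dense vegetation, near zero for bare soil
print(savi(nir, red))
```

In practice the same functions would be applied band-wise to the whole RapidEye raster; an NDVI-Re variant would simply substitute the Red-Edge band for the red band.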

Methods
An overview of the methodology used for the evaluation of the LULC classification performance of each method is given below (Figure 1).

a. Object-based Classification Method
With the availability of satellite images of high to very high spatial resolution, object-based classification methods have come into use as an alternative to pixel-based methods for LULC mapping. In object-based classification, the main processing units are segments, in other words image objects (Benz et al., 2004). The object-based classification method is based on the assessment of image objects, which are formed by combining pixels with similar spectral values in a satellite image (Blaschke, 2010; Myint et al., 2011; Sabuncu, & Sunar, 2017). In this method, satellite images, and any auxiliary data if used, are segmented into objects according to characteristics such as structure, texture, size, colour and shape. The size of the objects in the image varies according to the parameters used in the analysis.
After the initial determination of potential Land Use/Land Cover (LULC) types in the study area, the application of object-based supervised classification consisted of three main stages: (a) the segmentation of layers (using the multiresolution segmentation and spectral difference segmentation algorithms), (b) the selection of sample areas from the segmented objects as representatives of the different LULC classes, and (c) supervised classification based on the standard nearest neighbour classification method (Figure 1). The object-based classification approach was applied using eCognition Developer 64 software.
Segmentation stage: Yan (2003) defines the segmentation stage as the partitioning of a satellite image into meaningful objects based on a particular criterion of homogeneity. The initial stage of an object-based classification approach is segmentation, in other words the grouping of neighbouring pixels into meaningful, homogeneous patches. In this study, we used the multi-resolution segmentation algorithm, in which the pixels that make up the satellite image and auxiliary data are merged into meaningful objects by a region-merging technique. For the multi-resolution segmentation algorithm, the scale and homogeneity (composition of heterogeneity criterion) parameters are crucial for obtaining meaningful and satisfactory segments: the larger the scale parameter, the larger the resulting objects. The shape and compactness homogeneity parameters control how pixels are combined and grouped, and take values between 0 and 1 (Pillai et al., 2005; Mathieu et al., 2007). After many trials, the layers in this study were segmented using a scale factor of 100, a shape parameter of 0.8, and a compactness value of 0.2. Afterwards, we also applied spectral difference segmentation, a merging algorithm, to the segmented layers. In spectral difference segmentation, neighbouring objects whose difference in spectral mean is below a given threshold (the maximum spectral difference) are combined into the resultant objects. In this study, a maximum spectral difference parameter of 200 was used. In addition, each layer was assigned a weight for the image segmentation process; the layer weights for both segmentation algorithms are given in Table 1.
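eCognition's implementation is proprietary, but the multiresolution segmentation criterion it is based on (Baatz & Schäpe's region merging) is published. The sketch below, an illustration rather than a reproduction of the software, shows how scale, shape and compactness interact: two neighbouring objects are merged when the increase in heterogeneity caused by the merge (a weighted mix of colour heterogeneity and shape heterogeneity) stays below the square of the scale parameter. The object dictionaries and their field names are hypothetical.

```python
import math

def spectral_h(obj, band_weights):
    """Colour heterogeneity: pixel count * weighted sum of per-band std devs."""
    return obj["n"] * sum(w * s for w, s in zip(band_weights, obj["std"]))

def shape_h(obj, compactness):
    """Shape heterogeneity: mix of compactness (perimeter / sqrt(area))
    and smoothness (perimeter / bounding-box perimeter)."""
    cmpct = obj["perim"] / math.sqrt(obj["n"])
    smooth = obj["perim"] / obj["bbox_perim"]
    return obj["n"] * (compactness * cmpct + (1 - compactness) * smooth)

def merge_cost(o1, o2, merged, band_weights, shape=0.8, compactness=0.2):
    """Fusion value f = (1-shape)*dh_colour + shape*dh_shape.
    A merge is accepted when f < scale**2."""
    dh_col = (spectral_h(merged, band_weights)
              - spectral_h(o1, band_weights) - spectral_h(o2, band_weights))
    dh_shp = (shape_h(merged, compactness)
              - shape_h(o1, compactness) - shape_h(o2, compactness))
    return (1 - shape) * dh_col + shape * dh_shp

# Two hypothetical 100-pixel objects merging into one 200-pixel object.
o1 = {"n": 100, "std": [4.0], "perim": 40, "bbox_perim": 40}
o2 = {"n": 100, "std": [5.0], "perim": 40, "bbox_perim": 40}
m  = {"n": 200, "std": [6.0], "perim": 60, "bbox_perim": 60}
f = merge_cost(o1, o2, m, band_weights=[1.0])
print(f, "-> merge" if f < 100 ** 2 else "-> keep separate")
```

With shape = 0.8 and compactness = 0.2 (the study's parameters), the shape term dominates the fusion value, favouring smooth, compact objects; the scale factor of 100 sets how much accumulated heterogeneity is tolerated before merging stops.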
Selection of sample/training areas: For supervised classification methods, users initially determine which LULC classes are present in the study area and are desired to be obtained from the satellite image. Accordingly, training areas are selected for each of these LULC classes. In this study, after an initial analysis of the reflectances and layer values in the segmented layers, the layer values and reflectances of each band for each LULC class to be mapped were determined. It is important to note that some classes cannot be distinguished by an average value alone, depending on site-specific characteristics (e.g. soil properties, proportion of vegetative closure, aspect). In the case of such LULC classes, additional sub-classes and training areas should be created to better distinguish the available LULC classes. In our case, sub-classes were created to prevent the confusion of some LULC types in the study area, such as plantations surrounded by agricultural areas, irrigated and non-irrigated agricultural parcels, and vineyards and orchards.
Nearest neighbour classifier/classification method: The nearest neighbour classifier is conceptually quite similar to the supervised classifiers used in pixel-based approaches. Initially, the user selects segments as sample/training areas. After the selection of training areas, thirty-eight main and sub-classes were created by applying the standard nearest neighbour operation to the training areas determined in the study area. After the classification process, some LULC classes were confused with each other as a result of the spectral and spatial properties of the objects. In such cases, the manual editing tool was used to increase the accuracy of the resultant LULC map.
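At its core, the standard nearest neighbour operation assigns each unclassified segment the label of the training segment closest to it in feature space. A minimal sketch, with hypothetical feature vectors (the exact features and distance weighting used by eCognition are configurable and not detailed in the text):

```python
import math

def nearest_neighbour(sample, training):
    """Assign the class of the training segment closest in feature space.

    `training` is a list of (feature_vector, label) pairs; the features
    might be e.g. mean band reflectances, NDVI, or mean elevation of a
    segment (illustrative choices, not the study's exact feature set).
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda t: dist(sample, t[0]))[1]

# Hypothetical segment features: (mean red, mean NIR, mean NDVI).
training = [
    ((0.05, 0.02, -0.10), "water"),
    ((0.08, 0.55, 0.75), "forest"),
    ((0.22, 0.35, 0.25), "pasture"),
]
print(nearest_neighbour((0.07, 0.50, 0.70), training))  # -> forest
```

Because classification happens per segment rather than per pixel, a single assignment labels a whole homogeneous patch at once, which is one reason object-based results avoid the salt-and-pepper effect.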

b. Pixel-based Classification Method
Pixel-based classification approaches have been widely used to obtain LULC maps from satellite images of low and/or medium spatial resolution using only the spectral information in individual pixels. Since remotely sensed images consist of rows and columns of pixels, LULC mapping has conventionally been performed on a per-pixel basis (Dean, & Smith, 2003). In pixel-based classification, satellite images are classified pixel by pixel using a set of user-defined rules that determine whether pixels with similar values can be grouped together to represent a LULC class (Elachi, & van Zyl, 2006). Pixel-based classification can employ either supervised or unsupervised classifiers. In this study, a supervised Maximum Likelihood (ML) classifier was used for the pixel-based classification. As in the object-based classification method, training areas were selected to represent each LULC class in the study area. For each LULC category, the training areas were selected from areas that were homogeneously distributed, had common ground cover characteristics, and had similar RGB values. The ML classifier calculates the likelihood that a given pixel belongs to a particular LULC class on the assumption that the statistics of each band of the satellite image are normally distributed.
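Under the normality assumption, the ML classifier fits a multivariate Gaussian (mean vector and covariance matrix) to each class's training pixels and assigns every pixel to the class with the highest log-likelihood. A minimal sketch with hypothetical two-band class statistics (the study's actual statistics are not reported):

```python
import numpy as np

def ml_classify(pixel, classes):
    """Assign `pixel` to the class with the highest Gaussian log-likelihood.

    `classes` maps a label to (mean vector, covariance matrix) estimated
    from that class's training pixels; band values are assumed to be
    normally distributed within each class.
    """
    best, best_ll = None, -np.inf
    for label, (mu, cov) in classes.items():
        d = pixel - mu
        # log N(x | mu, cov), dropping the constant term common to all classes
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)
        if ll > best_ll:
            best, best_ll = label, ll
    return best

# Hypothetical two-band (e.g. red, NIR) statistics from training areas.
classes = {
    "water":  (np.array([20.0, 10.0]), np.array([[4.0, 0.0], [0.0, 4.0]])),
    "forest": (np.array([35.0, 80.0]), np.array([[9.0, 2.0], [2.0, 9.0]])),
}
print(ml_classify(np.array([33.0, 75.0]), classes))  # -> forest
```

Because the decision depends only on each pixel's own band values, two adjacent pixels from the same field can still receive different labels, which is the source of the salt-and-pepper noise discussed above.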

c. Accuracy Assessments
The accuracy assessment is a crucial stage in LULC mapping. LULC maps obtained by different methods should include as many categories as possible with the highest information detail and reasonable accuracy. Therefore, after completing the classification process, the accuracy of the resulting LULC map should be determined. In this study, accuracy assessments (overall, user's and producer's accuracies) were conducted in ArcGIS 10.5.1 by comparing reference points with the categories of the generated LULC maps. Classification accuracy was evaluated over 800 random points distributed proportionally to the size of each thematic LULC class. The results were reported and evaluated on an error matrix representing the overall, user's and producer's accuracies (Tables 2 and 3). Here, the overall accuracy expresses how well the classification result agrees with the ground truth; the producer's accuracy indicates the probability that a LULC category is classified and mapped correctly; and the user's accuracy indicates the probability that a point in the LULC map actually represents the correct LULC category on the ground. Finally, the kappa (κ) coefficient gives information on the accuracy and reliability of the classification by comparing the classification result with the reference data (Cohen, 1960; Congalton, & Green, 2008). Considering all the elements of the error matrix, the kappa (κ) coefficient provides a robust assessment of the accuracy of the whole classification procedure.
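All four measures derive directly from the error matrix: overall accuracy is the diagonal sum over the total, producer's accuracy divides each diagonal cell by its reference (column) total, user's accuracy divides it by its classified (row) total, and kappa compares the observed agreement with the agreement expected by chance. A minimal sketch with a hypothetical three-class matrix (not the study's Tables 2-3):

```python
def accuracy_metrics(matrix):
    """Overall, producer's and user's accuracies plus kappa from an error
    matrix, with rows = classified categories, columns = reference data."""
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    row_sums = [sum(row) for row in matrix]          # classified totals
    col_sums = [sum(col) for col in zip(*matrix)]    # reference totals
    overall = diag / n
    producers = [matrix[i][i] / col_sums[i] for i in range(len(matrix))]
    users = [matrix[i][i] / row_sums[i] for i in range(len(matrix))]
    # Chance agreement from the marginal totals (Cohen, 1960)
    expected = sum(r * c for r, c in zip(row_sums, col_sums)) / n ** 2
    kappa = (overall - expected) / (1 - expected)
    return overall, producers, users, kappa

# Hypothetical 3-class error matrix over 100 reference points.
matrix = [[40, 3, 2],
          [4, 25, 1],
          [1, 2, 22]]
overall, prod, user, kappa = accuracy_metrics(matrix)
print(round(overall, 3), round(kappa, 3))  # -> 0.87 0.798
```

The same computation over the study's 800-point matrices yields the reported 58.39%/0.45 and 89.58%/0.86 pairs.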

Land Use/Land Cover Maps
LULC maps of eight thematic classes, including rivers, artificial water surfaces, agricultural areas, roads, forests, artificial surfaces, pastures, and maquis, were created with both classification methods (Figure 2).

Figure 2. LULC Maps of Pixel-based Classification (left) and Object-based Classification (right)
As can be seen in Figure 2, the object-based classification method performed better in creating much more homogeneous objects in the LULC map by preventing the salt-and-pepper effect (Oruc et al., 2004; Villarreal, 2016).

Accuracy Assessments
As mentioned earlier, the classification accuracy of remotely sensed satellite images refers to the degree of agreement between the selected reference information and the classified data. The accuracy of the individual thematic LULC classes varied in both classification methods. The overall accuracy and kappa values of the object-based classification method were much higher than those of the pixel-based method (Sertel, & Alganci, 2016). Accuracy assessments and comparative evaluation of the findings showed that object-based classification yielded far better results for each thematic LULC class as well as for the overall accuracy of the LULC maps (Tables 2 and 3). While the accuracy of the thematic LULC classes varied in both methods, the overall accuracy and kappa values of the LULC maps for the pixel- and object-based classification methods were 58.39% and 0.45 versus 89.58% and 0.86, respectively. In the pixel-based classification, the highest user's and producer's accuracies were found for the thematic classes of rivers, artificial water surfaces and forests, while the lowest values were found for maquis, artificial surfaces, agricultural areas and pastures. In the object-based classification, the classes with the lowest user's and producer's accuracies were pastures and maquis. Even though artificial surfaces and forests showed lower producer's accuracy than the other LULC classes, the accuracy of all other classes was over 90%. The object-based classification method also showed considerably less variation in class-level accuracies than the pixel-based method. Therefore, the object-based classification method meets the requirement of achieving similar accuracies across different LULC classes in order to obtain highly accurate LULC maps (Anderson et al., 1976).
According to the results of the pixel-based classification, it is difficult to distinguish maquis from agricultural areas since olive groves and maquis have similar average reflectance values in the RapidEye satellite image. However, these classes gave higher accuracy results in the object-based classification with the help of auxiliary data (in particular DEM, NDVI and SAVI) (Langanke et al., 2004; Moosavi et al., 2014; Xiaoliang et al., 2016). For example, the pastures and agricultural areas classes, which have similar reflectance values, were confused in the pixel-based classification, whereas the object-based classification performed very well for agricultural areas with the help of the DEM and the soil map (major soil groups).
In the pixel-based classification method, the low accuracy rates can be associated with user-defined parameters such as the selection and number of training areas. The accuracy of classification can thus be increased by selecting more training areas that are capable of discriminating between different thematic LULC classes. At the same time, since the method takes only reflectance values into consideration, accuracy can also be improved by using different methods, such as decision trees and support vector machines, together with auxiliary data. The object-based classification method, on the other hand, increases classification accuracy through its capability to include auxiliary data (e.g. DEM, NDVI, SAVI) that help to identify different LULC classes in the landscape, and has the potential to give better results with high-resolution satellite imagery, especially in the classification of natural and semi-natural areas.

CONCLUSION
The aim of this study was to evaluate the classification performance of pixel- and object-based classification methods on a RapidEye satellite image. The pixel-based classification was performed in ERDAS Imagine 10.4 using the Maximum Likelihood supervised approach, while the object-based classification was carried out in eCognition Developer 64 using the nearest neighbour supervised classification method. In general, it is difficult to obtain highly accurate LULC maps in heterogeneous landscapes. The object-based classification method is a promising approach in this sense, provided that the most appropriate segmentation parameters are applied. Comparison of the classification results showed that, when appropriate parameters are used together with auxiliary data, it is possible to obtain highly accurate LULC maps with the object-based classification method. The comparison of the accuracies of the object-based and pixel-based classification approaches revealed that the object-based method produced the best results with high-resolution images such as RapidEye satellite imagery.
In the meantime, despite the fact that the accuracy of the object-based classification method is higher with high-resolution images such as RapidEye imagery, it is also known that the pixel-based classification method gives good results with low-resolution images. The pixel-based classification method has clear advantages when classifying some of the natural and semi-natural LULC classes. Therefore, hybrid classification techniques (combining supervised and unsupervised approaches) can give more satisfactory results in pixel-based classification. Finally, it is important to note that the success of any classification method highly depends on detailed knowledge of the satellite data and the characteristics of the landscape, as well as the expertise of the user.

AUTHOR CONTRIBUTIONS
Ebru Ersoy Tonyaloğlu: Designing the research, writing and reviewing the manuscript and supervising. Nurdan Erdoğan: Designing the research and reviewing the manuscript. Betül Çavdar: Obtaining the materials for the analysis and conducting the analysis. Kübra Kurtşan: Obtaining the materials for the analysis and conducting the analysis. Engin Nurlu: Designing the research and reviewing the manuscript and supervising.