  
===== Multi-Temporal Analysis =====

==== Automatic Change Detection in Urban Areas Under a Scale-Space, Object-Oriented Classification Framework ====
//GEOBIA, 2010.//

//Doxani, G., Karantzalos, K., Tsakiri-Strati, M.//

The work presents an object-based classification framework to detect building construction using remote sensing imagery acquired at two different dates. The method applies Multivariate Alteration Detection (MAD), a correlation analysis between two groups of variables (the images). The resulting MAD components are the differences of the corresponding input images, which depict the same area and were acquired at different dates.

MAD components with values higher than a certain threshold (two times the standard deviation) correspond to changing pixels. Such pixels are then analysed further to infer changes regarding building construction. The authors tested the approach using single pixels, and also using the spectral information obtained by segmentation at different scales.
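
Below is a minimal sketch of the thresholding step only, not necessarily how the authors implement it. It assumes a MAD component is already available as an array and applies the two-standard-deviations rule mentioned above; the toy data is synthetic.

<code python>
import numpy as np

def changed_pixels(mad_component, k=2.0):
    """Flag pixels whose deviation from the component mean exceeds
    k standard deviations (k = 2 follows the rule described above)."""
    mu = mad_component.mean()
    sigma = mad_component.std()
    return np.abs(mad_component - mu) > k * sigma

# toy example: a 100x100 "MAD component" with a simulated changed patch
mad = np.random.normal(0.0, 1.0, (100, 100))
mad[40:50, 40:50] += 8.0
mask = changed_pixels(mad)
print(mask.sum(), "pixels flagged as change")
</code>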

==== Trajectory of Dynamic Clusters in Image Time-Series ====
//MultiTemp, 2003.//

//Heas, P., Datcu, M., Giros, A.//

This work performs two types of classification using image time-series. The first is called time-localized clustering, where each time-window in the series has a corresponding classification; such a classification neglects the information about causalities between different times. The second is called multi-temporal classification, where information from all images is used to perform the classification.

The authors argue that it is possible to create a model capable of measuring the distance between the multi-temporal clusters and the time-localized clusters, and with this model it is possible to trace the cluster evolutions. They calculate the cross entropy between each multi-temporal cluster and all the time-localized ones; the maximum entropy value is used to match both types of clusters.
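
As a rough illustration only (not the authors' model), the sketch below treats each cluster as a discrete distribution over labels, computes the cross entropy between a multi-temporal cluster and every time-localized one, and keeps the match with the maximum value, per the summary above. Cluster names and distributions are made up.

<code python>
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_i p_i log q_i for two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return -np.sum(p * np.log(q + eps))

def match_clusters(multi_temporal, time_localized):
    """For each multi-temporal cluster, keep the time-localized cluster
    with the maximum cross entropy (the criterion stated above)."""
    matches = {}
    for name_m, p in multi_temporal.items():
        scores = {name_t: cross_entropy(p, q)
                  for name_t, q in time_localized.items()}
        matches[name_m] = max(scores, key=scores.get)
    return matches

# toy distributions over 4 hypothetical land-cover labels
mt = {"MT-1": [0.7, 0.2, 0.05, 0.05], "MT-2": [0.1, 0.1, 0.4, 0.4]}
tl = {"TL-a": [0.6, 0.3, 0.05, 0.05], "TL-b": [0.05, 0.1, 0.45, 0.4]}
print(match_clusters(mt, tl))
</code>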

==== Modeling Cyclic Change ====
//Hornsby, K., Egenhofer, M.J., Hayes, P.//
  
This paper describes a formal way to represent cyclic events. Certain changes in the environment occur cyclically, and current information systems do not deal with them. The authors propose a representation of 16 different relations between two events that occur in cycles, representing them as arcs on a circle.
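
As a toy illustration of the arc idea (the paper's 16 relations are not reproduced here), a cyclic event can be stored as an arc on a circle and one possible relation, overlap, checked like this:

<code python>
def as_arc(start_deg, end_deg):
    """A cyclic event as an arc on a circle, given in degrees [0, 360)."""
    return (start_deg % 360.0, end_deg % 360.0)

def covers(arc, angle):
    """True if the angle lies on the arc, handling wrap-around past 0/360."""
    start, end = arc
    if start <= end:
        return start <= angle <= end
    return angle >= start or angle <= end   # arc crosses the 0-degree mark

def overlaps(a, b, step=0.5):
    """One possible relation between two cyclic events: the arcs share an angle."""
    angle = 0.0
    while angle < 360.0:
        if covers(a, angle) and covers(b, angle):
            return True
        angle += step
    return False

wet_season = as_arc(300, 60)           # an arc that wraps past the 0-degree mark
harvest = as_arc(30, 120)
print(overlaps(wet_season, harvest))   # True: the arcs share angles 30-60
</code>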
  
According to the article, the most general model of time in temporal logic represents time as an arbitrary, partially-ordered set. In the **linear model**, there is a total order on time, resulting in the linear advancement of time from the past, through the present, and to the future. The **branching model** describes time as being linear from the past to the present, and then dividing into several time-lines, each representing a potential sequence of events. However, neither model treats the fact that certain events or phenomena may be recurring.
  
Article definitions on temporal data models:
  * time points: typically describe a precise time when an event occurred;
  * time intervals: useful constructs when precise information on time is unavailable. Much of our temporal knowledge is relative, and methods are needed that allow for significant imprecision in reasoning.
  
==== Fast subsequence matching in time-series databases ====
//Faloutsos, C., Ranganathan, M., Manolopoulos, Y.//
  
This work investigates a technique to match subsequences in time-series databases. A time-series is described as a 1-dimensional sequence of values. To avoid a brute force search over every sequence for the subsequences, they propose applying the Discrete Fourier Transform (DFT) to the data and keeping only the first few coefficients, called features. With these features, rectangles in the feature space are created to join pieces of the values into a smaller representation.
  
These rectangles are created using subsequences of the entire database, with the same length as the query sequence. The search for matches is then performed in this new space, reducing processing time.
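
A minimal sketch of the feature step under simplifying assumptions: the rectangles stored in an R*-tree in the paper are replaced here by a plain scan over the feature points, and the window size, number of coefficients and tolerance eps are arbitrary.

<code python>
import numpy as np

def dft_features(window, n_coeffs=2):
    """Map a subsequence to a few leading DFT coefficients (real/imag parts)."""
    spectrum = np.fft.fft(window) / np.sqrt(len(window))
    coeffs = spectrum[:n_coeffs]
    return np.concatenate([coeffs.real, coeffs.imag])

def index_subsequences(series, window, n_coeffs=2):
    """One feature point for every sliding window of the series."""
    return np.array([dft_features(series[i:i + window], n_coeffs)
                     for i in range(len(series) - window + 1)])

def candidate_matches(series, query, eps, n_coeffs=2):
    """Offsets whose feature-space distance is within eps. This is a superset
    of the true matches, since the truncated DFT never increases distances."""
    feats = index_subsequences(series, len(query), n_coeffs)
    qf = dft_features(query, n_coeffs)
    dists = np.linalg.norm(feats - qf, axis=1)
    return np.where(dists <= eps)[0]

t = np.linspace(0, 20, 400)
series = np.sin(t) + 0.05 * np.random.randn(400)
query = np.sin(t[:50])
print(candidate_matches(series, query, eps=0.5))
</code>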
  
===== Change Detection =====
===== Data Mining =====
  
==== Decision tree regression for soft classification of remote sensing data ====

//Remote Sensing of Environment, 2005.//

//Xu, M., Watanachaturaporn, P., Varshney, P., Arora, M.//
  
This work proposes a decision tree with soft thresholds, which decomposes each pixel into its class constituents in the form of class proportions. The outputs of the soft classification provide a set of fraction images that represent the proportion of each class within each pixel. The use of decision trees is justified because remote sensing data often do not follow a Gaussian distribution, an assumption made by several classifiers.
  
To build the class proportions, the authors suggest creating several decision trees, called a forest; each class proportion derives one soft decision tree in the forest. Concluding, the authors argue that the tree construction is very fast and is, therefore, suitable for classifying large remote sensing images.
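
A rough sketch of the idea, with standard regression trees from scikit-learn standing in for the paper's soft-threshold trees: one tree per class fraction, trained on pixels with known proportions. All data below is synthetic.

<code python>
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_fraction_forest(X, fractions):
    """One regression tree per class; each tree predicts that class's
    proportion inside a pixel (a stand-in for the paper's soft trees)."""
    return [DecisionTreeRegressor(max_depth=5).fit(X, fractions[:, c])
            for c in range(fractions.shape[1])]

def predict_fraction_images(forest, X):
    """Stack per-class predictions and renormalize so each pixel sums to 1."""
    preds = np.column_stack([tree.predict(X) for tree in forest])
    preds = np.clip(preds, 0.0, None)
    return preds / preds.sum(axis=1, keepdims=True)

# toy data: 200 training "pixels", 4 spectral bands, 3 class fractions
rng = np.random.default_rng(0)
X = rng.random((200, 4))
fractions = rng.dirichlet(np.ones(3), size=200)   # each row sums to 1
forest = fit_fraction_forest(X, fractions)
print(predict_fraction_images(forest, X[:5]))
</code>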
  
==== Maximizing Land Cover Classification Accuracies Produced by Decision Trees at Continental to Global Scales ====

//IEEE TGRS, VOL. 37, NO. 2, MARCH 1999//

//Friedl, M., Brodley, C., Strahler, A.//
  
The authors evaluated the boosting technique with a decision tree algorithm for land cover classification. Boosting estimates multiple classifications iteratively: at each iteration, a weight is assigned to each training observation, and observations misclassified in the previous iteration receive a larger weight in the next iteration, "forcing" the classification algorithm to concentrate on the observations that are more difficult to classify.
  
They concluded that boosting is a useful technique and should be used for land cover classification problems using remotely sensed data at continental to global scales. Besides, the use of geographic position provides substantial predictive power to the decision tree classification algorithms; they showed this by classifying fairly coarse classes of vegetation at continental and global scales. However, geographic position should only be used as a secondary input feature, to discriminate between land cover classes that are spectrally similar but geographically distinct.
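
A compact sketch of the re-weighting loop described above, in AdaBoost style with shallow scikit-learn trees standing in for the C5.0 base classifier used in the paper, and only two synthetic classes:

<code python>
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost(X, y, n_rounds=10):
    """Iterative re-weighting: misclassified samples gain weight each round."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # uniform weights at the start
    learners, alphas = [], []
    for _ in range(n_rounds):
        tree = DecisionTreeClassifier(max_depth=2).fit(X, y, sample_weight=w)
        miss = tree.predict(X) != y
        err = np.sum(w[miss]) / np.sum(w)
        if err == 0.0:                         # perfect learner: keep it and stop
            learners.append(tree)
            alphas.append(1.0)
            break
        if err >= 0.5:                         # no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))
        w = w / w.sum()                        # misclassified samples now weigh more
        learners.append(tree)
        alphas.append(alpha)
    return learners, alphas

rng = np.random.default_rng(1)
X = rng.random((300, 5))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)      # two toy "land cover" classes
learners, alphas = boost(X, y)
print(len(learners), "boosted trees")
</code>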
  
  
  
Another suggestion is to use "feature extraction", which means creating new features by applying combinations on already existent features. However, "although extracted features have higher discrimination power, their physical meanings were hard to explain, which lowered the interpretability of classification trees".
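
For illustration only, a few such combined features (a normalized difference, a band ratio and a mean brightness); the band names are arbitrary and not the specific combinations discussed in the paper.

<code python>
import numpy as np

def extracted_features(red, nir, green):
    """Derive new features as combinations of existing bands."""
    eps = 1e-6
    ndvi = (nir - red) / (nir + red + eps)        # normalized difference
    ratio = nir / (green + eps)                   # simple band ratio
    brightness = (red + nir + green) / 3.0        # average intensity
    return np.stack([ndvi, ratio, brightness], axis=-1)

bands = np.random.rand(3, 64, 64)                 # toy red, nir, green images
features = extracted_features(*bands)
print(features.shape)                             # (64, 64, 3)
</code>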

===== Image Processing =====

==== High spatial resolution spectral mixture analysis of urban reflectance ====
//Remote Sensing of Environment, 2003.//

//Small, C.//

The author explores the use of IKONOS imagery for intra-urban classification. Medium spatial resolution sensors, such as TM and SPOT, present an abundance of mixed pixels. Mixed pixels are problematic for statistical classification methods because most algorithms assume spectral homogeneity at the pixel scale within a particular class of land cover, whereas urban areas provide examples of spectrally diverse, scale-dependent thematic classes containing large numbers of pixels that are spectrally indistinguishable from other land cover classes. The work proposes the use of spatial autocorrelation to quantify the characteristic scale lengths of urban reflectance within and among different cities. The question of the paper is whether a simple three-component linear mixture model can consistently characterize the spectral mixing space of urban areas. Concluding, it is argued that spectral mixture analysis is preferable to "hard classification" for many applications because it accommodates the preponderance of mixed pixels observed in almost all multispectral imagery.
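
A minimal sketch of a three-endmember linear mixture model solved per pixel with non-negative least squares, assuming the endmember spectra are already known; the data and the endmember labels below are invented for illustration.

<code python>
import numpy as np
from scipy.optimize import nnls

def unmix(pixels, endmembers):
    """Per-pixel non-negative least squares for a linear mixture model.
    pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands)."""
    E = endmembers.T                              # (n_bands, n_endmembers)
    fractions = np.array([nnls(E, p)[0] for p in pixels])
    return fractions / fractions.sum(axis=1, keepdims=True)

# toy example: three endmembers (e.g. vegetation, substrate, dark surface)
rng = np.random.default_rng(2)
endmembers = rng.random((3, 4))                   # 3 endmembers, 4 spectral bands
true_frac = rng.dirichlet(np.ones(3), size=10)
pixels = true_frac @ endmembers + 0.01 * rng.standard_normal((10, 4))
print(np.round(unmix(pixels, endmembers), 2))
</code>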