hiperespectral:sae-cd [2018/01/16 17:53] – javier.lopez.fandino
==== Input datasets ====

//All the images are available in Matlab (.mat) format, among others. For further information see the readme in the files.//
  * [[https://
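Once a dataset is downloaded, the .mat cubes can be read with SciPy. A minimal sketch, assuming a hypothetical file name and variable key (the readme shipped with each download gives the real ones; the `savemat` call below only fabricates a stand-in file so the snippet is self-contained):

```python
import numpy as np
from scipy.io import loadmat, savemat

# Stand-in for a downloaded dataset: file name and the "image" key
# are placeholders, not the actual names used by the datasets.
savemat("example.mat", {"image": np.zeros((10, 10, 242))})

data = loadmat("example.mat")   # dict of variables stored in the file
cube = data["image"]            # hyperspectral cube: rows x cols x bands
print(cube.shape)               # (10, 10, 242)
```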
==== Experimental setup ====

  * Codes were run in Ubuntu 14.04.
  * Caffe framework 1.0.0-rc3 to perform the feature extraction by means of SAE.
  * The SAE is configured to obtain 12 features.
  * Two consecutive layers reduce the dimensionality of the data from 242 to 100 and from 100 to 12 features, respectively.
  * The SAE is trained with 20% of the available pixels.
  * The back-propagation process uses Stochastic Gradient Descent (SGD) and the 'inv' learning rate policy.
  * A batch of 64 pixels is processed per iteration.
  * The iteration
  * NWFE and PCA used for comparison purposes retaining
  * ELM and SVM trained with 5% of the reference data available for each class.
  * Training samples randomly chosen in each run.
  * 10 independent runs for each classifier.
  * SVM classification carried out using the LIB-SVM library and the Gaussian radial basis function (RBF).
  * ELM configured with a sigmoidal activation function.

==== Outputs ====

=== Image files ===
^ Reference data of changes ^ Binary CD map ^ Multiclass CD map ^
|{{:
+ | |||
+ | |||
+ | |||
=== Accuracy results ===

== Binary CD accuracies ==
^ Correct ^ Missed Alarms ^ False Alarms ^ Total Error ^
| 77020 (98.74%) | 509 | 471 | 980 (1.25%) |
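The binary CD accuracies above are pixel tallies against the reference map of changes. A minimal sketch of how they can be computed (the function name and the toy maps are illustrative, not from the page):

```python
import numpy as np

def binary_cd_accuracies(reference, predicted):
    """Tally a binary change-detection map against the reference:
    correct pixels, missed alarms (change labelled as no-change) and
    false alarms (no-change labelled as change)."""
    reference = np.asarray(reference, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    n = reference.size
    correct = int(np.sum(reference == predicted))
    missed = int(np.sum(reference & ~predicted))
    false = int(np.sum(~reference & predicted))
    total_error = missed + false
    return {
        "correct": correct,
        "correct_pct": 100.0 * correct / n,
        "missed_alarms": missed,
        "false_alarms": false,
        "total_error": total_error,
        "total_error_pct": 100.0 * total_error / n,
    }

# Toy 2x3 maps: 1 = change, 0 = no change.
ref = np.array([[1, 0, 0], [1, 1, 0]])
pred = np.array([[1, 0, 1], [0, 1, 0]])
r = binary_cd_accuracies(ref, pred)
print(r["correct"], r["missed_alarms"], r["false_alarms"], r["total_error"])  # 4 1 1 2
```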
+ | |||
== Multiclass CD accuracies ==
|**Classifier** |
| ELM | N=120 | PCA | 91.73 | 76.06 | 86.83 |
| ELM | N=120 | NWFE | 91.76 | 76.75 | 86.83 |
| ELM | N=60 | SAE | 95.19 | 90.45 | 92.31 |
| SVM | C: 64.0 γ: 32.0 | PCA | 91.46 | 71.16 | 86.46 |
| SVM | C: 32.0 γ: 16.0 | NWFE | 91.29 | 90.61 | 86.05 |
| SVM | C: 32.0 γ: 0.0625 |
C: penalty term in the training of the SVM. γ: radius of the Gaussian function of the SVM. N: Number of neurons in the hidden layer of the ELM. FE: Feature Extraction method.
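To make the C/γ footnote concrete, the Gaussian RBF kernel evaluated by LIB-SVM can be sketched in NumPy. The γ = 0.0625 value is taken from the SVM rows above; C is the soft-margin penalty handled inside the SVM solver and is not part of the kernel itself:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq_dists = (np.sum(X ** 2, axis=1)[:, None]
                + np.sum(Y ** 2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

# Two toy feature vectors at distance 1; a small gamma (wide radius)
# keeps distant points similar, a large gamma makes the kernel local.
X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X, gamma=0.0625)
print(K[0, 0])           # 1.0 (a point compared with itself)
print(round(K[0, 1], 4)) # 0.9394
```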
===== License =====
: