Manifold spatial clustering via asymmetric convolutional denoising autoencoder.
- Additional Information
- Abstract:
Deep unsupervised learning extracts meaningful features from unlabeled images and simultaneously serves downstream tasks in computer vision. The basic process of deep clustering methods typically includes feature learning and cluster assignment. To enhance the discriminative ability of the features and further improve clustering performance, a new deep clustering method named ACMEC (asymmetric convolutional denoising autoencoder with manifold spatial embedding clustering) is proposed. In this method, an asymmetric convolutional denoising autoencoder is employed to extract visual features from images, a manifold learning algorithm is used to obtain more distinctive features, and a Gaussian Mixture Model (GMM) is then applied for clustering. The stability of the feature space is guaranteed by a separate training mechanism. In addition, reconstruction from noisy images enhances the robustness of the feature network. Experimental results on nine benchmark datasets demonstrate that the proposed ACMEC method achieves strong performance, such as 0.979 clustering accuracy on the MNIST dataset and 0.668 on the Fashion-MNIST dataset. ACMEC is a comparable competitor to the N2D (not too deep clustering) algorithm, which achieves clustering accuracies of 0.979 and 0.672 respectively. Moreover, ACMEC is 16.1% higher than the DEC algorithm on the Fashion-MNIST dataset. [ABSTRACT FROM AUTHOR]
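The last two stages of the pipeline the abstract describes (manifold embedding of learned features, then GMM clustering) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature matrix stands in for the output of a trained asymmetric convolutional denoising autoencoder, and Isomap is used purely as an example manifold algorithm, since the abstract does not name the specific one used.

```python
# Hedged sketch of the manifold-embedding + GMM-clustering stages.
# Assumption: `features` stands in for latent codes from a trained
# denoising autoencoder (here, synthetic 32-D blobs for illustration).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

# Stand-in for autoencoder features: 300 samples, 32-D latent space.
features, _ = make_blobs(n_samples=300, n_features=32,
                         centers=3, random_state=0)

# Manifold embedding step (Isomap chosen only as an illustrative
# manifold learning algorithm).
embedded = Isomap(n_neighbors=10, n_components=2).fit_transform(features)

# GMM clustering on the embedded features, as the abstract describes.
gmm = GaussianMixture(n_components=3, random_state=0).fit(embedded)
labels = gmm.predict(embedded)
print(labels.shape)
```

Separating the feature-learning and clustering stages in this way mirrors the abstract's "separate training mechanism": the embedding is computed once from fixed features, so clustering cannot destabilize the feature space.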
- Copyright:
Copyright of Journal of Intelligent & Fuzzy Systems is the property of IOS Press and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)