
Deep Manifold Embedding for Hyperspectral Image Classification


IEEE LATEX, VOL X, 2019

Deep Manifold Embedding for Hyperspectral Image Classification

Zhiqiang Gong, Weidong Hu, Xiaoyong Du, Ping Zhong, Senior Member, IEEE, and Panhe Hu

arXiv:1912.11264v1 [cs.CV] 24 Dec 2019

Abstract—Deep learning methods have played an increasingly important role in hyperspectral image classification. However, general deep learning methods mainly take advantage of the information of each sample itself, or of the pairwise information between samples, while ignoring the intrinsic structure of the data as a whole. To tackle this problem, this work develops a novel deep manifold embedding method (DMEM) for hyperspectral image classification. First, each class in the image is modelled as a specific nonlinear manifold, and the geodesic distance is used to measure the correlation between samples. Then, based on hierarchical clustering, the manifold structure of the data is captured and each nonlinear data manifold is divided into several sub-classes. Finally, considering the distribution of each sub-class and the correlation between different sub-classes, DMEM is constructed so that the learned low-dimensional features of different samples preserve the geodesic distances estimated on the data manifold. Experiments over three real-world hyperspectral image datasets have demonstrated the effectiveness of the proposed method.

Index Terms—Manifold Embedding, Deep Learning, Convolutional Neural Networks (CNNs), Hyperspectral Image, Image Classification.

I. INTRODUCTION

Recently, hyperspectral images, which contain hundreds of spectral bands to characterize different materials, have made it possible to discriminate different objects using their plentiful spectral information, and have proven important in the remote sensing and computer vision literature [8], [51], [52].
As an important hyperspectral data task, hyperspectral image classification aims to assign a unique land-cover label to each pixel and is a key technique in many real-world applications, such as urban planning [16], military applications [9], and others. However, hyperspectral image classification is still a challenging task. There usually exists high nonlinearity among the samples within each class; therefore, how to effectively model and represent the samples of each class tends to be a difficult problem. Besides, the great overlap between the spectral channels of different classes in a hyperspectral image multiplies the difficulty of obtaining discriminative features from the samples.

Manuscript received XX, 2019; revised XX, 2019. This work was supported in part by the Natural Science Foundation of China under Grant *** and ***, in part by the Foundation for the Author of National Excellent Doctoral Dissertation of China (FANEDD) under Grant 201243, and in part by the Program for New Century Excellent Talents in University under Grant NECT-13-0164. (Corresponding author: Ping Zhong.) Z. Gong, W. Hu, X. Du, P. Zhong, and P. Hu are with the National Key Laboratory of Science and Technology on ATR, College of Electrical Science and Technology, National University of Defense Technology, Changsha, China, 410073. E-mail: gongzhiqiang13@nudt.edu.cn, wdhu@nudt.edu.cn, xydu@nudt.edu.cn, zhongping@nudt.edu.cn, hupanhe13@nudt.edu.cn.

Deep models have demonstrated their potential to model the nonlinearity of samples [23], [47], [14]. They can learn adaptively from the information in the training samples and extract the differences between classes. Due to this good performance, this work takes advantage of deep models to extract features from the hyperspectral image.
However, large amounts of training samples are required to guarantee good performance of a deep model, while only a limited number of training samples is usually available in many computer vision tasks, especially in hyperspectral image classification. Therefore, how to construct the training loss and fully utilize the data information given a limited number of training samples becomes the essential problem for effective deep learning.

The softmax loss, namely the softmax cross-entropy loss, is widely applied in prior works. It is formulated as the cross entropy between the posterior probability and the class label of each sample [38], and mainly takes advantage of the point-to-point information of each sample itself. Several variants that try to utilize the distance information between each sample pair or among each triplet have been proposed. These losses, such as the contrastive loss [10] and the triplet loss [32], have made great strides in improving the representational ability of the CNN model. However, these prior losses, which we call sample-wise methods, mainly utilize the data information of the sample itself or between samples and ignore the intrinsic data structure. In other words, these sample-wise methods only consider commonly available, simple information and ignore the special intrinsic data structure of the hyperspectral image for the task at hand.

Establishing a good model for the hyperspectral image is the premise of making use of the intrinsic data structure in deep learning. Generally, the ways to model a hyperspectral image can be broadly divided into two classes: parametric models and non-parametric models. Typical parametric models for hyperspectral images are usually constructed from probabilistic models, such as the multivariate Gaussian distribution. This class of model has been successfully applied in hyperspectral target detection [57] and anomaly detection [49].
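To make the contrast between the sample-wise losses concrete, the following is a minimal NumPy sketch (illustrative only, not the paper's implementation): the softmax cross-entropy uses a single sample and its label, while the triplet loss uses distances within a triplet of samples.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Point-wise loss: depends only on one sample's logits and its label."""
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax
    return -log_probs[label]

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Sample-wise loss: pulls anchor toward positive, pushes it from negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

# toy usage
logits = np.array([2.0, 0.5, -1.0])
print(softmax_cross_entropy(logits, 0))   # ≈ 0.241: class 0 has the largest logit
a, p, n = np.zeros(3), np.ones(3) * 0.1, np.ones(3) * 2.0
print(triplet_loss(a, p, n))              # 0.0: negative is already margin-far
```

Both losses, as the text notes, see at most a triplet of samples at a time; neither term involves the geometry of the whole class, which is the gap DMEM targets.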
Generally, parameter estimation from the training data is essential under these parametric models [43]. The other class of models makes use of the information provided by the training data directly, without modelling the class data [18]. These non-parametric models are usually based on mutual information and are suitable for general cases, since they do not assume anything about the shape of the class data density functions. In this work, the manifold model, which plays an important role among non-parametric models and can better fit the high dimensionality of the hyperspectral image, will be applied to model the image for the current task.

Manifold learning has been widely applied in many computer vision tasks, such as face recognition [43], [44] and image classification [28], as well as in the hyperspectral image literature [42], [29]. Generally, a data manifold follows the law of manifold distribution: in real-world applications, high-dimensional data of the same class usually lies close to a low-dimensional manifold [21]. Therefore, hyperspectral images, which provide a dense spectral sampling at each pixel, possess good intrinsic manifold structure. This work aims to develop a novel deep manifold embedding method (DMEM) for hyperspectral image classification that makes use of the data manifold structure and preserves the intrinsic data structure in the obtained low-dimensional features.

In addition to the law of manifold distribution, a data manifold usually follows another property, namely the law of cluster distribution: the different sub-classes of a certain class in the high-dimensional data correspond to different probability distributions on the manifold [22]. Furthermore, these probability distributions are far enough apart to distinguish the sub-classes. Therefore, using the geodesic distances between samples, we divide each class in the hyperspectral image into several sub-classes.
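The two steps just described, estimating geodesic distances on a class manifold and then splitting the class into sub-classes by hierarchical clustering, can be sketched with SciPy. This is a generic Isomap-style construction under the assumption of a Euclidean k-nearest-neighbour graph; the paper's exact graph construction and linkage choice may differ.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist, squareform

def estimate_geodesic_distances(X, k=5):
    """Isomap-style geodesic estimate: build a Euclidean k-NN graph,
    then take shortest paths through the graph."""
    d = cdist(X, X)                        # pairwise Euclidean distances
    n = len(X)
    graph = np.full((n, n), np.inf)        # inf = no edge (dense convention)
    for i in range(n):
        nn = np.argsort(d[i])[1:k + 1]     # k nearest neighbours, skip self
        graph[i, nn] = d[i, nn]
    graph = np.minimum(graph, graph.T)     # symmetrise the graph
    return shortest_path(graph, method='D', directed=False)

def split_into_subclasses(X, n_sub=2, k=5):
    """Divide one class manifold into sub-classes by hierarchical
    clustering on the estimated geodesic distances."""
    geo = estimate_geodesic_distances(X, k)
    Z = linkage(squareform(geo, checks=False), method='average')
    return fcluster(Z, t=n_sub, criterion='maxclust')

# toy usage: 20 points sampled along a 1-D curve in feature space
X = np.arange(20, dtype=float).reshape(-1, 1)
geo = estimate_geodesic_distances(X, k=3)
print(geo[0, 19])                          # 19.0: path length along the chain
print(len(set(split_into_subclasses(X, n_sub=2, k=3))))   # 2
```

The function names and the choice of average linkage are illustrative assumptions; the point is that the distance fed to the clustering is the graph shortest-path (geodesic) estimate rather than the raw Euclidean distance.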
Then, we develop DMEM according to the following two principles. 1) Based on multi-statistical analysis, the deep manifold embedding is constructed to encourage the features of each sub-class to follow a certain distribution, and thus to preserve the intrinsic structure in the low-dimensional feature space. 2) Motivated by the idea of maximizing the "manifold margin" by manifold discriminant analysis [...]

[Preview truncated here; the middle of the paper is not included. The tail of the reference list follows.]

[...] autoencoder for hyperspectral image classification. IEEE TGRS, 2019.
[55] Y. Zhou and Y. Wei. Learning hierarchical spectral-spatial features for hyperspectral image classification. IEEE CYB, 46(7):1667–1678, 2016.
[56] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen. Image reconstruction by domain-transform manifold learning. Nature, 555(7697):487–487, 2018.
[57] Z. Zou and Z. Shi. Hierarchical suppression method for hyperspectral target detection. IEEE TGRS, 54(1):330–342, 2015.
