Semi-Supervised Learning of Geospatial Objects Through Multi-Modal Data Integration

Cited by: 1
Authors
Yang, Yi [1 ]
Newsam, Shawn [1 ]
Affiliations
[1] University of California, Merced, Electrical Engineering & Computer Science, Merced, CA 95343, USA
Funding
US National Science Foundation;
Keywords
FEATURES; IMAGERY;
DOI
10.1109/ICPR.2014.696
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We investigate how overhead imagery can be integrated with non-image geographic data to learn appearance models for geographic objects with minimal user supervision. While multi-modal data integration has been successfully applied in other domains, such as multimedia analysis, significant opportunity remains for similar treatment of geographic data: location is a simple yet powerful key for associating disparate data modalities, and data annotated with location information, either explicitly or implicitly, is increasingly available. We present a specific instantiation of the framework in which overhead imagery is combined with gazetteers to compensate for a recognized deficiency: most gazetteers are incomplete in that a single latitude/longitude point serves as both bounding coordinates of an indexed object's spatial extent; that is, the extent is collapsed to a point. We use a hierarchical object appearance model to estimate the spatial extents of these known object instances. The estimated extents can then be used to revise the gazetteers. A particularly novel contribution of our work is a semi-supervised learning regime which incorporates weakly labelled training data, in the form of incomplete gazetteer entries, to improve the learned models and thus the spatial extent estimation.
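The bootstrap idea in the abstract, using a single gazetteer point as a weak label to fit an appearance model and grow an extent estimate, can be sketched minimally as follows. This is an illustrative toy only: the paper's hierarchical appearance model is replaced by a simple mean/variance pixel model, and the function name `estimate_extent` and its parameters are assumptions for this sketch, not from the paper.

```python
import numpy as np

def estimate_extent(image, seed, n_iters=5, k=2.0):
    """Toy semi-supervised extent estimation from one gazetteer point.

    The gazetteer supplies only a single (row, col) seed inside the
    object: the weak label. We bootstrap an appearance model (mean and
    std of the pixels currently believed to belong to the object),
    relabel pixels whose value lies within k standard deviations of the
    mean, and re-fit, iterating a few times. The bounding box of the
    final mask is the estimated spatial extent, which could then be
    written back to revise the gazetteer entry.
    """
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    for _ in range(n_iters):
        mu = image[mask].mean()
        sigma = image[mask].std() + 1e-6  # avoid zero-width model
        mask = np.abs(image - mu) <= k * sigma
        mask[seed] = True  # the weak label is always trusted
    rows, cols = np.nonzero(mask)
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())

# Usage: a synthetic "overhead image" with one bright object and a
# gazetteer point inside it; the recovered box matches the object.
img = np.zeros((20, 20))
img[5:10, 8:14] = 1.0
print(estimate_extent(img, (7, 10)))
```

On real imagery the pixel-value model above would be replaced by learned appearance features, and the weakly labelled gazetteer entries would feed a proper semi-supervised training loop rather than this per-object region growing.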
Pages: 4062-4067
Number of pages: 6