Nanoporous Material Recognition via 3D Convolutional Neural Networks: Prediction of Adsorption Properties

Cited by: 27
Authors
Cho, Eun Hyun [1 ]
Lin, Li-Chiang [1 ]
Institution
[1] Ohio State Univ, William G Lowrie Dept Chem & Biomol Engn, Columbus, OH 43210 USA
Source
JOURNAL OF PHYSICAL CHEMISTRY LETTERS | 2021, Vol. 12, Issue 09
Keywords
METAL-ORGANIC FRAMEWORKS; CARBON-DIOXIDE SEPARATION; POROUS MATERIALS; RECEPTIVE FIELDS; CO2; CHEMISTRY; DISCOVERY; GEOMETRY; STORAGE; MOFS;
DOI
10.1021/acs.jpclett.1c00293
Chinese Library Classification
O64 [Physical Chemistry (Theoretical Chemistry), Chemical Physics];
Subject Classification Code
070304; 081704
Abstract
Nanoporous materials can be effective adsorbents for various energy applications. Because of their abundant number, however, brute-force material discovery can be challenging. Data-driven approaches can be advantageous for such purposes. In this study, we demonstrate for the first time the applicability of a 3D convolutional neural network (CNN) in material recognition for predicting adsorption properties. 2D CNNs have been widely applied to image recognition, where the CNN self-learns important features of images without the need for handcrafted features that are subject to human bias. This study explores methane adsorption in zeolites as a case study, where approximately 6,500 hypothetical zeolites are used to train/validate our designed CNN model. The CNN model offers highly accurate predictions, and the self-learned features resemble the channel- and pore-like geometry of the structures. This study demonstrates the extension of computer vision to materials science and paves the way for future studies such as carbon capture.
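To make the idea of "material recognition" by a 3D CNN concrete, the sketch below (in PyTorch, not part of the original record) shows how a voxelized representation of a nanoporous structure can be mapped to a single adsorption value by stacked 3D convolutions followed by a small regression head. The 32x32x32 grid resolution, channel counts, kernel sizes, and the name Adsorption3DCNN are illustrative assumptions, not the architecture published by Cho and Lin.

import torch
import torch.nn as nn

class Adsorption3DCNN(nn.Module):
    """Minimal 3D CNN regressor: voxel grid in, one adsorption value out."""
    def __init__(self, grid_size: int = 32):
        super().__init__()
        # Stacked 3D convolutions self-learn spatial features (channel- and
        # pore-like motifs) from the voxel grid, analogous to 2D CNN features
        # in image recognition.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),   # 32 -> 16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),  # 16 -> 8
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),  # 8 -> 4
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (grid_size // 8) ** 3, 128), nn.ReLU(),
            nn.Linear(128, 1),  # predicted adsorption property (e.g., CH4 uptake)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, D, H, W) voxelized structure, e.g., an energy or geometry grid
        return self.regressor(self.features(x))

# Usage sketch with a random stand-in for one voxelized zeolite structure.
model = Adsorption3DCNN()
grid = torch.randn(1, 1, 32, 32, 32)
print(model(grid).shape)  # torch.Size([1, 1])

Training such a model would pair voxel grids for the ~6,500 hypothetical zeolites with their simulated methane uptakes and minimize a standard regression loss (e.g., mean squared error); the specific grid construction and training setup used by the authors are described in the paper itself.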
Pages: 2279 - 2285
Number of pages: 7
Related Papers
(50 records in total)
  • [1] ZeoNet: 3D convolutional neural networks for predicting adsorption in nanoporous zeolites
    Liu, Yachan
    Perez, Gustavo
    Cheng, Zezhou
    Sun, Aaron
    Hoover, Samuel C.
    Fan, Wei
    Maji, Subhransu
    Bai, Peng
    JOURNAL OF MATERIALS CHEMISTRY A, 2023, 11 (33) : 17570 - 17580
  • [2] Prediction of Energetic Material Properties from Electronic Structure Using 3D Convolutional Neural Networks
    Casey, Alex D.
    Son, Steven F.
    Bilionis, Ilias
    Barnes, Brian C.
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2020, 60 (10) : 4457 - 4473
  • [3] 3D Convolutional Neural Networks for Human Action Recognition
    Ji, Shuiwang
    Xu, Wei
    Yang, Ming
    Yu, Kai
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2013, 35 (01) : 221 - 231
  • [4] Hand Gesture Recognition with 3D Convolutional Neural Networks
    Molchanov, Pavlo
    Gupta, Shalini
    Kim, Kihwan
    Kautz, Jan
2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2015
  • [5] Asymmetric 3D Convolutional Neural Networks for action recognition
    Yang, Hao
    Yuan, Chunfeng
    Li, Bing
    Du, Yang
    Xing, Junliang
    Hu, Weiming
    Maybank, Stephen J.
    PATTERN RECOGNITION, 2019, 85 : 1 - 12
  • [6] 3D Local Convolutional Neural Networks for Gait Recognition
    Huang, Zhen
    Xue, Dixiu
    Shen, Xu
    Tian, Xinmei
    Li, Houqiang
    Huang, Jianqiang
    Hua, Xian-Sheng
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 14900 - 14909
  • [7] 3D Convolutional Neural Networks for Sperm Motility Prediction
    Goh, Voon Hueh
    Bin As'ari, Muhammad Amir
    Bin Ismail, Lukman Hakim
2022 2ND INTERNATIONAL CONFERENCE ON INTELLIGENT CYBERNETICS TECHNOLOGY & APPLICATIONS (ICICYTA), 2022: 174 - 179
  • [8] Background Subtraction via 3D Convolutional Neural Networks
    Gao, Yongqiang
    Cai, Huayue
    Zhang, Xiang
    Lan, Long
    Luo, Zhigang
2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018: 1271 - 1276
  • [9] 3D Human Activity Recognition with Reconfigurable Convolutional Neural Networks
    Wang, Keze
    Wang, Xiaolong
    Lin, Liang
    Wang, Meng
    Zuo, Wangmeng
PROCEEDINGS OF THE 2014 ACM CONFERENCE ON MULTIMEDIA (MM'14), 2014: 97 - 106
  • [10] 3D Convolutional Neural Networks for Dynamic Sign Language Recognition
    Liang, Zhi-Jie
    Liao, Sheng-Bin
    Hu, Bing-Zhang
COMPUTER JOURNAL, 2018, 61 (11): 1724 - 1736