A Deep Learning Signal-Based Approach to Fast Harmonic Imaging

Cited by: 1
Authors
Fouad, Mariam [1 ,3 ]
Abd El Ghany, Mohamed A. [3 ,4 ]
Huebner, Michael [2 ]
Schmitz, Georg [1 ]
Affiliations
[1] Ruhr Univ Bochum, Bochum, Germany
[2] Brandenburg Tech Univ Cottbus, Senftenberg, Germany
[3] German Univ Cairo, Cairo, Egypt
[4] Tech Univ Darmstadt, Integrated Elect Syst Lab, Darmstadt, Germany
Source
INTERNATIONAL ULTRASONICS SYMPOSIUM (IEEE IUS 2021) | 2021
Keywords
Deep learning; Convolutional Autoencoders; Harmonic Imaging
DOI
10.1109/IUS52206.2021.9593348
Chinese Library Classification
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Its high resulting image contrast and quality have made tissue harmonic imaging a valuable tool in ultrasound imaging. Amplitude Modulation (AM) is one of the most commonly used nonlinear pulsing schemes in tissue harmonic imaging. However, its need for at least two consecutive firings remains a hindrance to faster imaging. In this work, deep learning concepts are exploited to introduce an alternative approach to ultrasound tissue harmonic imaging that uses a single firing. This is achieved by implementing an asymmetric convolutional autoencoder network that estimates the low-amplitude harmonic component content from a received echo signal. The network is trained with the full-amplitude IQ echo, which contains the full harmonic content, as its input and the corresponding low-amplitude harmonic IQ echo as its output. The proposed approach yielded high-contrast harmonic images with a contrast-to-noise ratio and contrast ratio comparable to those of the conventional checkerboard-aperture amplitude modulation technique, yet at approximately three times the frame rate. Moreover, less clutter is observed in the images reconstructed by the proposed approach than in the ground-truth images. These results open the door to harmonic imaging with quality comparable to conventional AM techniques, but with an increased frame rate and reduced motion artifacts.
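As a rough illustration of the signal-to-signal mapping described in the abstract, the following is a minimal sketch of an asymmetric 1-D convolutional autoencoder that maps a full-amplitude IQ echo line to an estimate of its low-amplitude harmonic counterpart. The framework (PyTorch), layer counts, channel widths, kernel sizes, and the 1024-sample line length are illustrative assumptions, not the architecture reported in the paper.

# Hypothetical sketch: asymmetric 1-D convolutional autoencoder for estimating
# a low-amplitude harmonic IQ echo from a full-amplitude IQ echo.
# All hyperparameters below are placeholders, not values from the paper.
import torch
import torch.nn as nn

class HarmonicEstimator(nn.Module):
    def __init__(self, in_channels: int = 2):  # 2 channels: I and Q
        super().__init__()
        # Encoder: deeper than the decoder, hence "asymmetric" (assumed split).
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        # Decoder: fewer layers, upsampling back to the input length.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 16, kernel_size=9, stride=4,
                               padding=4, output_padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(16, in_channels, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = HarmonicEstimator()
    # One batch of 8 IQ echo lines, 1024 samples each (shapes are placeholders).
    full_amp_iq = torch.randn(8, 2, 1024)       # network input
    low_amp_iq_target = torch.randn(8, 2, 1024) # training target
    prediction = model(full_amp_iq)
    loss = nn.functional.mse_loss(prediction, low_amp_iq_target)
    loss.backward()
    print(prediction.shape, loss.item())

In such a setup, training pairs would come from the two AM firings (full-amplitude input, low-amplitude target); at inference, only the single full-amplitude firing is needed, which is what allows the higher frame rate.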
Pages: 4