Chaining approach for 3-D object envelope construction

Cited by: 0
Authors
Lichioui, A. [1,2,3,4,5]
Bennouna, A. [1,6,7,8]
Guennoun, Z. [1,9,10,11]
Hlou, L. [1,12]
Roukhe, A. [1,13,14]
Affiliations
[1] LEC, Ecole Mohammadia d'Ingénieurs, B.P 765 Avenue Ibn Sina, Agdal Rabat, Morocco
[2] Département de Physique, Faculté des Sciences, B.P. 133, Kenitra, Morocco
[3] L.E.T.I. Dept. de Physique, Faculté des Sciences, B.P. 4010, Beni M'hamed Meknes, Morocco
[4] IXPT Institute, Rabat, Morocco
[5] Electrical Department, EMI School of Engineering, Rabat, Morocco
[6] University of Cairo, Liege, United Kingdom
[7] University of Astron, United Kingdom
[8] University of Bath, United Kingdom
[9] Electronics Elec. Montefiore Inst., ULG, Liege, Belgium
[10] EMI School, Rabat, Morocco
[11] Centre for Communication Research, Bristol University, United Kingdom
[12] Université Ibn Tofaïl, Kénitra, Morocco
[13] Sidi Mohamed Ben Abdellah University, Fes, Morocco
[14] Department of Physics, Faculty of Sciences, University Moulay Ismail, Meknes, Morocco
Source
International Journal of Modelling and Simulation | 2000, Vol. 20, No. 4
Keywords
Cameras; Computer-aided design
DOI
10.1080/02286203.2000.11442174
Abstract
In this paper, a new method for constructing the three-dimensional envelope of an object is proposed, based on a chaining approach. The object is viewed by three cameras placed along the axes of an orthonormal basis (OXYZ). From these three projections, the respective contours are extracted and then transformed into lists of chains representing the contour pixels of each projection. These chains are restructured so that the envelope of an arbitrarily shaped object can be constructed from them. The reconstructed 3-D object can be displayed on screen using perspective or parallel projection under arbitrary viewpoint angles θ and φ. One-pixel precision is achieved for the recovered objects.
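The paper gives no pseudocode for the chaining step; purely as an illustration of what a chain representation of a projection contour can look like, the sketch below encodes an ordered list of 8-connected contour pixels into Freeman-style direction codes and decodes it back. The function names (pixels_to_chain, chain_to_pixels) and the direction numbering are assumptions made for this sketch, not the authors' notation.

```python
# Minimal sketch: Freeman-style chain coding of an 8-connected contour.
# Direction 0 is +x; the remaining codes proceed counter-clockwise.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def pixels_to_chain(pixels):
    """Turn an ordered list of contour pixels (x, y) into a start pixel
    plus one direction code per step between consecutive pixels."""
    codes = [DIRECTIONS[(x1 - x0, y1 - y0)]
             for (x0, y0), (x1, y1) in zip(pixels, pixels[1:])]
    return pixels[0], codes

def chain_to_pixels(start, codes):
    """Inverse mapping: rebuild the contour pixel list from a chain."""
    inverse = {code: step for step, code in DIRECTIONS.items()}
    x, y = start
    pixels = [(x, y)]
    for code in codes:
        dx, dy = inverse[code]
        x, y = x + dx, y + dy
        pixels.append((x, y))
    return pixels

# Tiny usage example: a closed unit-square contour.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
start, codes = pixels_to_chain(square)   # codes == [0, 2, 4, 6]
assert chain_to_pixels(start, codes) == square
```

Whatever the exact encoding used in the paper, a chain of this kind stores each projection contour as one start pixel plus a short code per step, which keeps the three contour lists compact to restructure for envelope construction while preserving the one-pixel precision quoted in the abstract.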
Pages: 329 - 335