Creating Tangible AR (Augmented Reality) Objects using Depth Camera

Citations: 0
Authors
Zhang, Kevin [1 ]
Ye, Mike Tianci [2 ]
Zhang, Chris Cheng [3 ]
Ni, Rongdi [4 ]
Liu, Yitong [5 ]
Xing, Anqi [6 ]
Affiliations
[1] Univ British Columbia, Fac Appl Sci, Vancouver, BC, Canada
[2] St Georges Sch, Vancouver, BC, Canada
[3] Canada Youth Robot Club, Dept Res & Dev, Vancouver, BC, Canada
[4] Jilin Int Studies Univ, Dept Human Resource Management, Changchun, Jilin, Peoples R China
[5] Jiangsu Univ Sci & Technol, Dept Engn & Management, Zhangjiagang, Jiangsu, Peoples R China
[6] Penn State Univ, Coll Liberal Arts, State Coll, PA USA
Source
2023 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE, CSCI 2023 | 2023
Keywords
Augmented Reality (AR); computer vision; depth camera;
DOI
10.1109/CSCI62032.2023.00209
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The transition from in-person to online classes, accelerated by the global impact of COVID-19, has brought both accessibility and disengagement challenges. While online platforms facilitate learning for distant and international students, the loss of interactive elements diminishes the overall educational experience. This article proposes a novel solution inspired by MIT Professor Dr. Patrick Winston's concept of using "props" to enhance learning. Leveraging augmented reality (AR) technology, the authors develop an application that introduces tangible AR elements into the online learning environment. The design and methodology outline the use of Python libraries, including OpenCV and Mediapipe, along with the Intel RealSense D435 depth camera. By employing hand-tracking techniques, real-world coordinates are deduced, allowing the creation of interactive AR objects. Trigonometry is utilized to convert 3D coordinates into 2D projections on the video screen, ensuring accurate representation. The visual perception of depth is achieved by subdividing lines, allowing for the dynamic interaction of virtual objects and real hands. The results and analysis section showcases the functionality of the developed application. A 3D cube or prism appears on-screen, responding to touch and rotation gestures. The collision detection algorithm, assuming a spherical bounding box, determines whether the cube is touched, altering its color and position accordingly. Limitations, such as the imprecise collision area for elongated shapes and potential aliasing issues, are discussed as sources of error. Looking forward, the discussion section explores future enhancements and applications. Incorporating advanced modeling tools like OpenGL or Wavefront could introduce more complex 3D models. Interactive features such as hand gestures for rotation or grabbing could further enrich the online learning experience.
This project serves as a foundation for the development of interactive and engaging online learning methods, bridging the gap between physical and virtual educational environments.
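The abstract states that trigonometry converts 3D coordinates into 2D projections on the video screen but does not give the formula. A minimal sketch under the standard pinhole-camera assumption (the function name, field-of-view value, and resolution are illustrative choices, not taken from the paper):

```python
import math

def project_point(x, y, z, fov_deg=60.0, width=640, height=480):
    """Project a 3D camera-space point onto a 2D pixel grid using a
    pinhole model; fov_deg is the assumed horizontal field of view."""
    # Focal length in pixels, derived from the horizontal field of view.
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)
    # Perspective divide: points nearer the camera (smaller z) spread
    # farther from the image center.
    u = int(width / 2 + f * x / z)
    v = int(height / 2 - f * y / z)  # image y grows downward
    return u, v
```

A point on the optical axis, for example, lands at the image center: `project_point(0, 0, 1.0)` gives `(320, 240)` at the assumed 640x480 resolution.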
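The depth illusion is described as coming from subdividing lines so that virtual edges can interleave with the real hand. One way to read that, sketched here as an assumption about the method: split each 3D edge into short sub-segments, then depth-test each piece separately against the depth-camera reading at its pixel before drawing.

```python
def subdivide(p0, p1, n):
    """Split the 3D edge p0 -> p1 into n equal sub-segments so each
    piece can be occlusion-tested independently against the depth map."""
    pts = []
    for i in range(n + 1):
        t = i / n
        # Linear interpolation between the two endpoints.
        pts.append(tuple(a + t * (b - a) for a, b in zip(p0, p1)))
    # Pair consecutive points into (start, end) sub-segments.
    return list(zip(pts[:-1], pts[1:]))
```

Each returned sub-segment would then be drawn only where its depth is smaller than the measured hand depth, letting a real hand appear to pass in front of a virtual edge.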
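The collision test the abstract describes assumes a spherical bounding box around the cube, which also explains the stated limitation for elongated shapes: a single sphere over-covers a prism. A minimal sketch of that predicate (radius value and names are illustrative):

```python
import math

def touches(fingertip, center, radius):
    """Spherical-bounding-box collision test: the cube counts as touched
    when the tracked fingertip is within `radius` of the cube's center."""
    return math.dist(fingertip, center) <= radius
```

On a positive test the application changes the cube's color and position; for a long prism this sphere either bulges past the short sides or misses the ends, which is the imprecision the paper lists as a source of error.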
Pages: 1280-1283
Page count: 4
References
7 items
[1]  
[Anonymous], Video Game Math: Collision Detection
[2]  
Fredericksen, Eric, 2015, The Conversation, February 4
[3]  
Winston, Patrick, 2019, How to Speak
[4]  
rs-ar-basic, Intel RealSense™ Developer Documentation
[5]  
Smart Design Technology, 2022, Medium, 10 Mar.
[6]  
Zhang C., 24 INT C INT COMP IO, P116
[7]  
Zhang K., 21 INT C EMB SYST CY, P36