Thesis Title (Chinese): 使用深度感測器之靜態手勢比對用於門禁系統
Thesis Title (English): Using Depth Sensor for Static Hand Gesture Recognition in Access Control System
University: National Taipei University of Technology
College: College of Electrical Engineering and Computer Science
Department: Graduate Institute of Computer Science and Information Engineering
Graduation Academic Year: 103 (2014–2015 academic year)
Graduation Semester: Second semester
Author (Chinese): 林鈺皓
Author (English): Yu-Hao Lin
Student ID: 102598008
Degree: Master's
Language: Chinese
Oral Defense Date: 2015/07/10
Advisor (Chinese): 張厥煒
Advisor (English): Chueh-Wei Chang
Committee Members (Chinese): 楊士萱; 奚正寧
Keywords (Chinese): 深度感測器、手勢辨識、點雲函式庫、人機互動、門禁系統
Keywords (English): Depth Sensor, Hand Gesture Recognition, Point Cloud Library (PCL), Human-Computer Interaction, Access Control System
Abstract (Chinese): In recent years, convenient and more intuitive human-computer interaction has become an important development trend, and contactless operation has gradually gained recognition and acceptance. In home security, the lock is an indispensable element: an ideal lock must resist tampering and have a unique way of being unlocked, yet without sacrificing convenience at unlock time.
This thesis seeks to create an unlocking scheme that is more convenient, more rigorous, and suitable for users of all ages. Using the Kinect for Windows v2 depth sensor released by Microsoft in 2014 together with SDK v2.0, the system quickly acquires the human skeleton and tracks the image data of the hand region; a hand mask is overlaid on the point cloud image to obtain the complete hand point cloud (a point cloud extraction sketch follows this abstract). The Correspondence Grouping algorithm, a 3D object recognition method provided by the Point Cloud Library (PCL), is then used to extract and match features of the gesture point clouds. Arranging the gesture point clouds in order yields the corresponding password sequence; when the user performs the previously registered gesture key in that order, the system unlocks.
This thesis combines the concepts of "gesture" and "lock" to form a new interactive unlocking mode. Because each person's gesture characteristics and habits differ, the resulting individual uniqueness can serve as the basis for unlocking; and since a gesture key requires no extra physical device to be carried, it also improves the convenience of unlocking.
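As a rough illustration of the hand point cloud step described above, below is a minimal C++ sketch (not the thesis's actual code) that back-projects the masked hand pixels of a Kinect v2 depth frame into a PCL point cloud using a pinhole camera model. The function name, buffer layout, and intrinsic values are assumptions for illustration only; the thesis itself derives the hand mask from the tracked skeleton and relies on the Kinect SDK's coordinate mapping rather than hand-written intrinsics.

```cpp
// Minimal sketch, not the thesis's code: back-project the masked hand pixels of a
// Kinect v2 depth frame (512x424, depth in millimetres) into a PCL point cloud
// with a pinhole camera model. The intrinsics below are assumed approximations.
#include <cstdint>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
extractHandCloud(const std::vector<uint16_t>& depthMM,  // depth frame, row-major
                 const std::vector<uint8_t>& handMask,  // 1 where the pixel belongs to the hand
                 int width = 512, int height = 424)
{
    // Assumed (uncalibrated) Kinect v2 depth-camera intrinsics.
    const float fx = 365.5f, fy = 365.5f, cx = 257.0f, cy = 210.0f;

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const std::size_t i = static_cast<std::size_t>(v) * width + u;
            if (handMask[i] == 0 || depthMM[i] == 0)   // keep only valid hand pixels
                continue;
            const float z = depthMM[i] * 0.001f;       // millimetres -> metres
            pcl::PointXYZ p;
            p.x = (u - cx) * z / fx;                   // pinhole back-projection
            p.y = (v - cy) * z / fy;
            p.z = z;
            cloud->push_back(p);
        }
    }
    cloud->width  = static_cast<std::uint32_t>(cloud->size());
    cloud->height = 1;                                 // unorganized cloud
    cloud->is_dense = true;
    return cloud;
}
```

The resulting unorganized cloud is the kind of input that the later feature extraction and matching stages would operate on.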
Abstract (English): In recent years, convenient and more intuitive human-computer interaction has become an important trend, and devices operated in a touch-free manner have gradually been recognized and accepted. An access control system for home security is crucial in daily life; an ideal one must provide a high level of security, a unique way of unlocking, and ease of use.
Based on these concepts, this thesis aims to create a more convenient and more rigorous unlocking method for users of all ages. The system uses the Kinect for Windows v2 depth sensor, introduced by Microsoft in 2014, together with SDK v2.0 to quickly extract the skeletal joints of the human body, collect the hand coordinates and image data of the hand region, and map a hand mask onto the point cloud image to extract the complete hand point cloud. The hand features are then extracted and matched with the Correspondence Grouping algorithm, a 3D object recognition method provided by the Point Cloud Library (PCL); a sketch of this matching step follows the abstract. Finally, the gestures form a password sequence, and users unlock the system by performing the hand gestures according to that sequence.
The thesis proposes a new interactive unlocking mode that combines "gesture" and "lock". Because personal habits and lifestyles differ, everyone's hand gestures are unique, and these personal gestures can therefore be used for unlocking. Since gesture-based access control requires no physical key, it also enhances the convenience of unlocking.
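As mentioned in the abstract, the gesture comparison relies on PCL's Correspondence Grouping approach. Below is a minimal C++ sketch modeled on PCL's public correspondence-grouping pipeline (SHOT descriptors on uniformly sampled keypoints, nearest-neighbour descriptor matching, then geometric-consistency clustering), not on the thesis's own code; all radii, thresholds, and function names are placeholder assumptions, and the thesis's exact descriptor and clustering choices may differ.

```cpp
// Minimal sketch, not the thesis's code: compare a stored gesture key ("model")
// with the live hand cloud ("scene") via PCL correspondence grouping.
// All radii and thresholds are placeholder assumptions (PCL >= 1.8 API).
#include <algorithm>
#include <cmath>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/correspondence.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/kdtree/impl/kdtree_flann.hpp>  // instantiate KdTreeFLANN for SHOT352
#include <pcl/recognition/cg/geometric_consistency.h>

using PointT = pcl::PointXYZ;
using DescriptorT = pcl::SHOT352;

// Uniformly sample keypoints and compute SHOT descriptors on them.
static void describe(const pcl::PointCloud<PointT>::Ptr& cloud,
                     pcl::PointCloud<PointT>::Ptr& keypoints,
                     pcl::PointCloud<DescriptorT>::Ptr& descriptors)
{
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimationOMP<PointT, pcl::Normal> ne;
    ne.setKSearch(10);
    ne.setInputCloud(cloud);
    ne.compute(*normals);

    pcl::UniformSampling<PointT> us;       // downsample to keypoints
    us.setInputCloud(cloud);
    us.setRadiusSearch(0.005);             // 5 mm grid (placeholder)
    us.filter(*keypoints);

    pcl::SHOTEstimationOMP<PointT, pcl::Normal, DescriptorT> shot;
    shot.setRadiusSearch(0.02);            // 2 cm support radius (placeholder)
    shot.setInputCloud(keypoints);
    shot.setInputNormals(normals);
    shot.setSearchSurface(cloud);
    shot.compute(*descriptors);
}

// Returns the size of the largest geometrically consistent correspondence
// cluster, usable as a similarity score between the two gesture clouds.
std::size_t matchGesture(const pcl::PointCloud<PointT>::Ptr& model,
                         const pcl::PointCloud<PointT>::Ptr& scene)
{
    pcl::PointCloud<PointT>::Ptr mKey(new pcl::PointCloud<PointT>);
    pcl::PointCloud<PointT>::Ptr sKey(new pcl::PointCloud<PointT>);
    pcl::PointCloud<DescriptorT>::Ptr mDesc(new pcl::PointCloud<DescriptorT>);
    pcl::PointCloud<DescriptorT>::Ptr sDesc(new pcl::PointCloud<DescriptorT>);
    describe(model, mKey, mDesc);
    describe(scene, sKey, sDesc);

    // Match each scene descriptor to its nearest model descriptor.
    pcl::CorrespondencesPtr corrs(new pcl::Correspondences);
    pcl::KdTreeFLANN<DescriptorT> tree;
    tree.setInputCloud(mDesc);
    for (std::size_t i = 0; i < sDesc->size(); ++i) {
        std::vector<int> idx(1);
        std::vector<float> sqrDist(1);
        if (!std::isfinite(sDesc->at(i).descriptor[0]))
            continue;                      // skip invalid descriptors
        if (tree.nearestKSearch(sDesc->at(i), 1, idx, sqrDist) == 1 && sqrDist[0] < 0.25f)
            corrs->push_back(pcl::Correspondence(idx[0], static_cast<int>(i), sqrDist[0]));
    }

    // Keep only correspondences that agree on one rigid transformation.
    std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> poses;
    std::vector<pcl::Correspondences> clusters;
    pcl::GeometricConsistencyGrouping<PointT, PointT> gc;
    gc.setGCSize(0.01);                    // consensus bin size (placeholder)
    gc.setGCThreshold(5);                  // minimum cluster size (placeholder)
    gc.setInputCloud(mKey);
    gc.setSceneCloud(sKey);
    gc.setModelSceneCorrespondences(corrs);
    gc.recognize(poses, clusters);

    std::size_t best = 0;
    for (const auto& c : clusters)
        best = std::max(best, c.size());
    return best;
}
```

The size of the largest consistent cluster can then be thresholded to decide whether the presented gesture matches the stored gesture key at the current position of the password sequence.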
Table of Contents: Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iv
Table of Contents v
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 2
1.3 Scope and Limitations 3
1.4 Thesis Organization 4
Chapter 2 Related Work and Literature Review 5
2.1 Depth Image Acquisition 5
2.2 Literature on Hand Tracking 7
2.2.1 Traditional Hand Tracking 7
2.2.2 Hand Tracking with Depth Cameras 8
2.3 Literature on Point Clouds 9
2.3.1 Introduction to Point Cloud Models 9
2.3.2 Introduction to the Point Cloud Library 10
2.3.3 The ICP (Iterative Closest Point) Algorithm 10
2.3.4 Point Clouds in Hand Gesture Recognition 11
Chapter 3 System Architecture and Workflow 13
3.1 Design Concept of the Gesture Lock 13
3.1.1 Inserting Decoy Gestures 14
3.1.2 Properties of the Gesture Lock 15
3.1.3 Types of Gesture Locks 17
3.2 Gesture Lock System Architecture 18
3.3 Gesture Lock System Workflow 19
3.3.1 Overview of the Gesture Key Creation Process 20
3.3.2 Overview of the Unlocking Process 20
Chapter 4 Creating a Personalized Gesture Key 22
4.1 Hand Detection and Image Acquisition 22
4.1.1 Kinect Depth Image Stream 23
4.1.2 Overlaying Depth Images and Skeleton Data 23
4.1.3 Obtaining Palm Center Coordinates 24
4.1.4 Dynamically Adjusting the Region of Interest 24
4.1.5 Hand Region Image Extraction 27
4.1.6 Hand Image Noise Filtering 28
4.1.7 Obtaining the Gesture Point Cloud 29
4.2 Gesture Point Cloud Feature Extraction 30
4.3 Password Sequence Arrangement 32
Chapter 5 Unlocking Mechanism 34
5.1 Hand Detection and Image Acquisition 34
5.2 Depth Position Matching in Space 34
5.3 Gesture Point Cloud Feature Matching 36
5.4 Password Sequence Matching 40
Chapter 6 System Operation and User Interface 42
6.1 Gesture Lock Main Screen 42
6.2 Lock Creation Screen (Learning Module) 42
6.3 Locking Screen 45
6.4 Unlocking Screen (Recognition Module) 46
Chapter 7 Experimental Results 50
7.1 Experimental Methods and Environment 50
7.1.1 Experiment 1: Unlocking After Frontal Observation and Imitation 52
7.1.2 Experiment 2: Similarity Evaluation Between Gesture Point Clouds 53
7.2 Experimental Results and Discussion 54
7.2.1 Results of Experiment 1: Unlocking After Frontal Observation and Imitation 54
7.2.2 Results of Experiment 2: Similarity Evaluation Between Gesture Point Clouds 56
Chapter 8 Conclusions and Future Work 57
8.1 Conclusions 57
8.2 Future Work 58
References 59
Full-Text Access Rights: Authorized for public release from 2017-07-30