Thesis title (Chinese): 應用深度攝影機之手勢比對門禁系統
Thesis title (English): A Hand Gesture Recognition System for Access Control Using Depth Sensor
University: National Taipei University of Technology
College: College of Electrical Engineering and Computer Science
Department: Graduate Institute of Computer Science and Information Engineering
Academic year of graduation: 101 (ROC calendar; 2012-2013)
Year of publication: 102 (ROC calendar; 2013)
Author (Chinese name): 邱莉雅
Author (English name): LI-YA CHIU
Student ID: 100598019
Degree: Master's
Language: Chinese
Oral defense date: 2013-06-25
Number of pages: 71
Advisor (Chinese name): 張厥煒
Committee members (Chinese names): 楊士萱; 奚正寧
Keywords (Chinese): 深度攝影機 (depth camera); 手勢辨識 (hand gesture recognition); 門禁系統 (access control system)
Keywords (English): depth sensor; hand gesture recognition; access control system
Abstract (Chinese, translated): The hand is the most dexterous part of the human body. Because of differences in personal habits and backgrounds, everyone has a distinctive set of hand gestures and ways of moving, and these gestures, varying across time and space, form a unique signature. Building on this idea, this thesis uses a depth camera and the SDK provided by PrimeSense NITE to quickly obtain hand coordinates and hand-region image data. Through extensive training on hand-shape features and the recording of hand-coordinate motion trajectories, gesture action units built from several consecutive hand movements, such as making a fist or waving, can be established. These action units are then arranged in a chosen order to form a password sequence, and a user can unlock the system simply by performing the gestures according to that sequence.
For security, the gesture-matching access control system in this thesis applies a comparatively strict matching procedure. The system is divided into three parts: hand-shape class training, gesture key creation, and unlocking. Hand-shape class training uses HOG feature extraction and an SVM classifier to train on hand-shape class data. Gesture key creation records the motion trajectory of the hand coordinates and the sequence of hand-shape changes, and arranges the password sequence. Unlocking identifies which action unit in the database a user's movement corresponds to: the motion-trajectory sequence within an action unit is matched with the DTW algorithm, and the hand-shape sequence is scored with the NW algorithm. Finally, the number and order of the matched action units determine whether the unlocking condition is met.
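The training stage above is described only at the component level (hand-region capture via the NITE SDK, HOG features, an SVM classifier). As a purely illustrative sketch, and not the thesis's implementation, the Python/OpenCV snippet below shows how such a hand-shape classifier could be trained; it assumes labelled hand-region crops (e.g. 0 = fist, 1 = open palm) have already been extracted from the depth stream and normalized to 64x64 pixels, and the HOG layout, RBF kernel, and hyperparameter values are placeholder assumptions.

```python
# Minimal sketch (not the thesis code): a HOG + SVM hand-shape classifier.
import cv2
import numpy as np

# HOG layout: (winSize, blockSize, blockStride, cellSize, nbins); values are illustrative.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_feature(gray_64x64):
    """Compute a HOG descriptor for one normalized 64x64 grayscale hand image."""
    return hog.compute(gray_64x64).flatten()

def train_hand_shape_svm(images, labels):
    """Train a multi-class SVM on HOG features of labelled hand-region images."""
    feats = np.array([hog_feature(img) for img in images], dtype=np.float32)
    responses = np.array(labels, dtype=np.int32)
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)  # kernel choice is an assumption, not from the thesis
    svm.setC(2.5)                  # placeholder hyperparameters
    svm.setGamma(0.5)
    svm.train(feats, cv2.ml.ROW_SAMPLE, responses)
    return svm

def classify_hand_shape(svm, gray_64x64):
    """Return the predicted hand-shape class id for one normalized hand image."""
    feat = hog_feature(gray_64x64).reshape(1, -1).astype(np.float32)
    _, result = svm.predict(feat)
    return int(result[0, 0])
```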
Abstract (English): Because of differences in personal habits and lifestyles, everyone has unique hand gestures and movements, and these gestures form a unique signature across time and space. Based on this concept, this thesis uses a depth camera and the SDK provided by PrimeSense NITE to quickly collect hand coordinates and hand-region image data. Through extensive hand-shape feature training and the recording of hand-coordinate trajectories, gesture action units composed of multiple continuous movements, such as making a fist or waving, are created; these units are then arranged in order to form a password sequence, and a user can unlock the system by performing the gestures according to that sequence. For security, the thesis adopts a comparatively strict gesture-matching scheme in the access control system. The system consists of three parts: hand-shape class training, gesture key creation, and unlocking. Hand-shape class training uses HOG feature extraction and an SVM classifier to train on hand-shape class data. Gesture key creation records the motion trajectory of the hand coordinates and the sequence of hand-shape changes, and arranges the password sequence. In unlocking, the user's movement is identified as one of the action units in the database: the motion-trajectory sequence of an action unit is matched with the DTW algorithm, and the hand-shape sequence is scored with the NW algorithm. Finally, the number and order of the matched action units determine whether the unlocking conditions have been met.
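The unlocking stage names two sequence matchers without detailing them. The following is a minimal, self-contained sketch, not the thesis code: a plain DTW distance for hand-coordinate trajectories and a Needleman-Wunsch-style global alignment score for hand-shape ID sequences (assuming "NW" refers to Needleman-Wunsch), combined into an order-sensitive check over the stored password's action units. Thresholds, scoring weights, and the 2D point format are illustrative placeholders.

```python
# Minimal sketch (not the thesis implementation) of trajectory and shape-sequence matching.
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two trajectories of (x, y) points."""
    n, m = len(traj_a), len(traj_b)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # dp[i][j] = cost of aligning the first i points of A with the first j points of B
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = dist(traj_a[i - 1], traj_b[j - 1]) + min(
                dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

def nw_score(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two hand-shape ID sequences."""
    n, m = len(seq_a), len(seq_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

def unlocks(observed_units, password_units, dtw_threshold=50.0, nw_threshold=0):
    """Check that the observed action units match the stored password units in order."""
    if len(observed_units) != len(password_units):
        return False
    for (obs_traj, obs_shapes), (key_traj, key_shapes) in zip(observed_units, password_units):
        if dtw_distance(obs_traj, key_traj) > dtw_threshold:
            return False
        if nw_score(obs_shapes, key_shapes) < nw_threshold:
            return False
    return True
```

DTW tolerates speed differences between the enrolled and the performed trajectory, while the alignment score tolerates occasional misclassified hand shapes; in practice both thresholds would be tuned empirically (the thesis's Experiment 2 examines the DTW threshold).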
Table of contents: Abstract (in Chinese) i
Abstract (in English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vii
List of Tables vi
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 2
1.3 Research Scope 3
1.4 Research Limitations 4
1.5 Thesis Organization 5
Chapter 2 Related Work and Literature Review 6
2.1 Literature on Hand Tracking 6
2.1.1 Traditional Hand Tracking 6
2.1.2 Hand Tracking with Depth Cameras 7
2.2 Literature on Hand Shape Recognition 9
2.2.1 Hand Shape Feature Extraction 9
2.2.2 Feature Classifiers 11
2.3 Literature on Gesture Matching 13
2.4 Discussion of Related Work 15
Chapter 3 System Flow and Architecture 17
3.1 Gesture Lock System Flow 17
3.1.1 Overview of the Hand Shape Class Training Flow 18
3.1.2 Overview of the Gesture Key Creation Flow 19
3.1.3 Overview of the Unlocking Flow 20
3.2 Gesture Lock System Architecture 21
Chapter 4 Hand Shape Class Training 22
4.1 Hand Image Capture 22
4.1.1 Kinect Depth Image Stream 23
4.1.2 Initializing Gesture Detection 23
4.1.3 Tracking Hand Coordinates 24
4.1.4 Hand Region Image Extraction 25
4.1.5 Hand Image Normalization 27
4.2 Hand Shape Feature Extraction 29
4.3 Feature Classifier Training 31
4.4 Building Hand Shape Class Identifiers 31
Chapter 5 Building the Gesture Key 34
5.1 Trajectory Sequence Output 34
5.1.1 Locating the Initial Hand Coordinates 35
5.1.2 Updating the Current Hand Coordinates 35
5.1.3 Hand Coordinate Normalization 36
5.1.4 Checking Whether the Buffer Is Full 37
5.2 Hand Shape Sequence Output 38
5.3 Building Action Units 39
5.3.1 Starting Action Unit Creation 40
5.3.2 Trajectory Class Classification 40
5.3.3 Hand Shape Template Sampling 41
5.3.4 Building the Trajectory Class Index 44
5.4 Password Sequence Arrangement 45
Chapter 6 Unlocking Mechanism 46
6.1 Action Unit Buffer 46
6.2 Trajectory Sequence Matching 47
6.3 Hand Shape Template Sequence Matching 48
6.4 Weight Score Calculation 50
6.5 Password Sequence Matching 51
Chapter 7 Gesture Lock System Operation and User Interface 52
7.1 Gesture Lock Main Interface 52
7.2 Hand Shape Class Training Interface 54
7.3 Action Unit Creation Interface 55
7.4 User Editing Interface 56
7.5 Gesture Lock Locking Interface 57
Chapter 8 Experimental Results 58
8.1 Experimental Method and Environment 58
8.1.1 Experiment 1: Effect of Distance on Single Hand Shape Recognition Rate 59
8.1.2 Experiment 2: Effect of the DTW Threshold on Trajectory Matching 61
8.2 Experimental Results and Discussion 62
8.2.1 Results of Experiment 1: Effect of Distance on Single Hand Shape Recognition 62
8.2.2 Results of Experiment 2: Effect of the DTW Threshold on Trajectory Matching 63
Chapter 9 Conclusion and Future Work 67
9.1 Conclusion 67
9.2 Future Work 68
References 69
References: [1] L. Shi, Y. Wang, and J. Li, “A real time vision-based hand gestures recognition system,” ISICA, vol. 6382, no. 1, 2010, pp. 349-358.
[2] Y. Wang and B. Yuan, “A novel approach for human face detection from color images under complex background,” Pattern Recognition, vol. 34, no. 10, 2001, pp. 1983-1992.
[3] X. Zhu, J. Yang, and A. Waibel, “Segmenting hands of arbitrary color,” Proceedings of the 4th IEEE Intl. Conference on Automatic Face and Gesture Recognition, Grenoble, 2000, pp. 446-453.
[4] Y. Zhu, G. Xu, and D. J. Kriegman, “A real-time approach to the spotting, representation, and recognition of hand gestures for human-computer interaction,” Computer Vision and Image Understanding, vol. 85, no. 3, 2002, pp. 189-208.
[5] T. Starner, J. Weaver, and A. Pentland, “Real-time American sign language recognition using desk and wearable computer based video,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, 1998, pp. 1371-1375.
[6] L. Bretzner, I. Laptev, and T. Lindeberg, “Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering,” Proceedings of the 5th IEEE Intl. Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 2002, pp. 423-428.
[7] R. Y. Wang and J. Popovic, “Real-time hand-tracking with a color glove,” ACM Transactions on Graphics, vol. 28, no. 3, 2009, pp. 505-513.
[8] X. Liu and K. Fujimura, “Hand gesture recognition using depth data,” Proceedings of the 6th IEEE Intl. Conference on Automatic Face and Gesture Recognition, Seoul, Korea, 2004, pp. 529-534.
[9] H. L. Lee, C. L. Hsu, C. C. Chen, J. S. Taur, and C. W. Tao, “Real-time hand gesture controlled mouse using Kinect,” Proceedings of the 25th CVGIP, Nantou, Taiwan, 2012, pp. 189-197.
[10] C. Yang, Y. Jang, J. Beh, D. Han, and H. Ko, “Gesture recognition using hand tracking for contactless controller application,” Proceedings of the 2nd IEEE Intl. Conference on Consumer Electronics (ICCE), Las Vegas, NV, 2012, pp. 297-298.
[11] M. Tang, “Recognizing hand gestures with Microsoft’s Kinect,” available online at: http://www.stanford.edu/class/ee368/Project_11/Reports/Tang_Hand_Gesture_Recognition.pdf, 2011.
[12] M. J. Jones and J. M. Rehg, “Statistical color models with application to skin detection,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, 1999, pp. 638-646.
[13] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 2001, pp. 511-518.
[14] R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” Proceedings of the 5th IEEE Intl. Conference on Image Processing, Washington, DC, USA, 2002, pp. 900-903.
[15] Wikipedia, “Haar-like features,” available online at: http://en.wikipedia.org/wiki/Haar-like_features
[16] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 2005, pp. 886-893.
[17] W. T. Freeman and M. Roth, “Orientation histograms for hand gesture recognition,” Proceedings of the IEEE Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, 1995, pp. 296-304.
[18] A. Ramamoorthy, N. Vaswani, S. Chaudhury, and S. Banerjee, “Recognition of dynamic hand gestures,” Proceedings of Pattern Recognition Society, Grenoble, 2003, pp. 2069-2081.
[19] Wikipedia, “Hidden Markov model,” available online at: https://en.wikipedia.org/wiki/Hidden_Markov_model
[20] N. Dardas, Q. Chen, N. Georganas, and E. Petriu, “Hand gesture recognition using bag-of-features and multi-class support vector machine,” Proceedings of the 9th IEEE International Symposium on Haptic Audio-Visual Environments and Games (HAVE), Phoenix, AZ, 2010, pp. 1-5.
[21] C. W. Hsu and C. J. Lin, “A comparison of methods for multi-class support vector machines,” IEEE Transactions on Neural Networks, vol. 13, no. 2, 2002, pp. 415-425.
[22] Wikipedia, “Support vector machine,” available online at: http://en.wikipedia.org/wiki/Support_vector_machine
[23] N. Nakatsu, Y. Kambayashi, and S. Yajima, “The string-to-string correction problem,” Journal of the ACM, vol. 21, no. 1, 1974, pp. 168-173.
[24] E. Myers, “An O(ND) difference algorithm and its variations,” Algorithmica, vol. 1, no. 2, 1986, pp. 251-266.
[25] Wikipedia, “Longest common subsequence problem,” available online at: http://en.wikipedia.org/wiki/Longest_common_subsequence_problem
[26] S. Sempena, N. U. Maulidevi, and P. R. Aryan, “Human action recognition using Dynamic Time Warping,” Proceedings of the IEEE International Conference on Electrical Engineering and Informatics (ICEEI), Bandung, 2011, pp. 1-5.
[27] Wikipedia, “Dynamic time warping,” available online at: http://en.wikipedia.org/wiki/Dynamic_time_warping
[28] J. A. Zondag, T. Gritti, and V. Jeanne, “Practical study on real-time hand detection,” Proceedings of the 3rd IEEE Intl. Conference on Affective Computing and Intelligent Interaction and Workshops, Amsterdam, 2009, pp. 1-8.
Full-text access rights: Authorized for public release from 2016-07-29