Thesis title (Chinese): 適用於視障者辨識系統之特徵點過濾與選取方法
Thesis title (English): A Feature Point Filtering and Selection Method for the Visually Impaired Recognition System
Institution: National Taipei University of Technology
College: College of Electrical Engineering and Computer Science
Department: Department of Computer Science and Information Engineering
Graduation academic year: 104 (2015-2016)
Graduation semester: Second semester
Year of publication: 105 (2016)
Name (Chinese): 呂木村
Name (English): Mu-Cun Lu
Student ID: 103598017
Degree: Master's
Language: Chinese
Defense date: 2016/07/15
Advisor (Chinese): 張厥煒
Advisor (English): Chueh-Wei Chang
Committee members (Chinese): 奚正寧; 楊士萱
Committee members (English): Cheng-Ning Hsi; Shih-Hsuan Yang
Keywords (Chinese): 減少特徵點; 視障輔具; 影像辨識; SURF
Keywords (English): feature point reduction; visually impaired assistance; image recognition; SURF
Abstract (Chinese): In previous work, our research team proposed a wearable visual-assistance system for the inconveniences the visually impaired face in daily life, providing "signboard reminder", "road-condition warning", and "daily commodity identification" functions. The commodity-identification function recognizes objects with the scale- and rotation-invariant SURF (Speeded-Up Robust Features) algorithm. However, when features are extracted from an object, the system cannot tell whether the object is suitable for recognition; blindly storing every feature point in the database leaves some objects hard or even impossible to identify. Moreover, as the number of objects grows, the feature point database becomes very large and contains repeated feature points, which degrades recognition performance.
Therefore, this thesis proposes a feature point screening mechanism that efficiently reduces the number of feature points in four steps: (1) use feature point density to exclude objects unsuitable for recognition; (2) use feature matching to remove similar feature points within and between objects; (3) use feature point scale to select representative feature points; (4) use the positions of the feature points to make the selected set more evenly distributed, improving matching stability. Experimental results show that more than 50% of the feature points can be removed while recall and precision are maintained, and matching speed improves by 30%. Finally, this thesis presents a feature point screening system that lets sighted users build recognition data for the visually impaired and provides richer feedback during database construction.
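The four screening steps above can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's actual code: the thresholds `min_density` and `dup_dist`, the grid size, and the per-cell quota are all assumed values, and a 64-dimensional descriptor array stands in for SURF output.

```python
import numpy as np

def filter_keypoints(pts, scales, descs, img_w, img_h,
                     min_density=1e-4, dup_dist=0.25, grid=4, per_cell=8):
    """pts: (N, 2) pixel positions; scales: (N,); descs: (N, 64) descriptors.
    Returns indices of the surviving feature points, or None if the object
    is rejected outright (step 1)."""
    # Step 1: reject objects whose feature density is too low to recognize.
    if len(pts) / float(img_w * img_h) < min_density:
        return None

    # Step 2: drop near-duplicate descriptors (similar feature points).
    keep = []
    for i in range(len(descs)):
        if all(np.linalg.norm(descs[i] - descs[j]) > dup_dist for j in keep):
            keep.append(i)
    keep = np.array(keep)
    pts, scales = pts[keep], scales[keep]

    # Steps 3 and 4: within each spatial cell keep only the largest-scale
    # points, so survivors are representative and evenly distributed.
    cell_x = np.minimum((pts[:, 0] * grid / img_w).astype(int), grid - 1)
    cell_y = np.minimum((pts[:, 1] * grid / img_h).astype(int), grid - 1)
    selected = []
    for gy in range(grid):
        for gx in range(grid):
            idx = np.where((cell_x == gx) & (cell_y == gy))[0]
            best = idx[np.argsort(-scales[idx])][:per_cell]
            selected.extend(keep[best].tolist())
    return sorted(selected)
```

The grid-with-quota selection is one simple way to realize step 4's "more uniform distribution"; the thesis may use a different spatial criterion.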
Abstract (English): In our previous work, we presented a wearable system to ease inconveniences in the daily life of the visually impaired. The system recognizes signboards and daily commodities and detects obstacles. The commodity-identification function recognizes objects with the SURF (Speeded-Up Robust Features) algorithm, which is scale- and rotation-invariant. However, when the system extracts feature points from an object, it does not know whether the object is suitable for recognition and simply stores the feature points in the database. As a result, some objects become difficult or impossible to recognize. Furthermore, as the number of objects increases, the feature point database grows large and contains repeated feature points, which degrades recognition performance.
Therefore, this thesis proposes a feature point filtering mechanism that efficiently reduces the number of feature points. The process has four steps: (1) use feature point density to exclude objects unsuitable for recognition; (2) use feature matching to remove similar feature points within and between objects; (3) use feature point scale to select representative feature points; (4) use the spatial distribution of feature points to obtain a more uniform, and therefore more stable, matching result. According to the experimental results, more than 50% of the feature points can be removed and matching speed improved by 30% without sacrificing recall or precision. Finally, we design a feature point filtering system that lets sighted users build object data for the visually impaired and provides feedback while the data is being built.
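On the recognition side, query descriptors are compared against the (now smaller) database, which is where the 30% matching speedup comes from. A common formulation of this step, shown here as a hedged sketch with brute-force search standing in for the feature index of Chapter 5, is nearest-neighbour matching with a distance-ratio test; the `ratio` value is an assumed parameter:

```python
import numpy as np

def match_ratio_test(query, database, ratio=0.7):
    """Return (query_idx, database_idx) pairs that pass the ratio test."""
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)  # distance to every entry
        nn = np.argsort(dists)[:2]                    # two nearest neighbours
        # Accept only matches whose nearest neighbour is clearly unambiguous.
        if len(dists) >= 2 and dists[nn[0]] < ratio * dists[nn[1]]:
            matches.append((i, int(nn[0])))
    return matches
```

Because every database descriptor is visited per query point, halving the database roughly halves the matching work, which is consistent with the reported speedup.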
Table of contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Motivation
1.2 Objectives
1.3 Thesis Organization
Chapter 2 Related Work
2.1 Assistive Technology for the Visually Impaired
2.2 Feature Point Filtering
2.3 Scale-Invariant Features
Chapter 3 System Architecture
3.1 System Overview
3.2 Feature Point Screening Workflow
3.3 Feature Point Recognition Workflow
Chapter 4 Feature Point Screening
4.1 Feature Point Density
4.2 Feature Point Filtering: Inter-Object Similar Feature Points
4.3 Feature Point Filtering: Intra-Object Similar Feature Points
4.4 Feature Point Selection: Scale
4.5 Feature Point Selection: Position Distribution
Chapter 5 Feature Point Recognition
5.1 Building the Feature Index
5.2 Feature Point Matching
5.3 Match Result Verification
Chapter 6 Experimental Results
6.1 Experimental and System Environment
6.2 Results: Feature Point Density
6.3 Results: Inter-Object Feature Point Filtering
6.4 Results: Intra-Object Feature Point Filtering
6.5 Results: Scale-Based Feature Point Selection
6.6 Results: Scale and Position Distribution Selection
6.7 Results: Overall Performance Analysis
6.8 Results: Failure Cases
6.9 Results: System Interface
Chapter 7 Conclusion and Future Work
7.1 Conclusion
7.2 Future Work
References
References: [1] Department of Statistics, Ministry of Health and Welfare, Statistics on Persons with Disabilities. Source: http://www.mohw.gov.tw/cht/DOS/
[2] 張厥煒、黃翊庭, "穿戴式視障者定向輔助之視覺辨識系統," 2015資訊科技國際研討會暨民生電子論壇, Taichung, Taiwan, 2015, pp. 1098-1103.
[3] 張厥煒、黃翊庭, "穿戴式視障者陸標用品視覺輔助辨識系統," 前瞻科技與管理, vol. 6, no. 1, 2016.
[4] A. Aladrén, G. López-Nicolás, Luis Puig, and Josechu J. Guerrero, “Navigation Assistance for the Visually Impaired Using RGB-D Sensor with Range Expansion,” IEEE Systems Journal, vol.99, 2014, pp. 1-11.
[5] H. Takizawa, S. Yamaguchi, M. Aoyagi, N. Ezaki and S. Mizuno, “Kinect Cane: An Assistive System for the Visually Impaired Based on Three-dimensional Object Recognition,” IEEE/SICE International Symposium on System Integration (SII), Fukuoka, Japan, 2012, pp. 740-745.
[6] D. Fehr, A. Cherian, R. Sivalingam, S. Nickolay, V. Morellas and N. Papanikolopoulos, “Compact Covariance Descriptors in 3D Point Clouds for Object Recognition,” IEEE International Conference on Robotics and Automation, Minnesota, USA, 2012, pp. 1793-1798.
[7] Y. Fujiwara, T. Okamoto and K. Kondo, “SIFT Feature Reduction Based on Feature Similarity of Repeated Patterns,” Intelligent Signal Processing and Communications Systems (ISPACS), 2013, pp. 311-314.
[8] 林家儒, 運用顏色與SURF關鍵點特徵分類之快速多廣告看板計次系統, Master's thesis, National Taipei University of Technology, Taipei, 2010.
[9] 黃智澄, 針對視障者定向輔助之視覺辨識系統, Master's thesis, National Taipei University of Technology, Taipei, 2014.
[10] W.-T. Chu and C.-H. Lin, “Consumer Photo Management and Browsing Facilitated by Near-duplicate Detection with Feature Filtering,” Journal of Visual Communication and Image Representation, vol.22, no.3, 2010, pp. 256-268.
[11] C.-C. Chang and C.-J. Lin, “LIBSVM: a library for support vector machines,” https://www.csie.ntu.edu.tw/~cjlin/libsvm/, 2001.
[12] J.-S. Keum, H.-S. Lee and M. Hagiwara, “Mean Shift-based SIFT Keypoint Filtering for Region-of-Interest Determination,” Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), 2012, pp. 266-271.
[13] K. Yuasa and T. Wada, “Keypoint Reduction for Smart Image Retrieval,” IEEE International Symposium on Multimedia, 2013, pp. 351-358.
[14] T. Wada and Y. Mukai, “Fast Keypoint Reduction for Image Retrieval by Accelerated Diverse Density Computation,” IEEE 15th International Conference on Data Mining Workshops, 2015, pp. 102-107.
[15] O. Maron and T. Lozano-Perez, “A Framework for Multiple-Instance Learning,” Advances in Neural Information Processing System 10, 1997, pp. 570-576.
[16] O. Maron and A. Ratan, “Multiple-Instance Learning for Natural Scene Classification,” Proceedings 15th International Conference on Machine Learning, 1998, pp. 341-349.
[17] H. P. Moravec, “Towards Automatic Visual Obstacle Avoidance,” Proceedings of the 5th International Joint Conference on Artificial Intelligence, 1977, pp. 584.
[18] C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Proceedings of Alvey Vision Conference, 1988, pp. 147-151.
[19] D. G. Lowe, “Distinctive Image Features from Scale-invariant Keypoints,” International Journal of Computer Vision, vol.60, no.2, 2004, pp. 91-110.
[20] Y. Ke and R. Sukthankar, “PCA-SIFT: A More Distinctive Representation for Local Image Descriptors,” Proceedings Conference Computer Vision and Pattern Recognition, 2004, pp. 511-517.
[21] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” Proceedings of European Conference on Computer Vision, 2006, pp. 404-417.
[22] H. Bay, B. Fasel and L. Van Gool, “Interactive Museum Guide: Fast and Robust Recognition of Museum Object,” Proceedings of the First International Workshop on Mobile Vision, 2006.
[23] U. Park, J. Park and A. K. Jain, “Robust Keypoint Detection Using Higher-Order Scale Space Derivatives: Application to Image Retrieval,” IEEE Signal Processing Letters, vol.21, no.8, 2014, pp. 962-965.
[24] S. Ehsan, N. Kanwal, A. F. Clark and K. D. McDonald-Maier, “An Algorithm for the Contextual Adaption of SURF Octave Selection With Good Matching Performance: Best Octaves,” IEEE Transactions on Image Processing, vol.21, no.1, 2012, pp. 297-304.
[25] C. Silpa-Anan and R. Hartley, “Optimised KD-trees for fast image descriptor matching,” Computer Vision and Pattern Recognition, 2008, pp. 1-8.
[26] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, 2005, pp. 1615-1630.
Full-text access: Authorized for public release from 2019-08-11.