
Research on Video Shot Classification Based on Locality-Sensitive and Discriminative Sparse Representation

Abstract  With the rapid development of multimedia technology, multimedia data, and video data in particular, are growing exponentially. Quickly and efficiently retrieving the desired video from massive video collections has therefore become very important, and such retrieval is generally carried out on the basis of video classification. Traditional video classification is title-based and completed by manual annotation, which is inefficient, so semantic video classification has emerged. The foundational task of semantic video classification is to classify video shots, which makes fast and effective shot classification methods especially important.
Practice has shown that sparse representation performs well in video classification. In traditional sparse representation methods, however, similar video features do not necessarily produce similar sparse representations. To improve classification accuracy, this thesis therefore proposes a video shot classification algorithm based on locality-sensitive and discriminative sparse representation: a discriminative loss function based on the Euclidean distance is introduced to optimize the construction of the sparse representation dictionary and further improve shot classification accuracy. The contents of this thesis are arranged as follows:
(1) Introduces an ordered-sample clustering algorithm based on artificial immunity, and on that basis studies video key-frame extraction based on artificial-immune ordered clustering.
(2) Introduces the characteristics of the weighted block color histogram, the local binary pattern (LBP), the gray-level co-occurrence matrix (GLCM) and radial Tchebichef moments; on this basis, a multi-feature-fusion approach to video feature extraction is adopted.
(3) Proposes a video shot classification algorithm based on locality-sensitive and discriminative sparse representation, which constrains the sparse representation and optimizes the construction of the dictionary (a hedged formulation sketch follows this list). Experiments show that the algorithm improves the accuracy of video shot classification.
(4) Designs and develops a prototype video shot classification system. Using an object-oriented design, the prototype system based on locality-sensitive and discriminative sparse representation is implemented and used to verify the effectiveness of the above methods.
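The exact objective is defined in Chapter 3 and is not reproduced in this abstract. As a rough, hedged sketch only, a locality-sensitive and discriminative dictionary-learning objective of the kind described above is often written as follows; the terms, weights and symbols below are assumptions for illustration, not the thesis's formula.

% Hedged sketch: Y are training features, D the learned dictionary, X the
% sparse codes, H the class-label matrix, W a linear classifier, and d_i a
% locality adaptor built from Euclidean distances between sample y_i and the
% atoms of D. The weights lambda, alpha, beta and the exact terms are assumptions.
\min_{D,\,W,\,X}\;
  \|Y - DX\|_F^2
  + \lambda \sum_{i} \|d_i \odot x_i\|_2^2
  + \alpha \|H - WX\|_F^2
  + \beta \|W\|_F^2,
\qquad
d_i = \exp\!\left(\frac{\operatorname{dist}(y_i, D)}{\sigma}\right)

Here the second term penalizes the use of atoms that are far (in Euclidean distance) from the sample, which gives locality sensitivity, and the third term is a Euclidean (Frobenius-norm) discriminative loss tying the codes X to the labels H; how the thesis actually defines and weights these terms is given in Chapter 3.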
Keywords: video shot classification, discriminative, sparse representation, locality-sensitive


Research on Video Shot Classification Based on Locality-Sensitive and Discriminative Sparse Representation
Abstract  With the rapid development of multimedia technology, multimedia data, and especially video data, are increasing rapidly. Retrieving the videos we need quickly is becoming more and more important, and in general such retrieval is based on classification. Traditional video classification is usually title-based and completed by manual annotation, which is inefficient, so approaches based on semantic video classification have emerged. The foundational task of semantic video classification is to classify video shots, so improving the efficiency of shot classification is increasingly important.
Practice has shown that sparse representation performs well in video classification. In traditional sparse representation, however, similar video features may not produce similar sparse representations. Therefore, to improve the accuracy of video classification, this thesis presents a video shot classification algorithm based on locality-sensitive and discriminative sparse representation: a discriminative loss function based on the Euclidean distance is introduced into the locality-sensitive sparse representation algorithm to optimize the construction of the dictionary and further improve shot classification accuracy.
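Classification with the learned dictionary is described in Section 3.3; the abstract does not spell out the decision rule, so the following is only a hedged sketch of a common sparse-representation-based classification rule (code the feature over the dictionary, then pick the class whose atoms reconstruct it best), with scikit-learn's orthogonal matching pursuit standing in for the thesis's own coding step and all parameter values chosen for illustration.

# Hedged sketch of a sparse-representation-based classification rule; this is a
# common SRC-style decision rule, not necessarily the one used in the thesis.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def classify_shot(feature, dictionary, atom_labels, n_nonzero_coefs=10):
    """feature: (d,) fused feature vector of one shot.
    dictionary: (d, K) matrix whose K columns (atoms) each carry a class label.
    atom_labels: (K,) array of class ids, one per atom."""
    # Sparse-code the feature over the whole dictionary (OMP stands in for the
    # thesis's locality-sensitive coding step).
    code = orthogonal_mp(dictionary, feature, n_nonzero_coefs=n_nonzero_coefs)
    # Assign the class whose atoms give the smallest reconstruction residual.
    best_class, best_residual = None, np.inf
    for c in np.unique(atom_labels):
        mask = atom_labels == c
        residual = np.linalg.norm(feature - dictionary[:, mask] @ code[mask])
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

In this sketch, a fused feature vector of the kind built in Chapter 2 would be passed as feature, and the dictionary learned in Chapter 3 as dictionary.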
The specific contents of this thesis are as follows:
(1) Introduces an ordered-sample clustering algorithm based on artificial immunity, and on that basis studies video key-frame extraction based on artificial-immune ordered clustering.
(2) Introduces the characteristics of the weighted block color histogram, the local binary pattern (LBP), the gray-level co-occurrence matrix (GLCM) and radial Tchebichef moments; on this basis, a multi-feature-fusion approach to video feature extraction is adopted (a feature-extraction sketch follows this list).
(3) Proposes a video shot classification algorithm based on locality-sensitive and discriminative sparse representation, which constrains the sparse representation and optimizes the construction of the dictionary. Experimental results show that the algorithm improves the classification accuracy of video shots.
(4) Designs and develops a prototype video shot classification system. Using an object-oriented design, a prototype video shot classification system based on locality-sensitive and discriminative sparse representation is implemented and used to verify the effectiveness of the above methods.
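To make the multi-feature fusion in item (2) concrete, the following is a minimal sketch that concatenates a block color histogram, an LBP histogram and a few GLCM statistics for one key frame using scikit-image. The block weights and the radial Tchebichef moments used in the thesis are omitted, and the grid size, bin counts and GLCM settings are illustrative assumptions, so this shows the fusion idea rather than the thesis's exact feature set.

# Hedged sketch of multi-feature fusion for one key frame. The weighted blocking
# and the radial Tchebichef moments from the thesis are omitted; all numeric
# settings below are illustrative choices.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def fused_features(rgb, gray, blocks=2, bins=16):
    """rgb: HxWx3 uint8 frame; gray: HxW uint8 grayscale version of the same frame."""
    feats = []
    # 1) Block color histogram: per-channel histograms over a blocks x blocks grid.
    h, w, _ = rgb.shape
    for by in range(blocks):
        for bx in range(blocks):
            patch = rgb[by * h // blocks:(by + 1) * h // blocks,
                        bx * w // blocks:(bx + 1) * w // blocks]
            for ch in range(3):
                hist, _ = np.histogram(patch[..., ch], bins=bins, range=(0, 256), density=True)
                feats.append(hist)
    # 2) LBP histogram (uniform patterns, 8 neighbours, radius 1 -> values 0..9).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    feats.append(lbp_hist)
    # 3) GLCM statistics at distance 1, angle 0.
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    feats.append(np.array([graycoprops(glcm, prop)[0, 0]
                           for prop in ("contrast", "homogeneity", "energy", "correlation")]))
    # Fusion by simple concatenation into one feature vector.
    return np.concatenate(feats)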
Keywords: video shot classification, discriminative, locality-sensitive, sparse representation
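The key-frame extraction in Chapter 2 relies on artificial-immune ordered clustering, which the abstract does not describe and which is not reproduced here. As a much simpler stand-in that keeps only the "ordered" aspect (frames are grouped with their temporal neighbours), the following sketch splits the frame sequence where the colour-histogram difference between consecutive frames is large and keeps the middle frame of each segment; the bin count and threshold are illustrative assumptions.

# Simplified stand-in for key-frame extraction; NOT the thesis's artificial-immune
# ordered clustering. Frames are segmented in temporal order at large histogram
# jumps, then one key frame is taken from the middle of each segment.
import numpy as np

def key_frames(frames, bins=32, threshold=0.4):
    """frames: list of HxWx3 uint8 arrays in temporal order."""
    def colour_hist(frame):
        h = np.concatenate([np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                            for c in range(3)]).astype(float)
        return h / h.sum()
    hists = [colour_hist(f) for f in frames]
    # Cut points where consecutive histograms differ strongly (L1 distance).
    cuts = [0] + [i for i in range(1, len(frames))
                  if np.abs(hists[i] - hists[i - 1]).sum() > threshold] + [len(frames)]
    # One key frame per segment: the temporally middle frame.
    return [frames[(start + end) // 2] for start, end in zip(cuts[:-1], cuts[1:])]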
Contents
Chapter 1  Introduction
1.1 Research background
1.2 Research status at home and abroad
1.3 Organization of the thesis
Chapter 2  Video shot preprocessing
2.1 Overview
2.2 Key-frame extraction
2.3 Feature extraction
2.3.1 Color features
2.3.2 LBP feature extraction
2.3.3 GLCM feature extraction
2.3.4 Radial Tchebichef moment feature extraction
2.4 Multi-feature fusion
2.5 Chapter summary
Chapter 3  Locality-sensitive and discriminative sparse representation
3.1 Overview
3.2 Locality-sensitive and discriminative sparse representation
3.2.1 Introduction of the discriminative loss function
3.2.2 Dictionary learning for locality-sensitive and discriminative sparse representation
3.3 Classification based on locality-sensitive and discriminative sparse representation
3.4 Analysis of experimental results on the TRECVID video dataset
3.4.1 Comparison of training dictionary sizes
3.4.2 Comparison of the recognition rates of the two algorithms on semantic concepts
3.5 Chapter summary
Chapter 4  Implementation of the video event feature extraction and selection prototype system
4.1 Overview
4.2 Development environment
4.3 Functional analysis of the system
4.4 Core class library of the system
4.4.1 Video key-frame extraction
4.4.2 Implementation of the video feature extraction classes
4.4.3 Implementation of the classification part
4.4.4 Video ..