Analyzing Global Feature for Duplicate Video Retrieval using CNN and PCA
N. Gayathri1, K. Mahesh2

1N. Gayathri, Ph.D. Scholar, Department of Computer Applications, Alagappa University, Karaikudi (Tamil Nadu), India.

2Dr. K. Mahesh, Professor, Department of Computer Applications, Alagappa University, Karaikudi (Tamil Nadu), India.

Manuscript received on 25 February 2020 | Revised Manuscript received on 05 March 2020 | Manuscript Published on 15 March 2020 | PP: 100-105 | Volume-9 Issue-4S2 March 2020 | Retrieval Number: D10010394S220/2020©BEIESP | DOI: 10.35940/ijitee.D1001.0394S220

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license.

Abstract: Duplicate video retrieval has recently attracted considerable research interest owing to the vast number of videos available online. It has extensive applications, including online video monitoring, copyright protection, and automatic video tagging. Local features constitute the primary building blocks of most video retrieval algorithms, so most researchers rely on local information for feature representation. However, such local representations discard prominent information about the global distribution, and the discriminative power of local descriptors is further diminished by feature quantization. The ultimate goal is to use global features to categorize similar keyframes into the same class, which is essential for enhancing video retrieval performance. Here, CNN features capture the global geometric distribution of a video, from which discrete features are derived for computation. Discretization is performed with principal component analysis (PCA); the resulting features preserve geometric transformations with reduced noise. An integration strategy based on k-NN then merges these features with global video retrieval (VR) features to enhance recognition accuracy. Experiments on publicly available datasets show that the proposed model outperforms existing approaches in VR applications.

Keywords: Video Retrieval, Video Tagging, CNN, Global Geometric Features, Discretization, k-NN.
Scope of the Article: Knowledge Representation and Retrievals
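The pipeline outlined in the abstract (CNN global features, PCA-based discretization, k-NN grouping of similar keyframes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN descriptors are stand-in random vectors, and the dimensions, number of components, and neighbor count are assumed values for demonstration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in for global CNN descriptors of video keyframes
# (e.g., pooled activations); real features would come from a trained CNN.
n_frames, feat_dim = 200, 512
features = rng.normal(size=(n_frames, feat_dim))
labels = rng.integers(0, 4, size=n_frames)  # hypothetical video-class labels

# Discretization/compaction step: PCA reduces the global features to a
# compact, lower-noise representation.
pca = PCA(n_components=32)
compact = pca.fit_transform(features)

# k-NN integration: similar keyframes are assigned to the same class
# based on their compact global features.
knn = KNeighborsClassifier(n_neighbors=5).fit(compact, labels)

# Classify a query keyframe by projecting it into the same PCA space.
query = rng.normal(size=(1, feat_dim))
pred = knn.predict(pca.transform(query))
print(compact.shape, pred.shape)
```

In a real system, the stand-in features would be replaced by CNN activations extracted per keyframe, and retrieval would rank videos by the similarity of their PCA-compacted descriptors rather than predicting a fixed class label.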