Academic Talk: A Discriminative CNN Representation for Untrimmed Videos
Published: 2016-10-17

Title: A Discriminative CNN Representation for Untrimmed Videos

Speaker: Yi Yang, Associate Professor, University of Technology Sydney

Time: Wednesday, October 19, 2016, 14:00-16:00

Venue: Room 3224, Building 3 (Block B2), Institute of Information Engineering, Chinese Academy of Sciences

  Abstract  

I will talk about a discriminative video representation for event detection over large-scale video datasets when only limited hardware resources are available. The focus of this work is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame-level static descriptors can be extracted by existing CNN toolkits. The work makes two contributions to the inference of a CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame-level static features, we show that performance can be significantly improved by adopting an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches the visual information while remaining computationally affordable. The integration of the two contributions yields new state-of-the-art performance in event detection on the largest available video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% on the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% on the TRECVID MEDTest 13 dataset.
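To make the first contribution concrete, the sketch below contrasts mean pooling of frame-level CNN descriptors with a VLAD-style encoding, one common choice of richer aggregation for local descriptors. This is a minimal illustration, not the talk's exact pipeline: the descriptor dimension, the 32-word codebook, and the use of scikit-learn's KMeans are assumptions made here for the example.

  import numpy as np
  from sklearn.cluster import KMeans

  def average_pool(frame_feats):
      # Baseline: mean-pool all frame descriptors into one video vector.
      return frame_feats.mean(axis=0)

  def vlad_encode(frame_feats, kmeans):
      # VLAD: for each codeword, accumulate the residuals of the frame
      # descriptors assigned to it, then concatenate and normalize.
      k, d = kmeans.cluster_centers_.shape
      assign = kmeans.predict(frame_feats)
      vlad = np.zeros((k, d), dtype=np.float64)
      for i in range(k):
          members = frame_feats[assign == i]
          if len(members):
              vlad[i] = (members - kmeans.cluster_centers_[i]).sum(axis=0)
      vlad = vlad.ravel()
      vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))  # power normalization
      norm = np.linalg.norm(vlad)
      return vlad / norm if norm > 0 else vlad

  # Toy usage (synthetic data standing in for real CNN features):
  # 300 frames with 512-d descriptors, encoded against a 32-word codebook.
  rng = np.random.default_rng(0)
  feats = rng.standard_normal((300, 512)).astype(np.float32)
  codebook = KMeans(n_clusters=32, n_init=4, random_state=0).fit(feats)
  print(average_pool(feats).shape)        # (512,)
  print(vlad_encode(feats, codebook).shape)  # (16384,) = 32 x 512

The point of the comparison is that mean pooling collapses all temporal variation into a single 512-d vector, whereas the residual encoding preserves how descriptors distribute around the codebook, producing a higher-dimensional but far more discriminative video-level representation.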

  Short Bio 

Yi Yang is an Associate Professor and Deputy Head of School with the School of Software, University of Technology Sydney. He received the Ph.D. degree in computer science from Zhejiang University in 2010. He was a Post-Doctoral Research Fellow with the School of Computer Science, Carnegie Mellon University, from 2011 to 2013. His research interests include multimedia and computer vision.
