Computer Science Seminar - University of Houston

Discovering Semantic Structure for Image and Video Browsing

When: Friday, April 29, 2013
Where: PGH 232
Time: 11:00 AM

Speaker: Dr. Kristen Grauman, University of Texas at Austin

Host: Dr. Shishir Shah

Widespread visual sensors and unprecedented connectivity have left us awash in visual data, from online photo collections and home videos to news footage, medical images, and surveillance feeds. How can we efficiently browse image and video collections based on semantically meaningful criteria? How can we bring order to the data, beyond manually defined keyword tags? We are exploring these questions in our recent work on interactive visual search and summarization.

I will first present our work on automatic video summarization. Given a long video, the goal is to produce a short storyboard summary. Whereas existing methods define sampling-based objectives (e.g., maximizing diversity in the output summary), we take a "story-driven" approach that predicts the high-level importance of objects and models the influence between subevents. We show this leads to substantially more accurate summaries of egocentric video, allowing a viewer to quickly grasp the gist of a long video. Turning to visual search, I will then present a novel form of interactive feedback in which a user helps pinpoint the content of interest by making visual comparisons between the envisioned target and reference images. The approach relies on a powerful mid-level representation of interpretable relative attributes to connect the user's descriptions to the system's internal features. Whereas traditional relevance feedback limits input to coarse binary labels, the proposed WhittleSearch lets a user state precisely what about an image is relevant. We show this allows the system to converge on the desired content more rapidly.
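To make the "whittling" idea concrete, here is a minimal, hypothetical sketch of how relative-attribute comparisons can prune a candidate set. It is not the actual WhittleSearch implementation: the attribute scores, the Feedback record, and the whittle_candidates function are illustrative assumptions, standing in for the system's learned relative-attribute predictors.

    # Hypothetical sketch of relative-attribute feedback filtering.
    # Assumption: each image has precomputed scalar scores for a few
    # nameable attributes (e.g., "formal", "shiny"); these would come
    # from learned relative-attribute predictors in a real system.

    from dataclasses import dataclass

    @dataclass
    class Feedback:
        reference_id: int     # reference image the user compared against
        attribute: str        # attribute named by the user, e.g., "formal"
        target_is_more: bool  # True: target has MORE of the attribute than the reference

    def whittle_candidates(attribute_scores, feedback_list):
        """Keep only images consistent with every relative comparison.

        attribute_scores: dict image_id -> dict attribute -> float score
        feedback_list:    list of Feedback comparisons from the user
        """
        candidates = set(attribute_scores)
        for fb in feedback_list:
            ref_score = attribute_scores[fb.reference_id][fb.attribute]
            if fb.target_is_more:
                candidates = {i for i in candidates
                              if attribute_scores[i][fb.attribute] > ref_score}
            else:
                candidates = {i for i in candidates
                              if attribute_scores[i][fb.attribute] < ref_score}
        return candidates

    # Toy usage: three images, two attributes each.
    scores = {
        0: {"formal": 0.2, "shiny": 0.9},
        1: {"formal": 0.6, "shiny": 0.4},
        2: {"formal": 0.8, "shiny": 0.1},
    }
    feedback = [Feedback(reference_id=0, attribute="formal", target_is_more=True)]
    print(whittle_candidates(scores, feedback))  # {1, 2}

Each comparison removes every image inconsistent with the stated relation, so a handful of precise statements can narrow the search far faster than coarse relevant/irrelevant labels.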

This is joint work with Adriana Kovashka, Yong Jae Lee, Devi Parikh, and Zheng Lu.

Bio:
Kristen Grauman is an Associate Professor in the Department of Computer Science at the University of Texas at Austin. Her research in computer vision and machine learning focuses on visual search and object recognition. Before joining UT-Austin in 2007, she received her Ph.D. from the EECS department at MIT, where she worked in the Computer Science and Artificial Intelligence Laboratory. She is an Alfred P. Sloan Research Fellow and Microsoft Research New Faculty Fellow, a recipient of the NSF CAREER and ONR Young Investigator awards, and the recipient of the 2013 Computers and Thought Award from the International Joint Conference on Artificial Intelligence. She and her collaborators were recognized with the CVPR Best Student Paper Award in 2008 for their work on hashing algorithms for large-scale image retrieval, and with the Marr Best Paper Prize at ICCV in 2011 for their work on modeling relative visual attributes.