3rd Workshop on Egocentric (First-person) Vision

In conjunction with CVPR 2014


Paper/Abstract Submission

April 8, 2014

Acceptance Notification

May 3, 2014 

One-day Workshop

June 28, 2014 (location: Room C113-114), starting at 9:00 AM




Chieko Asakawa

(IBM Research)

Steve Mann

(U. of Toronto)

Yaser Sheikh



Egocentric vision provides a unique perspective of the visual world that is inherently human-centric. Since egocentric cameras are mounted on the user (typically on the user's head), they are naturally primed to gather visual information from our everyday interactions. We believe that this human-centric characteristic of egocentric vision can have a large impact on the way we approach central computer vision tasks such as visual detection, recognition, prediction and socio-behavioral analysis.

By taking advantage of the first-person point-of-view paradigm, there have been recent advances in areas such as personalized video summarization, understanding concepts of social saliency, activity analysis with inside-out cameras (an inward-facing camera capturing eye gaze paired with an outward-facing scene camera), recognizing human interactions, and modeling focus of attention. However, in many ways we as a community are only beginning to understand the full potential (and limitations) of the first-person paradigm. In the workshop, we will bring together researchers to discuss these emerging topics.

To this end, we invite submissions of papers (4-8 pages) and extended abstracts (2 pages, ongoing or published work) in all fields of vision that explore the egocentric perspective.

The Best Paper Award, sponsored by Intel, will be presented to the paper with the highest potential impact.






9:00 - 9:45


Steve Mann
(Univ. of Toronto)

Egography: Egocentric Videographic/Photographic Wearable Computer Vision for Personal Imaging with Wearable Cameras

9:45 - 10:30


Chieko Asakawa

Can a Blind Person See Your World?

10:30 - 12:00

Spotlights and Poster Session A

S Narayan, M Kankanhalli, K Ramakrishnan

Action and Interaction Recognition in First-person videos

Y Liu, Y Jang, W Woo, TK Kim

Video-based Object Recognition using Novel Set-of-Sets Representations

V Chandrasekhar, W Min, L Xiaoli, C Tan, B Mandal, L Li, JH Lim

Efficient Retrieval from Large-Scale Egocentric Visual Data Using a Sparse Graph Representation

C Tan, H Goh, V Chandrasekhar, L Li, JH Lim

Understanding the Nature of First-Person Videos: Characterization and Classification using Low-Level Features

S Bambach, S Lee, D Crandall, J Franchak, C Yu

Best Paper Award

This Hand Is My Hand: A Probabilistic Approach to Hand Disambiguation in Egocentric Video

K Matsuo, K Yamada, S Ueno, S Naito

An Attention-based Activity Recognition for Egocentric Video

J Barker, J Davis

Temporally-Dependent Dirichlet Process Mixtures for Egocentric Video Segmentation

TS Leung, G Medioni

Visual Navigation Aid for the Blind in Dynamic Environments

Y Hoshen, G Ben-Artzi, S Peleg

Wisdom of the Crowd in Egocentric Video Curation

S Alletto, G Serra, S Calderara, F Solera, R Cucchiara

From Ego to Nos-vision: Detecting Social Relationships in First-Person Views

12:00 - 1:15

Lunch Break

1:15 - 2:00


Yaser Sheikh

A Measure and Theory of 3D Joint Attention from First Person Cameras

2:00 - 4:00

Spotlights and Poster Session B

A Betancourt, M López, C Regazzoni, M Rauterberg

A Sequential Classifier for Hand Detection in the Framework of Egocentric Vision

J Li

Eye-Model-Based Gaze Estimation by RGB-D Camera

M Moghimi, A Murillo, S Belongie

Experiments on an RGB-D Wearable Vision System for Egocentric Activity Recognition

H Pirsiavash, D Ramanan

Parsing videos of actions with segmental grammars

M Higuchi, K Kitani, Y Sato

Estimating Relative Social Status from Face-to-Face Interactions using First-person Vision

YH Lee, G Medioni

Wearable RGB-D Navigation System for the Blind

S Yeung, A Fathi, L Fei-Fei

VideoSET: Video Summary Evaluation Toolkit

Y Iwashita, A Takamine, R Kurazume, MS Ryoo

First-Person Activity Recognition from Animal Videos

A Saran, K Kitani

2D Hand Parsing for Egocentric Gesture Recognition

V Buso, J Benois-Pineau, I Gonzalez-Diaz

Object recognition in egocentric videos with saliency-based non uniform sampling and variable resolution space for features selection

G Bourmaud, R Megret, A Giremus, Y Berthoumieu

Indoor trajectory estimation from wearable camera for activity monitoring

M Okamoto, Y Kawano, K Yanai

Summarization of Egocentric Moving Videos for Generating Walking Route Guidance

G Rogez, M Khademi, JS Supancic, JMM Montiel, D Ramanan

3D Hand Pose Detection in Egocentric RGB-D Images

H Fujiyoshi, M Kimura, S Shimizu, Y Yamauchi, T Yamashita

3-D Gaze Scan Path by Inside-out Camera System

R Templeman, M Korayem, A Kapadia, D Crandall

PlaceAvoider: Steering First-Person Cameras away from Sensitive Spaces

2:00 - 4:00

Demo Session

Shue-Ching, B Mandal, V Chandrasekhar, C Tan, L Li, JH Lim

Real-time face detection and recognition on Google Glass

3:45 - 4:00


Intel Best Paper Award announcement



Kris Kitani 

Yong Jae Lee 
(UC Berkeley)

Michael Ryoo 

Alireza Fathi 

workshop contact:

Program Committee:

Serge Belongie (Cornell Tech)

Vishnu Boddeti (CMU)

Chao-Yeh Chen (Univ. of Texas at Austin)

Samarjit Das (Bosch Research)

Hironobu Fujiyoshi (Chubu Univ.)

Sung Ju Hwang (Disney Research)

Laurent Itti (USC)

Nebojsa Jojic (MSR)

Christopher Kanan (JPL)

Adriana Kovashka (Univ. of Texas at Austin)

Walterio Mayol-Cuevas (Univ. of Bristol)

Remi Megret (Univ. of Bordeaux)

Yair Movshovitz-Attias (CMU)

Michael Maire (Caltech)

Ana Cristina Murillo (Univ. of Zaragoza)

Hyun Soo Park (CMU)

Hamed Pirsiavash (MIT)

Xiaofeng Ren (Amazon)

Yoichi Sato (Univ. of Tokyo)

Takaaki Shiratori (MSRA)

Hyun Oh Song (UC Berkeley)

Yusuke Sugano (Univ. of Tokyo)

Stella Yu (ICSI / UC Berkeley)

Lu Zheng (CUHK)


Platinum sponsor: