Michael S. Ryoo
Ph.D.
Assistant Professor
Department of Computer Science and Informatics
School of Informatics and Computing
Indiana University Bloomington

Founder & CTO, EgoVid Inc.

Contact
+1-812-855-9190
mryoo-at-indiana.edu or mryoo-at-egovid.com

I am an Assistant Professor in the School of Informatics and Computing at Indiana University. I am also a Research Affiliate of NASA's Jet Propulsion Laboratory (JPL), California Institute of Technology. Before joining IU, I was a Research Technologist in the Robotics Section of JPL from October 2011 to July 2015. Prior to that, I completed my national service at ETRI, Korea, working as a research scientist. I received my Ph.D. from the University of Texas at Austin in 2008 and my B.S. from the Korea Advanced Institute of Science and Technology (KAIST) in 2004.


Recent News
2017/03: New research on deep robot learning: Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression.
Example videos can be viewed at [example_robot_demo_video].
2017/03: One paper to appear at ICRA 2017 and one paper to appear at CVPR 2017. Details will be available soon.
2017/02: Presented a paper on Privacy-Preserving Human Activity Recognition from Extreme Low Resolution at AAAI 2017.
2017/02: Presented a paper on Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters at AAAI 2017.
2016/06: Organized the 4th Workshop on Egocentric (First-Person) Vision at CVPR 2016 with Kris Kitani, Yong Jae Lee, and Yin Li.
2016/05: Won the Best Vision Paper Award at ICRA 2016.
2015/03: My robot-centric activity prediction paper was one of two nominees for the Best Enabling Technology Award at HRI 2015.

Curriculum Vitae [pdf]


Publications [by topic] [by type] [by year]

Selected recent publications

  • J. Lee and M. S. Ryoo, "Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression", arXiv:1703.01040, March 2017. [arXiv_preprint] [demo_video]
  • A. Piergiovanni, C. Fan, and M. S. Ryoo, "Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters", the 31st AAAI Conference on Artificial Intelligence (AAAI), February 2017. [arXiv_preprint] [source_code]
  • M. S. Ryoo, B. Rothrock, C. Fleming, and H. J. Yang, "Privacy-Preserving Human Activity Recognition from Extreme Low Resolution", the 31st AAAI Conference on Artificial Intelligence (AAAI), February 2017. [arXiv_preprint]
  • M. S. Ryoo, B. Rothrock, and L. Matthies, "Pooled Motion Features for First-Person Videos", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. [arXiv_preprint] [source_code]
  • M. S. Ryoo, T. J. Fuchs, L. Xia, J. K. Aggarwal, and L. Matthies, "Robot-Centric Activity Prediction from First-Person Videos: What Will They Do to Me?", ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 2015 (full paper). [pdf] [dataset]
  • M. S. Ryoo and L. Matthies, "First-Person Activity Recognition: What Are They Doing to Me?", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013. [pdf] [video] [dataset]
  • M. S. Ryoo, "Human Activity Prediction: Early Recognition of Ongoing Activities from Streaming Videos", International Conference on Computer Vision (ICCV), November 2011. [pdf] [results]
Google Scholar: Michael S. Ryoo

Datasets

JPL-Interaction dataset: a robot-centric first-person video dataset.
DogCentric Activity dataset: a first-person video dataset captured from cameras mounted on dogs.
UT-Interaction dataset: a dataset containing continuous/segmented videos of human-human interactions.

Lab members

AJ Piergiovanni (PhD student)
Alex Seewald (PhD student)
Jangwon Lee (PhD student, co-advised with Selma Sabanovic and David Crandall)
Maria Soledad Elli (MS student)

Teaching

I400/B457: Intro to Computer Vision (Spring 2017)
I590/B659: Vision for Intelligent Robotics (Fall 2016)
I400/B490: Intro to Computer Vision (Spring 2016)
I590/B659: Vision for Intelligent Robotics (Fall 2015)

Updated 02/04/2017