Michael S. Ryoo
Ph.D.

Visiting Faculty, Google Brain

Assistant Professor
Department of Intelligent Systems Engineering
Department of Computer Science
Indiana University Bloomington

Founder & CTO, EgoVid Inc.

Contact
+1-812-855-9190
mryoo-at-indiana.edu or mryoo-at-egovid.com

I am on leave at Google Brain as of Fall 2018.

I am an Assistant Professor in the Department of Intelligent Systems Engineering at Indiana University. I also hold an adjunct position as a Research Affiliate at NASA's Jet Propulsion Laboratory (JPL), California Institute of Technology. Before joining IU, I was a staff researcher in the Robotics Section of JPL from October 2011 to July 2015. Prior to that, I completed my national service at ETRI, Korea, working as a research scientist. I received my Ph.D. from the University of Texas at Austin in 2008 and my B.S. from the Korea Advanced Institute of Science and Technology (KAIST) in 2004.


Recent News
2018/09: New privacy-preserving activity recognition work at ECCV 2018: Learning to Anonymize Faces for Privacy Preserving Action Detection.
We also presented a real-time demo at the conference.
2018/06: Organized a Tutorial on Human Activity Recognition at CVPR 2018, together with Greg Mori and Kris Kitani.
2018/05: New research on deep reinforcement learning for robots with learned environment models: Learning Real-World Robot Policies by Dreaming.
2018/03: A new paper on detecting multiple activities in continuous videos at CVPR 2018, achieving state-of-the-art performance:
Learning Latent Super-Events to Detect Multiple Activities in Videos. Its source code is now available at [github].
2017/09: Research on deep learning for robotics: Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression.
The paper appeared at IROS 2017 [video] and won the Best Paper Award at the CVPR 2017 Workshop on Deep Learning for Robot Vision.
2017/02: Presented a paper on Privacy-Preserving Human Activity Recognition from Extreme Low Resolution at AAAI 2017.
2016/06: Organized the 4th workshop on Egocentric (First-Person) Vision at CVPR 2016 with Kris Kitani, Yong Jae Lee, and Yin Li.
2016/05: Won the Best Paper Award in Robot Vision at ICRA 2016.
2015/03: My robot-centric activity prediction paper was one of the two nominees for the Best Enabling Technology Award at HRI 2015.

Curriculum Vitae [pdf]


Publications [by topic] [by type] [by year]

List of selected publications

  • A. Piergiovanni and M. S. Ryoo, "Representation Flow for Action Recognition", arXiv:1810.01455, October 2018. [arXiv]
  • A. Piergiovanni, A. Wu, and M. S. Ryoo, "Learning Real-World Robot Policies by Dreaming", arXiv:1805.07813, May 2018. [arXiv] [project]
  • A. Piergiovanni and M. S. Ryoo, "Temporal Gaussian Mixture Layer for Videos", arXiv:1803.06316, March 2018. [arXiv]
  • Z. Ren, Y. J. Lee, and M. S. Ryoo, "Learning to Anonymize Faces for Privacy Preserving Action Detection", European Conference on Computer Vision (ECCV), September 2018. [arXiv] [project]
  • A. Piergiovanni and M. S. Ryoo, "Learning Latent Super-Events to Detect Multiple Activities in Videos", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. [arXiv] [github_code]
  • J. Lee and M. S. Ryoo, "Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017. [arXiv] [video]
  • C. Fan, J. Lee, M. Xu, K. K. Singh, Y. J. Lee, D. J. Crandall, and M. S. Ryoo, "Identifying First-person Camera Wearers in Third-person Videos", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. [arXiv]
  • A. Piergiovanni, C. Fan, and M. S. Ryoo, "Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters", the 31st AAAI Conference on Artificial Intelligence (AAAI), February 2017. [arXiv] [github_code]
  • M. S. Ryoo, B. Rothrock, C. Fleming, and H. J. Yang, "Privacy-Preserving Human Activity Recognition from Extreme Low Resolution", the 31st AAAI Conference on Artificial Intelligence (AAAI), February 2017. [arXiv]
  • M. S. Ryoo, B. Rothrock, and L. Matthies, "Pooled Motion Features for First-Person Videos", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. [arXiv] [github_code]
  • M. S. Ryoo, T. J. Fuchs, L. Xia, J. K. Aggarwal, and L. Matthies, "Robot-Centric Activity Prediction from First-Person Videos: What Will They Do to Me?", ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 2015. [pdf] [dataset]
  • M. S. Ryoo and L. Matthies, "First-Person Activity Recognition: What Are They Doing to Me?", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013. [pdf] [dataset] [video]
Google Scholar: Michael S. Ryoo

Datasets

MLB-YouTube dataset: an activity recognition dataset with over 42 hours of 2017 MLB post-season baseball videos.
JPL-Interaction dataset: a robot-centric first-person video dataset.
DogCentric Activity dataset: a first-person video dataset recorded from cameras mounted on dogs.
UT-Interaction dataset: a dataset containing continuous/segmented videos of human-human interactions.

Lab members

AJ Piergiovanni (CS PhD student)
Alex Seewald (CS PhD student)
Alan Wu (Engineering PhD student)
Maria Soledad Elli (CS MS student)

Teaching

B457/I400: Intro to Computer Vision (Spring 2018)
B659/I590: Vision for Intelligent Robotics (Fall 2017)

Updated 07/2018