Michael S. Ryoo
Ph.D.

Visiting Faculty, Google Brain

Assistant Professor
Department of Computer Science
Department of Intelligent Systems Engineering
Indiana University Bloomington

Founder & CTO, EgoVid Inc.

Contact
+1-812-855-9190
mryoo-at-indiana.edu or mryoo-at-egovid.com

I am on leave from Indiana University and visiting Google Brain as of Fall 2018.

I am an Assistant Professor in the Department of Computer Science (CS) and the Department of Intelligent Systems Engineering (ISE) at Indiana University. I also hold an adjunct position as a Research Affiliate at NASA's Jet Propulsion Laboratory (JPL), California Institute of Technology. Before joining IU, I was a staff researcher in the Robotics Section of JPL from October 2011 to July 2015. Prior to that, I completed my national service at ETRI, Korea, working as a research scientist. I received my Ph.D. from the University of Texas at Austin in 2008 and my B.S. from the Korea Advanced Institute of Science and Technology (KAIST) in 2004.


Recent News
2018/11: Research on neural architecture search for video understanding: Evolving Space-Time Neural Architectures for Videos
2018/09: A new privacy-preserving activity recognition work at ECCV 2018: Learning to Anonymize Faces for Privacy Preserving Action Detection
We also gave a real-time demo of this work at the conference.
2018/06: Organized a Tutorial on Human Activity Recognition at CVPR 2018, together with Greg Mori and Kris Kitani.
2018/05: New research on deep reinforcement learning for robots with a learned environment model: Learning Real-World Robot Policies by Dreaming.
2018/03: A new paper on detecting multiple activities in continuous videos at CVPR 2018, achieving state-of-the-art performance:
Learning Latent Super-Events to Detect Multiple Activities in Videos. Its source code is now available at [github].
2017/09: Research on deep learning for robotics: Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression
The paper appeared at IROS 2017 [video]. It won the Best Paper Award at the CVPR 2017 Workshop on Deep Learning for Robot Vision.
2017/02: Presented a paper on Privacy-Preserving Human Activity Recognition from Extreme Low Resolution at AAAI 2017.
2016/06: Organized the 4th workshop on Egocentric (First-Person) Vision at CVPR 2016 with Kris Kitani, Yong Jae Lee, and Yin Li.
2016/05: Won the Best Paper Award in Robot Vision at ICRA 2016.
2015/03: My robot-centric activity prediction paper was one of the two nominees for the Best Enabling Technology Award at HRI 2015.

Curriculum Vitae [pdf]


Publications [by topic] [by type] [by year]

Selected recent publications

  • A. Piergiovanni, A. Angelova, A. Toshev, and M. S. Ryoo, "Evolving Space-Time Neural Architectures for Videos", arXiv:1811.10636. [arXiv]
  • A. Piergiovanni and M. S. Ryoo, "Representation Flow for Action Recognition", arXiv:1810.01455. [arXiv]
  • A. Piergiovanni, A. Wu, and M. S. Ryoo, "Learning Real-World Robot Policies by Dreaming", arXiv:1805.07813. [arXiv] [project]
  • A. Piergiovanni and M. S. Ryoo, "Temporal Gaussian Mixture Layer for Videos", arXiv:1803.06316. [arXiv]
  • Z. Ren, Y. J. Lee, and M. S. Ryoo, "Learning to Anonymize Faces for Privacy Preserving Action Detection", ECCV 2018. [arXiv] [project]
  • A. Piergiovanni and M. S. Ryoo, "Learning Latent Super-Events to Detect Multiple Activities in Videos", CVPR 2018. [arXiv] [github_code]
  • M. S. Ryoo, K. Kim, and H. J. Yang, "Extreme Low Resolution Activity Recognition with Multi-Siamese Embedding Learning", AAAI 2018. [arXiv]
  • J. Lee and M. S. Ryoo, "Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression", IROS 2017. [arXiv] [video]
  • C. Fan, J. Lee, M. Xu, K. K. Singh, Y. J. Lee, D. J. Crandall, and M. S. Ryoo, "Identifying First-person Camera Wearers in Third-person Videos", CVPR 2017. [arXiv]
  • A. Piergiovanni, C. Fan, and M. S. Ryoo, "Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters", AAAI 2017. [arXiv] [github_code]
Google Scholar page: Michael S. Ryoo

Datasets

MLB-YouTube dataset: an activity recognition dataset with over 42 hours of 2017 MLB post-season baseball videos.
JPL-Interaction dataset: a robot-centric first-person video dataset.
DogCentric Activity dataset: a first-person video dataset captured from a dog's viewpoint, using cameras mounted on dogs.
UT-Interaction dataset: a dataset containing continuous/segmented videos of human-human interactions.

Lab members

AJ Piergiovanni (CS PhD student)
Alex Seewald (CS PhD student)
Alan Wu (Engineering PhD student)
Maria Soledad Elli (CS MS student)

Teaching

B457/I400: Intro to Computer Vision (Spring 2018)
B659/I590: Vision for Intelligent Robotics (Fall 2017)

Updated 12/2018