Michael S. Ryoo
Ph.D.

SUNY Empire Innovation Associate Professor
AI Institute; Department of Computer Science
Stony Brook University

Robotics at Google
Google Brain


Contact
Instructions for students
mryoo-at-cs.stonybrook.edu

In September 2019, I joined the Department of Computer Science (CS) at Stony Brook University as an associate professor. I am also a research scientist with Google Brain's "Robotics at Google" team. Previously, I was an assistant professor at Indiana University Bloomington and a staff researcher in the Robotics Section of NASA's Jet Propulsion Laboratory (JPL). I received my Ph.D. from the University of Texas at Austin in 2008 and my B.S. from the Korea Advanced Institute of Science and Technology (KAIST) in 2004.


Recent News
2021/04: Two papers at CVPR 2021: Recognizing Actions in Videos from Unseen Viewpoints and Coarse-Fine Networks for Temporal Activity Detection in Videos.
2021/04: Neural architecture search for robot reinforcement learning at ICRA 2021: Visionary: Vision Architecture Discovery for Robot Learning.
2020/11: Published a new large-scale video dataset respecting diversity, privacy, and licensing at NeurIPS 2020: AViD Dataset: Anonymized Videos from Diverse Countries. Dataset URL: [link].
2020/08: Four papers at ECCV 2020: on Adversarial Grammar, on Password-conditioned Face Anonymization, on AttentionNAS, and on AssembleNet++.
2020/03: Self-supervised video representation learning at CVPR 2020: Evolving Losses for Unsupervised Video Representation Learning.
2019/10: Google AI blog article on Video Architecture Search. It summarizes our recent efforts on neural architecture search for videos, including AssembleNet at ICLR 2020 and EvaNet at ICCV 2019.
2019/06: Organized a Tutorial on Unifying Human Activity Understanding at CVPR 2019, together with Gunnar Sigurdsson.
2019/06: Temporal Gaussian Mixture (TGM) Layer at ICML 2019 and Representation Flow at CVPR 2019, both on video representation learning.
2018/09: Privacy-preserving activity recognition work at ECCV 2018: Learning to Anonymize Faces for Privacy Preserving Action Detection.
We also gave a real-time demo of it at the conference.
2018/05: Robot deep reinforcement learning with learned environment models: Learning Real-World Robot Policies by Dreaming.

Curriculum Vitae [pdf]


Publications [by topic] [by type] [by year]

List of selected recent publications

  • A. Piergiovanni, A. Angelova, and M. S. Ryoo, "Evolving Losses for Unsupervised Video Representation Learning", CVPR 2020. [arXiv]
  • M. S. Ryoo, A. Piergiovanni, M. Tan, and A. Angelova, "AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures", ICLR 2020. [arXiv]
  • A. Piergiovanni, A. Angelova, and M. S. Ryoo, "Differentiable Grammars for videos", AAAI 2020. [arXiv]
  • A. Piergiovanni, A. Wu, and M. S. Ryoo, "Learning Real-World Robot Policies by Dreaming", IROS 2019. [arXiv] [project]
  • A. Wu, A. Piergiovanni, and M. S. Ryoo, "Model-based Behavioral Cloning with Future Image Similarity Learning", CoRL 2019. [arXiv] [project/code]
  • A. Piergiovanni, A. Angelova, A. Toshev, and M. S. Ryoo, "Evolving Space-Time Neural Architectures for Videos", ICCV 2019. [arXiv]
  • A. Piergiovanni and M. S. Ryoo, "Temporal Gaussian Mixture Layer for Videos", ICML 2019. [arXiv] [github_code]
  • A. Piergiovanni and M. S. Ryoo, "Representation Flow for Action Recognition", CVPR 2019. [arXiv] [github_code]
  • Z. Ren, Y. J. Lee, and M. S. Ryoo, "Learning to Anonymize Faces for Privacy Preserving Action Detection", ECCV 2018. [arXiv] [project]
  • A. Piergiovanni and M. S. Ryoo, "Learning Latent Super-Events to Detect Multiple Activities in Videos", CVPR 2018. [arXiv] [github_code]
  • M. S. Ryoo, K. Kim, and H. J. Yang, "Extreme Low Resolution Activity Recognition with Multi-Siamese Embedding Learning", AAAI 2018. [arXiv]
  • J. Lee and M. S. Ryoo, "Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression", IROS 2017. [arXiv] [video]
Google Scholar page: Michael S. Ryoo

Datasets

AViD dataset: Anonymized Videos from Diverse Countries.
MLB-YouTube dataset: an activity recognition dataset with over 42 hours of 2017 MLB post-season baseball videos.
JPL-Interaction dataset: a robot-centric first-person video dataset.
DogCentric Activity dataset: a first-person video dataset taken from a dog's viewpoint.
UT-Interaction dataset: a dataset containing continuous/segmented videos of human-human interactions.

Lab members

Alan Wu (Indiana University ISE)
Cristina Mata (Stony Brook University CS)
Kumara Kahatapitiya (Stony Brook University CS)
Jinghuan Shang (Stony Brook University CS)
Xiang Li (Stony Brook University CS)

PhD alumni

AJ Piergiovanni (2020; joined Robotics at Google)

Teaching

CSE525: Intro to Robotics (Spring 2021)
B457/I400: Intro to Computer Vision (Spring 2018)
B659/I590: Vision for Intelligent Robotics (Fall 2017)

Updated 04/2021