Qi Feng

Obtained a Ph.D. in September 2022, majoring in image processing.
Currently working as an assistant professor at Waseda University, Tokyo, Japan,
Graduate School of Advanced Science and Engineering,
Department of Pure and Applied Physics, Morishima Lab.

Research Interest

I specialize in using deep learning methods to solve computer graphics and computer vision problems. I am also interested in utilizing CG/CV methods to tackle real-world challenges in Virtual Reality (VR) and Augmented Reality (AR).

Education

  • Sept. 2019 - Sept. 2022

    Doctor of Philosophy - Ph.D.
    Waseda University
  • Sept. 2017 - Sept. 2019

    Master of Engineering - M.E.
    Waseda University
  • Sept. 2013 - Sept. 2017

    Bachelor of Science - B.S.
    Waseda University
  • Sept. 2010 - Sept. 2013

    Senior High School Graduate
    High School Affiliated to Fudan University

Experience

  • Assistant Professor

    Apr. 2021 - Present
    Waseda University
  • Research Intern

    Oct. 2019 - Mar. 2020
    Northumbria University
  • Research Intern

    Jul. 2019 - Sept. 2019
    National Institute of Advanced Industrial Science and Technology (AIST)
  • Research Intern

    Jul. 2015 - Sept. 2015
    Fudan University

Skills

Language

Chinese - Native
English - Full professional proficiency (GRE score 325)
Japanese - Full professional proficiency (JLPT N1)
French - Elementary proficiency (CEFR A2)

Programming Language

Advanced: Python, HTML/CSS, SQL
Familiar: C++, C#, Java, JavaScript

Libraries/Frameworks/Platforms

PyTorch, Torchvision, TensorFlow, OpenCV
Git, SharePoint
WordPress, Unity3D, Arduino

Deep Learning-related Topics

Object classification, semantic segmentation, depth prediction, scene reconstruction, pose estimation, style transfer, data synthesis

Others

Microsoft Office 365 (Access, SharePoint), Adobe Creative Cloud (Lightroom Classic, Photoshop, Illustrator, After Effects, Premiere Pro, Audition, InDesign), Ableton, Vocaloid, Blender


Publications

Conferences

Nishizawa, T., Tanaka, K., Hirata, A., Yamaguchi, S., Feng, Q., Hamanaka, M., & Morishima, S. (2025). SyncViolinist: Music-Oriented Violin Motion Generation Based on Bowing and Fingering. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.

Higasa, T., Tanaka, K., Feng, Q., & Morishima, S. (2024). Keep Eyes on the Sentence: An Interactive Sentence Simplification System for English Learners Based on Eye Tracking and Large Language Models. CHI EA '24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems.

Feng, Q., & Morishima, S. (2024). Projection-Based Monocular Depth Prediction for 360 Images with Scale Awareness. Visual Computing Symposium 2024.

Inoue, R., Feng, Q., & Morishima, S. (2024). Non-Dominant Hand Skill Acquisition with Inverted Visual Feedback in a Mixed Reality Environment. Visual Computing Symposium 2024.

Feng, Q., Shum, H. P., & Morishima, S. (2023). Enhancing perception and immersion in pre-captured environments through learning-based eye height adaptation. 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).

Feng, Q., Shum, H. P., & Morishima, S. (2023). Learning Omnidirectional Depth Estimation from Internet Videos. In Proceedings of the 26th Meeting on Image Recognition and Understanding.

Higasa, T., Tanaka, K., Feng, Q., & Morishima, S. (2023). Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability. The 25th International Conference on Multimodal Interaction (ICMI).

Kashiwagi, S., Tanaka, K., Feng, Q., & Morishima, S. (2023). Improving the Gap in Visual Speech Recognition Between Normal and Silent Speech Based on Metric Learning. INTERSPEECH 2023.

Oshima, R., Shinagawa, S., Tsunashima, H., Feng, Q., & Morishima, S. (2023). Pointing out Human Answer Mistakes in a Goal-Oriented Visual Dialogue. ICCV '23 Workshop and Challenge on Vision and Language Algorithmic Reasoning (ICCVW).

Feng, Q., Shum, H. P., & Morishima, S. (2022). 360 Depth Estimation in the Wild – The Depth360 Dataset and the SegFuse Network. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE.

Feng, Q., Shum, H. P., & Morishima, S. (2021). Bi-projection-based Foreground-aware Omnidirectional Depth Prediction. Visual Computing Symposium 2021.

Feng, Q., Shum, H. P., Shimamura, R., & Morishima, S. (2020). Foreground-aware Dense Depth Estimation for 360 Images. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2020.

Shimamura, R., Feng, Q., Koyama, Y., Nakatsuka, T., Fukayama, S., Hamasaki, M., ... & Morishima, S. (2020). Audio–visual object removal in 360-degree videos. Computer Graphics International 2020.

Feng, Q., Shum, H. P., & Morishima, S. (2018). Resolving occlusion for 3D object manipulation with hands in mixed reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology.

Feng, Q., Nozawa, T., Shum, H. P., & Morishima, S. (2018). Occlusion for 3D Object Manipulation with Hands in Augmented Reality. In Proceedings of The 21st Meeting on Image Recognition and Understanding.

Journals

Nozawa, N., Shum, H. P., Feng, Q., Ho, E. S., & Morishima, S. (2021). 3D car shape reconstruction from a contour sketch using GAN and lazy learning. The Visual Computer, 1-14.

Feng, Q., Shum, H. P., & Morishima, S. (2020). Resolving hand-object occlusion for mixed reality with joint deep learning and model optimization. Computer Animation and Virtual Worlds, 31(4-5), e1956.

Feng, Q., Shum, H. P., Shimamura, R., & Morishima, S. (2020). Foreground-aware Dense Depth Estimation for 360 Images. Journal of WSCG, 28(1-2), 79-88.

Shimamura, R., Feng, Q., Koyama, Y., Nakatsuka, T., Fukayama, S., Hamasaki, M., ... & Morishima, S. (2020). Audio–visual object removal in 360-degree videos. The Visual Computer, 36(10), 2117-2128.

Projects

Open-source projects released on GitHub.

360 Depth Estimation in the Wild

We present a method for generating a large amount of color/depth training data from abundant internet 360 videos, and propose a multi-task network that learns single-view depth estimation from it. Results show consistent and sharp predictions. A minimal sketch of the multi-task idea follows.
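As a rough, hypothetical illustration of such a multi-task setup (this is not the released SegFuse code; the module and head names below are invented), a shared encoder can feed separate dense-prediction heads, for example depth and segmentation, trained together:

    import torch
    import torch.nn as nn

    # Hypothetical multi-task network: a shared encoder with separate
    # dense heads for depth and segmentation. Illustrative sketch only.
    class MultiTaskDepthNet(nn.Module):
        def __init__(self, num_classes=20):
            super().__init__()
            # Shared convolutional encoder over equirectangular RGB frames.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            # Depth head: one channel of per-pixel depth, upsampled to input size.
            self.depth_head = nn.Sequential(
                nn.Conv2d(64, 1, 1),
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            )
            # Segmentation head: per-pixel class logits at the same resolution.
            self.seg_head = nn.Sequential(
                nn.Conv2d(64, num_classes, 1),
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            )

        def forward(self, rgb):
            feat = self.encoder(rgb)  # shared features for both tasks
            return self.depth_head(feat), self.seg_head(feat)

    # Toy usage: one 256x512 equirectangular frame.
    model = MultiTaskDepthNet()
    depth, seg = model(torch.randn(1, 3, 256, 512))
    print(depth.shape, seg.shape)  # (1, 1, 256, 512) and (1, 20, 256, 512)

The actual encoder, heads, and losses are of course more elaborate; the point is only that one shared backbone serves several dense-prediction tasks trained jointly.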

Contact Me

Tel: +81-3-5286-3510
Fax: +81-3-5286-3510
Address: 55N406 3-4-1 Okubo, Shinjuku-ku, Tokyo, 169-0072
Email: fengqi[at]ruri.waseda.jp