Undergraduate Student
Department of Computer Science
National Yang Ming Chiao Tung University, Taiwan

Email: jayinnn.cs10@nycu.edu.tw

Personal Page | GitHub | LinkedIn | Google Scholar | Personal Blog | CV

About Me

Hi! I’m Jie-Ying Lee, a Computer Science undergraduate at National Yang Ming Chiao Tung University and former exchange student at ETH Zurich. I currently serve as a research assistant at the Computational Photography Lab under Prof. Yu-Lun Liu.

In Summer 2024, I interned with Google’s Pixel Camera Team, where I integrated the Segment Anything Model (SAM) for mobile devices, hosted by Yu-Lin Chang and Chung-Kai Hsieh. My industry experience also includes positions as an R&D Intern at Microsoft and a Backend Engineer Intern at Appier.

I’m planning to pursue a Ph.D. in 2026 and am actively seeking research collaborations.

Outside the lab, I enjoy badminton, dance, and photography.

Research Interests

News

Publications

AuraFusion360: Augmented Unseen Region Alignment for Reference-based 360° Unbounded Scene Inpainting

Chung-Ho Wu*, Yang-Jung Chen*, Ying-Huan Chen, Jie-Ying Lee, Bo-Hsu Ke, Chun-Wei Tuan Mu, Yi-Chuan Huang, Chin-Yang Lin, Min-Hung Chen, Yen-Yu Lin, Yu-Lun Liu (*Equal Contribution)

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025

The approach introduces (1) depth-aware unseen mask generation for accurate occlusion identification, (2) Adaptive Guided Depth Diffusion, a zero-shot method for accurate initial point placement without requiring additional training, and (3) SDEdit-based detail enhancement for multi-view coherence.
SpectroMotion: Dynamic 3D Reconstruction of Specular Scenes

Cheng-De Fan, Chen-Wei Chang, Yi-Ruei Liu, Jie-Ying Lee, Jiun-Long Huang, Yu-Chee Tseng, Yu-Lun Liu

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025

SpectroMotion is a novel approach that combines 3D Gaussian Splatting (3DGS) with physically-based rendering (PBR) and deformation fields to reconstruct dynamic specular scenes. It is the only existing 3DGS method capable of synthesizing photorealistic real-world dynamic specular scenes.
BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes

Chih-Hai Su*, Chih-Yao Hu*, Shr-Ruei Tsai*, Jie-Ying Lee*, Chin-Yang Lin, Yu-Lun Liu (*Equal Contribution)

ACM Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH), 2024

This paper presents BoostMVSNeRFs, a novel approach that enhances the rendering quality of MVS-based NeRFs in large-scale scenes. It identifies limitations of MVS-based NeRF methods, such as restricted viewport coverage and artifacts caused by limited input views.