3D/4D Generative Models
3D/4D generation aims to model both spatial structure and temporal dynamics from limited observations.
Our team studies generative models that enable consistent scene reconstruction, dynamic scene generation, novel view synthesis, and unified spatio-temporal representations capturing geometry, appearance, and motion.
3D Generation
Our lab studies 3D generation with the goal of enabling AI to create and understand three-dimensional structures from limited observations. Humans can easily imagine the full shape of an object or a scene even when only a small portion is visible. We explore how generative models can learn this ability by building representations that capture geometry, appearance, and spatial relationships.
Our research focuses on generating consistent 3D scenes from images, synthesizing novel viewpoints, and constructing high-quality 3D assets using modern generative modeling techniques. Through this work, we aim to develop AI systems that can produce realistic virtual environments and support applications in simulation, graphics, and intelligent systems.
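To make the idea of a representation that jointly captures geometry and appearance concrete, here is a minimal, self-contained sketch (not the lab's actual method) in the style of radiance-field rendering: a scene is a continuous field mapping a 3D point to (density, color), and a novel view is synthesized by alpha-compositing field samples along each camera ray. The analytic `toy_field` scene and all numeric choices are illustrative assumptions.

```python
import numpy as np

def toy_field(points):
    """Hypothetical analytic scene: a soft sphere of radius 0.5 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 0.5, 5.0, 0.0)                        # geometry
    color = np.stack([1 - dist, dist, np.full_like(dist, 0.5)], -1)  # appearance
    return density, np.clip(color, 0.0, 1.0)

def render_ray(origin, direction, n_samples=64, near=0.0, far=2.0):
    """Alpha-composite field samples along one camera ray."""
    t = np.linspace(near, far, n_samples)
    delta = t[1] - t[0]
    points = origin + t[:, None] * direction
    density, color = toy_field(points)
    alpha = 1.0 - np.exp(-density * delta)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)         # composited RGB

# One ray through the sphere center yields a rendered pixel color.
pixel = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

In a learned model, the analytic field above is replaced by a trained network or a set of optimized primitives, but the rendering logic that ties geometry (density) to appearance (color) is the same.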

4D Generation
Our lab studies 4D generation with the goal of enabling AI to model dynamic scenes that change over time. While 3D generation focuses on reconstructing spatial structure, real-world environments are inherently dynamic, containing motion, interactions, and temporal variations. Humans naturally understand how objects move and how scenes evolve, even from limited visual observations. Inspired by this ability, we develop generative models that learn spatio-temporal representations from images, videos, and multimodal signals.
Our research focuses on generating consistent dynamic scenes, synthesizing motion across time, and building representations that jointly capture geometry, appearance, and temporal behavior. Through this work, we aim to create AI systems that can generate realistic dynamic environments and support applications in simulation, robotics, and immersive virtual worlds.
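As a minimal sketch of the step from 3D to 4D (again illustrative, not the lab's actual method), the same point-wise field can be conditioned on time, so one query function models a dynamic scene. Here a hypothetical sphere translates over time, and rendering the same ray at two timesteps produces different pixels; the motion model and constants are assumptions for illustration.

```python
import numpy as np

def toy_dynamic_field(points, time):
    """Hypothetical 4D scene: a soft sphere whose center moves with time."""
    center = np.array([0.6 * time, 0.0, 0.0])             # simple motion model
    dist = np.linalg.norm(points - center, axis=-1)
    density = np.where(dist < 0.5, 5.0, 0.0)              # geometry at this instant
    color = np.stack([1 - dist, dist, np.full_like(dist, 0.5)], -1)
    return density, np.clip(color, 0.0, 1.0)

def render_ray(origin, direction, time, n_samples=64, near=0.0, far=2.0):
    """Alpha-composite the time-conditioned field along one ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, color = toy_dynamic_field(points, time)
    alpha = 1.0 - np.exp(-density * (t[1] - t[0]))
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return ((trans * alpha)[:, None] * color).sum(axis=0)

origin, direction = np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0])
frame0 = render_ray(origin, direction, time=0.0)  # sphere centered on the ray
frame1 = render_ray(origin, direction, time=1.0)  # sphere has moved off the ray
print(frame0, frame1)
```

A learned 4D model replaces the hand-written motion with a representation fit to videos or multi-view captures, but the principle is the same: one spatio-temporal field jointly encodes geometry, appearance, and motion.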
