Physical AI

Our lab studies auditory intelligence, developing AI systems that understand their surroundings through sound and extending them toward multimodal intelligence across audio, vision, and language.

Ultimately, we aim to build Physical AI for robotic platforms that can hear, understand, adapt, and act in real-world environments.


Auditory and Multimodal Intelligence

We study AI that goes beyond simple audio classification to understand situations from sound alone. Our research explores how machines can infer events, context, and environmental states from auditory signals, and how these capabilities extend to multimodal reasoning that integrates audio with vision and language. By linking audio understanding with reasoning based on vision-language models (VLMs), we aim to develop systems that interpret complex situations through multiple sensory modalities.


Continual Learning

Because real-world environments are constantly changing, intelligent systems must adapt while retaining previously acquired knowledge. We study continual learning to make auditory and multimodal AI robust to shifts in domains, devices, and conditions. Our goal is long-term, reliable adaptation in open and dynamic environments.


Robotics / Physical AI

We ultimately seek to embed these capabilities in robotic systems and embodied agents. By combining perception, reasoning, and adaptation, we aim to build Physical AI that operates robustly in the physical world. This research connects sound-based intelligence and multimodal understanding to embodied action and interaction.


Brain and Artificial Intelligence Lab, Department of Applied Artificial Intelligence, SeoulTech, 232 Gongneung-ro, Nowon-gu, Seoul 01811, Korea

Copyright (c) 2026 BrainAI Lab. All Rights Reserved.
