Dissertation Defense
Deep Learning Models for Visual and Interoceptive Neural Processing
This event is free and open to the public.
PASSCODE: 1357
Humans perceive their external environment and internal body through complex neural processing. Exteroception, through vision, enables the brain to understand the world, make decisions, and take actions, while interoception allows the brain to regulate physiological needs by sensing internal organs. Although these processes seem distinct, emerging evidence suggests they may share similar mechanisms of neural computation. My dissertation research uses deep learning to model both vision and interoception, grounded in neuroscience insights and validated against human behavior and brain activity measured with functional magnetic resonance imaging (fMRI).
My vision model features two streams: one processes coarse input to control eye movements, and the other handles fine input for detailed perception. Together, they support scene understanding and object recognition, and the model exhibits robustness to adversarial attacks, human-like eye movements, and functional correspondence with the human visual cortex.
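To make the two-stream idea concrete, below is a minimal PyTorch sketch in which a low-resolution "coarse" stream predicts the next fixation and a high-resolution "fine" stream classifies the content at the current fixation. The class name, layer sizes, and output heads here are illustrative assumptions, not the architecture described in the dissertation.

```python
import torch
import torch.nn as nn

class TwoStreamVision(nn.Module):
    """Hypothetical two-stream model: coarse input drives gaze, fine input drives recognition."""

    def __init__(self, num_classes=1000):
        super().__init__()
        # Coarse (peripheral) stream: low-resolution input used for eye movement control.
        self.coarse = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, 2),              # predicted (x, y) of the next fixation
        )
        # Fine (foveal) stream: high-resolution crop used for detailed perception.
        self.fine = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes),        # object / scene category logits
        )

    def forward(self, coarse_img, fine_crop):
        gaze = self.coarse(coarse_img)          # where to look next
        logits = self.fine(fine_crop)           # what is at the current fixation
        return gaze, logits
```

In a setup like this, the predicted fixation from the coarse stream could be used to crop the high-resolution input fed to the fine stream on the next step, coupling gaze control with detailed recognition.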
Similarly, my interoception model uses separate streams for different organs, which converge through integrative processing into a holistic representation of bodily state. This model can explain brain-body interactions and map brain regions involved in interoception, suggesting a unified framework for exteroception and interoception.
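The multi-stream, converging design can likewise be sketched in PyTorch. In the sketch below, each organ-specific signal passes through its own encoder before a shared integration stage produces a single bodily-state vector; the organ names, signal shapes, and layer sizes are illustrative assumptions rather than details from the dissertation.

```python
import torch
import torch.nn as nn

class InteroceptionModel(nn.Module):
    """Hypothetical model: organ-specific encoders converge into one bodily-state representation."""

    def __init__(self, organs=("cardiac", "respiratory", "gastric"), state_dim=64):
        super().__init__()
        # One 1-D convolutional encoder per visceral signal stream.
        self.encoders = nn.ModuleDict({
            organ: nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
            for organ in organs
        })
        # Convergence stage: fuse organ features into a holistic bodily-state vector.
        self.integrate = nn.Sequential(
            nn.Linear(16 * len(organs), state_dim), nn.ReLU(),
            nn.Linear(state_dim, state_dim),
        )

    def forward(self, signals):
        # signals: dict mapping organ name -> tensor of shape (batch, 1, time)
        feats = [self.encoders[organ](signals[organ]) for organ in self.encoders]
        return self.integrate(torch.cat(feats, dim=1))
```

Structurally, this mirrors the vision sketch above: parallel streams specialize on distinct inputs and a later stage integrates them, which is the sense in which exteroception and interoception may share a common computational framework.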
CHAIR: Professor Zhongming Liu