May 10, 2023 @ 3:00 pm - 5:00 pm Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation
May 15, 2023 @ 10:00 am - 12:00 pm A Statistical Approach to Stochastic Computing Design and Analysis
May 15, 2023 @ 12:00 pm - 2:00 pm Electronic, optical, and excitonic properties of atomically thin semiconductors
May 17, 2023 @ 12:30 pm - 2:30 pm Coherent Spatial and Temporal Combining of Femtosecond Fiber Lasers at the Storage Energy Limit Enabling High-Power Drivers of Laser Plasma Accelerators and Other Secondary Radiation Sources
May 30, 2023 @ 3:00 pm - 5:00 pm Where are the Humans in Human-AI Interaction: The Missing Human-Centered Perspective on Interpretability Tools for Machine Learning