KENNESAW, Ga. | Mar 11, 2026

Kennesaw State University researcher Hansol Rheem is exploring how virtual reality and robotic teammates could help prepare emergency responders for those moments. A cognitive psychologist and human factors engineer, Rheem focuses his research on improving human–AI teamwork, particularly in high-anxiety, time-pressured scenarios.
Rheem’s study examines how people train and perform when paired with a robot teammate in a simulated emergency scenario, where robots help with triaging. The aim is to find out whether this technology improves learning, but the research also explores the psychological aspect of how framing the robot’s role as an observer, collaborator, or competitor changes the way people use the robot as a learning companion.
“Mass casualty triage has been at the center of interest in the human factors community because of its unique situation,” said Rheem, an assistant professor of psychology in the Norman J. Radow College of Humanities and Social Sciences. “These situations are complex and spontaneous, so effective planning and training are critical.”
Training for these high-pressure situations is usually delivered through lecture-based instruction or live simulations staged in large spaces such as gyms, where volunteers pose as injured victims.
“Lecture-based training is efficient, but it’s not as interactive or engaging as it should be,” Rheem said. “Live simulations are more realistic and effective, but they take a lot of time and money to set up.”
Rheem and his team designed a video game that recreates the aftermath of a mass casualty event. Participants navigate the scenario in either a computer-based or a virtual reality version of the game, examining patients’ vital signs and making triage decisions. They are partnered with a robot teammate that provides subtle hints during the simulation to help them categorize and prioritize patients.
Study participants were divided into three groups. A control group called the observer group was told the robot was operated remotely by another human. The collaboration group was told the robot was powered by artificial intelligence, capable of thinking independently and working with them as a collaborator. The third group, the competitor group, was told they were competing against the AI-powered robot.
Initially, Rheem had expected that participants in the collaborator and competitor groups, who believed the robot was autonomous, would treat it as a true collaborator, pay more attention to its messages, and show the greatest learning gains.
Instead, participants in the observer and collaborator groups attended to the robot’s messages more frequently and felt more connected to it than those in the competitor group.
Participants in the observer group, who believed the robot was controlled by a human, showed the greatest learning gains. They also attributed their success to the robot and perceived its intentions to be clearer. The collaborator group was more inclined to blame failures on the robot.
Rheem relates these findings to the psychological concept of trust in AI, which refers to people’s attitudes about the extent to which an AI agent will help them accomplish their goals.
“When we believe the robot is controlled by a human, we may set lower expectations than we would for an autonomous robot. Expectations for an autonomous robot can sometimes be unrealistic, much like how many people expect near-perfect performance from systems like ChatGPT or Gemini.
“But when those expectations are not met, for example when the AI gives a hint rather than a specific answer, trust in the AI can drop quickly and we may become more inclined to blame the AI and ignore its advice.”
“Working on this project allowed me to apply what I’ve learned in class to real research, from collecting data to analyzing results,” said Drey Bailey, a psychology junior who worked alongside Rheem on the study. “It’s interesting to see how this study can open the door to other types of VR training designed to improve decision-making in high-anxiety, time-pressured situations.”
By understanding how trust and the framing of an AI’s role shape collaboration, Rheem hopes this study will help prepare professionals to work confidently alongside intelligent machines.
“As AI becomes more integrated into society, we’re going to see more hybrid teams which include both humans and artificial intelligence systems working together,” Rheem said. “We’re already seeing it with self-driving cars. When the system makes a mistake, the human has to step in. If we can design training that strengthens both expertise and collaboration with AI, that’s a significant step forward. It’s about preparing them for a future where they’re asked to coordinate closely with AI systems. We need to be ready when that day comes.”
The team is now expanding the study to test various methods that help learners gradually develop the right level of trust in the AI, so they are more likely to listen to its advice.
– Story by Christin Senior
Photos by Darnell Wilburn
A leader in innovative teaching and learning, Kennesaw State University offers undergraduate, graduate, and doctoral degrees to its more than 51,000 students. Kennesaw State is a member of the University System of Georgia with 11 academic colleges. The university's vibrant campus culture, diverse population, strong global ties, and entrepreneurial spirit draw students from throughout the country and the world. Kennesaw State is a Carnegie-designated doctoral research institution (R2), placing it among an elite group of only 8 percent of U.S. colleges and universities with an R1 or R2 status. For more information, visit kennesaw.edu.