The Cooperative Human-AI Teaming (CHAT) Lab, directed by Dr. Abdullah Khan, conducts multidisciplinary research at the intersection of artificial intelligence (AI), machine learning (ML), and human-centered computing. Our mission is to advance the science and engineering of human-AI collaboration by developing computational frameworks that enable intelligent systems to support, complement, and enhance human decision-making and performance across diverse domains.
Our work integrates natural language processing (NLP), wearable sensing, and advanced learning architectures to build systems that adapt intelligently to human needs and contexts.
We aim to design explainable, trustworthy, and adaptive AI systems that foster seamless cooperation between humans and machines by analyzing multimodal human behavioral data and uncovering latent cognitive and interactional patterns.
Students, collaborators, and community partners: reach out to Dr. Abdullah Khan to discuss openings, projects, and ways to get involved!
Dr. Abdullah Khan
470-578-4286
mkhan74@kennesaw.edu

Our research spans the following themes and projects:
Human-AI Collaboration for Efficient and Effective Teaming:

This project explores bidirectional human-AI collaboration, where humans and AI systems learn from each other to achieve shared goals with minimal human intervention. We design co-adaptive learning frameworks that enable mutual understanding, efficient task execution, and continuous performance improvement.
Our models leverage: Natural Language Processing (NLP), Reinforcement Learning (RL), human-in-the-loop modeling, and interactive learning paradigms for reciprocal knowledge exchange. By reducing cognitive workload and enhancing mutual situational awareness, this work seeks to create AI teammates that evolve alongside human partners, supporting dynamic, transparent, and high-trust collaboration.
Collaborative Explainable Artificial Intelligence (XAI):
We advance the emerging paradigm of collaborative explainable AI, where explanation is treated as an interactive, human-AI co-construction process rather than a one-way output. Our work develops language-driven, conversational explanation mechanisms that allow humans and AI systems to jointly refine interpretive models of reasoning.
Drawing from cognitive psychology, linguistics, and human-computer interaction, we aim to enhance: Interpretability, Accountability, and Shared Situational Understanding. This research supports trustworthy AI deployment in high-stakes domains where interpretability and collaboration are essential.
Knowledge-Guided Learning:
This project explores knowledge-guided machine learning to bridge the gap between symbolic reasoning and deep learning. We integrate domain knowledge, semantic ontologies, and knowledge graphs into data-driven models to improve generalization, robustness, and explainability under limited or noisy data.
Our approaches constrain and guide model learning using linguistic and semantic structures, allowing AI systems to reason more effectively, transfer insights across tasks, and incorporate human expertise directly into learning processes.
Behavioral Health Analysis from 911 Narratives:

This project applies advanced NLP and deep learning methods to analyze free-text narratives from emergency (911) police reports. By detecting linguistic markers of distress, crisis escalation, and mental health risk factors, we aim to support public health surveillance, crisis intervention, and first-responder and co-responder training.
Our work develops ethically responsible NLP pipelines that identify behavioral and emotional indicators in police narratives, provide interactive dashboards for co-responders, and support the decriminalization of mental-health crises by suggesting follow-up pathways to care. This research bridges NLP and behavioral health, informing data-driven strategies for mental-health crisis response and policy innovation.
Privacy-Preserving, Trustworthy, and Secure AI:
Our research focuses on developing trustworthy, privacy-preserving, and secure AI systems that uphold integrity, transparency, and ethical responsibility. We design algorithms for federated and differentially private learning, strengthen robustness against adversarial and prompt-based attacks, and advance explainable and accountable AI frameworks for large language models (LLMs) and multimodal systems.
We aim to ensure that AI systems in sensitive domains remain secure, reliable, and aligned with human values, enabling safe and transparent human-AI collaboration.
Principal Investigator

Md Abdullah Al Hafiz Khan
Dr. Abdullah Khan is an Assistant Professor in the Department of Computer Science
at Kennesaw State University (KSU), where he founded and directs the Cooperative Human-AI
Teaming (CHAT) Lab.
His research advances human–AI teaming and natural language processing, focusing on how humans and intelligent systems can collaborate, communicate, and adapt effectively in uncertain situations across diverse domains. He earned his Ph.D. from the University of Maryland, Baltimore County (UMBC).
Ph.D. Students

Francis Nweke
Francis is a Ph.D. student at Kennesaw State University, focusing his research on Human-AI Collaboration in Explainable Artificial Intelligence (XAI). Under the guidance of Dr. Hafiz Khan, he investigates how transparent and human-centered AI systems can enhance comprehension, trust, and decision-making in high-stakes domains.
Before pursuing his Ph.D., Francis worked as a software engineer, gaining experience in software development, system design, and research-driven innovation. This technical foundation fuels his passion for creating interpretable AI systems that bridge the gap between machine intelligence and human intuition.
Learn more about Francis
Abdul Muntakim
Abdul is currently a Ph.D. student in Computer Science at Kennesaw State University, where he serves as a Graduate Research Assistant in the Cooperative Human-AI Teaming (CHAT) Lab. Before joining KSU, he served as a Lecturer at Daffodil International University (currently on study leave), with research appointments in its Natural Language Processing and Health Informatics Labs.
He earned his B.Sc. in Computer Science and Engineering from Khulna University of Engineering & Technology with high academic distinction. Abdul’s primary research interests include knowledge-guided learning, knowledge bases, human-AI teaming, large language models, federated and reinforcement learning, and natural language processing.
Research Area Keywords:
• Knowledge-Guided Learning
• Human-AI Teaming
• Large Language Models (LLMs)
• Reinforcement Learning
• Natural Language Processing (NLP)

Abm Adnan Azmee
Abm Adnan Azmee is a Ph.D. candidate at Kennesaw State University, specializing in developing human-AI collaborative methods using natural language processing (NLP) and deep learning techniques. His research focuses on developing scalable, explainable, and adaptive human-AI collaborative systems, validated in the high-stakes domain of behavioral and mental health and designed to generalize across safety-critical settings.
Adnan has a strong industry background as a full-stack software engineer, bringing practical experience to his academic work. He has published in multiple conferences and journals, and his work has been recognized with several awards, including the Outstanding PhD Student Research Award, a Best Paper Award (Runner-Up), and an NSF Travel Award. His research aims to drive forward innovation in the fields of human-AI collaboration and NLP.
Graduate Students
Undergraduate Students
Past Students

Adnan Azmee presents at CHASE 2024!

Congratulations to our first PhD Student, Dr. Martin!

Five of our papers published in ACM/IEEE CHASE 2024!