ChromaScribe - AI-based Qualitative Data Analysis Tool
RECENT ACHIEVEMENT
Puranik, A., Chen, E., Peiris, R. L., & Kong, H.-K. (2025). Not a Collaborator or a Supervisor, but an Assistant: Striking the Balance Between Efficiency and Ownership in AI-incorporated Qualitative Data Analysis. https://doi.org/10.48550/arXiv.2509.18297
UX Research Study


(Funded by NSF)
Overview
Qualitative research is a multifaceted process involving data collection, transcription, organization, coding, and thematic analysis. As qualitative data analysis (QDA) tools gain popularity, artificial intelligence (AI) is increasingly integrated to automate aspects of this workflow. This study investigates researchers’ QDA practices, their experiences with existing tools, and their perspectives on AI involvement in qualitative research, bias reduction, and the use of multimodal data, as well as their preferences among human-initiated, AI-initiated, and traditional coding.
I conducted in-depth interviews with 16 qualitative researchers via Zoom, followed by thematic coding, data analysis, and usability tests of our AI-based QDA tool, ChromaScribe. Based on the findings, I recommend three design features for AI-based QDA tools to improve trust in AI, transparency in theme generation, and team collaboration.
Timeline
1.5 years
My Role
UX Researcher
Usability Testing
Team
1 UX Researcher
1 UX Designer
1 Backend Developer
1 Frontend Developer
2 RIT HCI Professors
Tools
Atlas.ti
Zoom
Otter.ai
Qualtrics
Research Questions
RQ 1.a. - How do researchers perceive the effectiveness of current qualitative coding tools and the impact on their qualitative analysis practices?
RQ 1.b. - How effectively does ChromaScribe align with participants' desired features and address limitations commonly identified in existing QDA tools?
RQ 2 - How do researchers perceive human-AI collaboration in qualitative analysis and its effectiveness in mitigating bias?
RQ 3 - To what extent do researchers utilize multimodal data in their qualitative data analysis?
Research Insights
Trust in AI-assisted coding
Participants appreciated AI’s efficiency but expressed skepticism due to its lack of contextual understanding and transparency. Trust in AI was conditional, with many seeking explanations behind AI-generated codes before relying on them.
Ownership of codes
Researchers preferred traditional and AI-initiated coding methods where they retained final control. They emphasized that coding is an interpretive act central to their identity as researchers, and delegating this to AI felt like a loss of ownership.
Collaboration as a key to reduce bias
Most participants agreed that bias is best mitigated through collaboration—either with other researchers or AI tools used as supportive “second eyes.” However, they stressed that AI alone is not sufficient for bias reduction without human judgment.
Limited usage of multimodal data
Despite interest in richer data sources, participants reported limited use of multimodal data in their analyses, relying primarily on textual data such as interview transcripts.