
ChromaScribe

Qualitative data analysis AI tool

NSF official logo

Recent achievement

My research paper was recently accepted for publication at a conference. If you're interested, you can read it below.

Puranik, A., Chen, E., Peiris, R. L., & Kong, H.-K. (2025). Not a Collaborator or a Supervisor, but an Assistant: Striking the Balance Between Efficiency and Ownership in AI-incorporated Qualitative Data Analysis.

My Role

Lead UX Researcher 
UX Designer

Timeline

1.5 years

Tools

Otter.ai
Atlas.ti
Qualtrics
Zoom
Figma

Team

1 UX Researcher
2 UX Designers
2 Software Developers
2 Committee Members

Product

ChromaScribe: AI-based qualitative data analysis tool

Methods

Focus group interviews
Thematic analysis
In-depth 1:1 interviews
Codebook creation
Axial coding
Prototype usability testing

Overview

Qualitative research is a multifaceted process involving data collection, transcription, organization, coding, and thematic analysis. As qualitative data analysis (QDA) tools gain popularity, artificial intelligence (AI) is increasingly integrated to automate aspects of this workflow. This study investigates researchers’ QDA practices, their experiences with existing tools, and their perspectives on AI involvement in qualitative research, bias reduction, the use of multimodal data, and their preferences among human-initiated, AI-initiated, and traditional coding.

I conducted in-depth interviews with 16 qualitative researchers via Zoom, followed by thematic coding and data analysis, and ran usability tests of our AI-based QDA tool, ChromaScribe. Based on the findings, I recommend three design features for AI-based QDA tools to improve trust in AI, transparency in theme generation, and team collaboration.

The project began with these research questions:

RQ 1a: How do researchers perceive the effectiveness of current qualitative coding tools and their impact on qualitative analysis practices?
RQ 1b: How effectively does ChromaScribe align with participants' desired features and address limitations commonly identified in existing QDA tools?
RQ 2: How do researchers perceive human-AI collaboration in qualitative analysis and its effectiveness in mitigating bias?
RQ 3: To what extent do researchers utilize multimodal data in their qualitative data analysis?

Problem

  • Current qualitative data analysis tools are cumbersome and costly, with steep learning curves.

  • Researchers often deal with high cognitive load, disorganized workflows, and time-consuming manual coding.

  • AI tools exist, but researchers struggle with:

    • Trust

    • Transparency

    • Bias

    • Control/autonomy over their analysis

Opportunities

  • Build a QDA tool that reduces workload without reducing researcher ownership.

  • Support multimodal data (audio, text, video), something current tools lack.

  • Explore the ideal balance between AI assistance and human judgment.

My Research Process


How did I do it?

Part 1: Getting participants

Recruitment​

  • Reached out to 70+ professionals & PhD researchers across sociology, anthropology, psychology, cybersecurity, and HCI.​

  • Used LinkedIn, university websites, professional networks.

  • Shortlisted participants using a pre-study Qualtrics survey.

Scheduling

  • Coordinated interview times through Qualtrics.

  • Sent consent forms prior to each session.

  • Conducted 1.5-hour Zoom sessions with each participant.

For this study, I recruited 16 experienced qualitative researchers.

Part 2: Interviews

1. Pre-study survey

Background, experience, tools, shortlist participants

2. Interview

Current QDA workflow, tools, frustrations, AI perceptions

3. Demo + tasks

Using the ChromaScribe prototype

4. Post-study survey

Feedback on usability & AI collaboration

Part 3: Data Analysis

Transcript Cleaning

Cleaned 1,350+ minutes of recordings using Zoom + Otter.ai.

Codebook Development

Iteratively built a detailed codebook based on emerging themes.

Thematic Analysis

Coded all interview transcripts in Atlas.ti. Organized insights in Google Docs + spreadsheets.

Findings

Part 1: How do people use QDA tools today?

Researchers use QDA tools for:

Streamlining analysis

Visualizing

Storing & organizing data

Pattern finding

Note-taking

Top reasons people avoid QDA tools:

Data confidentiality concerns

High subscription costs

Complex UI; high learning curve

Time required to set up projects

Lack of collaboration features

Part 2: Coding Preferences: Traditional vs. AI-Initiated vs. Human-Initiated

Traditional coding is a fully human-driven process where researchers manually read, interpret, and label qualitative data.
  • Deep immersion → better understanding

  • High reliability

  • Takes the most time

Seven participants chose traditional coding as their top preference for the reasons above.

Part 3: Multimodal Data Use

Text transcripts are primary

Audio helps with emotion and tone.

Video helps with non-verbal cues & contextual details.

ChromaScribe’s audio–text linking feature (e.g., clicking a word in the transcript jumps to that point in the audio) was especially well received.

ChromaScribe Prototype Feedback


Participants found AI-generated codes helpful for jumpstarting their analysis. Still, they desired more transparency on how the themes were generated to feel confident using them.


The color-coded visualizations helped users easily spot patterns across the transcript. 


Participants appreciated the tool’s search and filter functionalities, which helped them quickly locate specific themes or participant segments.

Design recommendations for AI-based QDA tools

1. Explainable AI Coding with Interactive Justifications

To build trust, AI-generated codes should be accompanied by clear, interactive justifications. For example, when a user hovers over a code, the tool should highlight the exact transcript excerpts that influenced that code and display a short rationale, helping researchers understand why the AI made a particular decision.

2. Human-in-the-Loop Coding Validation System

To support researcher ownership and interpretive control, the tool should adopt a human-in-the-loop model where AI suggestions remain editable and must be explicitly confirmed by the user. A dual-pane interface, one showing AI suggestions, the other for human codes, can allow easy comparison and validation before codes are finalized.

3. Collaborative Coding Interface with Role-Based Access

Participants expressed a strong need for teamwork in analysis. The tool should offer real-time collaboration features such as role-based access (e.g., reviewer, coder), comment threads on codes, and live updates of coding progress. This not only enhances collaboration but also supports bias mitigation by incorporating diverse perspectives into the analysis process.

If you've made it this far, it seems like you're looking for someone who's curious, collaborative, and not afraid of complex problems. Well, you've just scrolled past the perfect candidate.

 

If you have an interesting project or are hiring...

© Anoushka Puranik

Let's connect!
