Quick Start

Get dual-rater coded results with full inter-rater reliability metrics in under 5 minutes. Upload your data, pick your categories, and let two independent AI raters code every response in isolation.

New to qualcode.ai? Every new account can receive up to 500 free credits: 50 immediately, plus 450 after email verification. That's enough to code approximately 330 responses at Standard quality. No credit card required.

Prerequisites

Before you begin, make sure you have:

  • A qualcode.ai account (free signup at qualcode.ai/register)
  • A CSV or Excel file (.xlsx, .xls) with your open-ended survey responses

New here? After signup, qualcode.ai creates a sample project with 30 product feedback responses and a matching coding guide. You can use this sample to preview the workflow before uploading your own data.

Step 1: Create an Account or Sign In

If you are not signed in yet, start by creating an account at qualcode.ai/register. If you already have an account, sign in and open your workspace.

New accounts include a "Sample Project - Product Feedback" with 30 example responses and a matching coding guide. You can open that sample first, or create a new project for your own study right away.

Organization tip: Projects group related data files and coding runs together. Create one project per study or research question.

Step 2: Upload Your Data

Once you are inside a project, upload your data file:

  1. Click Upload Data File or drag and drop your file
  2. qualcode.ai will preview your data and detect columns automatically
  3. Verify that your data looks correct in the preview

Supported formats: CSV, Excel (.xlsx, .xls). Row limit: 50,000 rows per file.
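Before uploading, you can sanity-check a CSV locally. The sketch below uses only Python's standard library; the column name `feedback` and the inline sample data are placeholders for your own file, and the 50,000-row limit matches the note above.

```python
import csv
import io

# Hypothetical pre-upload check: the file parses, stays under the row limit,
# and the response column exists. Replace the inline sample with your file.
MAX_ROWS = 50_000

csv_text = "respondent_id,feedback\n1,Love the new dashboard\n2,Checkout is confusing\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

assert rows, "File has no data rows"
assert len(rows) <= MAX_ROWS, f"{len(rows)} rows exceeds the {MAX_ROWS}-row limit"
assert "feedback" in rows[0], "Expected response column is missing"
print(f"{len(rows)} rows ready to upload")
```

For a real file, swap `io.StringIO(csv_text)` for `open("your_file.csv", newline="")`.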

Step 3: Select the Column to Code

Choose the column containing your open-ended responses. This is the text that will be classified by the AI raters.

  • Only text columns are available for selection
  • You can code additional columns with separate coding runs
  • Each response in the column will receive one or more category codes

Step 4: Choose a Coding Guide

A coding guide defines the categories you want to assign to responses. You have two options:

Use an Existing Guide

If you have previously created a coding guide, you can reuse it. Guides are not tied to specific projects. New accounts also come with a "Sample - Product Feedback" guide that works with the included sample dataset.

Create a New Guide

Click Create New Guide and define your categories:

  1. Give your guide a name
  2. Add categories with clear names and descriptions
  3. Choose single-label or multi-label mode
  4. Optionally, add training examples to improve accuracy; reconciled disagreements from previous runs automatically feed back as training data, so accuracy improves with each coding cycle

No training data needed: qualcode.ai works in "zero-shot" mode using just your category descriptions. You can add training examples later to improve accuracy.
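A coding guide is just structured information: a name, a labeling mode, and categories with clear descriptions. The Python dict below is an illustrative sketch of that shape only, not qualcode.ai's actual configuration format; every name in it is made up.

```python
# Illustrative sketch of what a coding guide captures. In qualcode.ai the
# guide is built in the UI; this structure exists only to show the pieces.
coding_guide = {
    "name": "Product Feedback",
    "mode": "multi-label",  # or "single-label"
    "categories": [
        {"name": "Usability", "description": "Ease of use, navigation, layout"},
        {"name": "Pricing", "description": "Cost, value for money, billing"},
        {"name": "Feature Request", "description": "Asks for new or changed functionality"},
    ],
    # Optional training examples sharpen the zero-shot category descriptions.
    "examples": [
        {"text": "The menu is hard to find", "codes": ["Usability"]},
    ],
}

category_names = [c["name"] for c in coding_guide["categories"]]
print(category_names)
```

The key investment is the descriptions: in zero-shot mode they are the only instructions the raters see, so write them the way you would brief a human coder.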

Let AI Suggest Categories

Not sure what categories to use? Try the Auto-Suggest feature. qualcode.ai will analyze your responses using two independent AI models, then run a third semantic merge pass to suggest a cleaner starting codebook based on common themes in your data.

This is especially useful when the hardest part is getting from a blank page to a first defensible set of categories. Look for the Suggest Categories button (with a sparkle icon) when starting a coding run. See Auto-Suggest Coding Guide for details.

Step 5: Run Coding

Click Start Coding to begin. You'll see:

  • Real-time progress with percentage complete
  • Estimated time remaining based on current progress
  • Credit cost displayed before you confirm

Both AI raters (OpenAI and Anthropic) independently classify each response in its own isolated API call, so there is no cross-contamination or order effect between responses. This typically takes a few minutes depending on your dataset size.
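The isolation pattern can be sketched conceptually: each rater sees exactly one response per call, with no shared context between calls, so earlier items cannot bias later ones. This is not qualcode.ai's internal code; the toy keyword rule below stands in for a real model call, and the rater names are placeholders.

```python
# Conceptual sketch of per-response isolation (not qualcode.ai internals).
def classify(rater_name, response, categories):
    # Stand-in for one isolated API call to one model: a toy keyword rule.
    # A real rater would receive only this single response plus the guide.
    for category, keyword in categories.items():
        if keyword in response.lower():
            return category
    return "Other"

categories = {"Pricing": "price", "Usability": "menu"}
responses = ["The price is too high", "Menu layout is confusing"]

# Each rater codes every response independently, one call per response.
codes = {
    rater: [classify(rater, r, categories) for r in responses]
    for rater in ("rater_openai", "rater_anthropic")
}
print(codes["rater_openai"])
```

Because the toy rule is deterministic, both placeholder raters agree here; real model raters can and do disagree, which is what the reliability metrics in the next step measure.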

Step 6: Review Results

When coding completes, you can:

  • View agreement metrics: Cohen's Kappa, percent agreement, and Krippendorff's Alpha
  • See code distributions: How responses are distributed across categories
  • Review disagreements: Cases where the two AI raters assigned different codes
  • Export results: Download CSV or SPSS-ready files with coded data

Disagreements are normal: Even human coders disagree. The dual-rater approach lets you measure and report reliability objectively. Use the reconciliation interface to resolve disagreements; your decisions automatically become training data that sharpens future runs.
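As a reading aid, here is the standard math behind two of the reported metrics, percent agreement and Cohen's Kappa, for the single-label case. The ratings below are invented for illustration; qualcode.ai computes these metrics for you.

```python
from collections import Counter

# Two raters' codes for the same six responses (made-up illustration).
rater_a = ["Usability", "Pricing", "Usability", "Feature", "Pricing", "Usability"]
rater_b = ["Usability", "Pricing", "Feature", "Feature", "Pricing", "Pricing"]

n = len(rater_a)

# Percent agreement: share of responses where both raters chose the same code.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: probability both raters pick the same code at random,
# given each rater's own code frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())

# Cohen's Kappa: agreement beyond chance, scaled to the maximum possible.
kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

This is why Kappa is lower than raw agreement: the two raters above agree on 4 of 6 responses (0.67), but once chance agreement is subtracted, Kappa is 0.52. Krippendorff's Alpha generalizes the same idea to multi-label data and missing codes.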


What's Next?

Now that you've completed your first coding run, explore these resources to get more from qualcode.ai: