Quick Start
Get your first AI coding results in under 5 minutes. This guide walks you through the essential steps to code your survey open-ends.
New to qualcode.ai? Every new account can receive up to 500 free credits: 50 immediately, plus 450 after email verification. That's enough to code approximately 330 responses at Standard quality. No credit card required.
Prerequisites
Before you begin, make sure you have:
- A qualcode.ai account (free signup at qualcode.ai/auth/register)
- A CSV or Excel file (.xlsx, .xls) with your open-ended survey responses
New here? We've already created a sample project with 30 product feedback responses and a matching coding guide in your account. Open your Dashboard to start immediately!
Step 1: Open Your Sample Project
From your Dashboard, you'll see the Sample Project - Product Feedback we created for you. Click to open it and you'll find 30 sample responses ready to code.
To work with your own data, click + New Project to create a new project, then upload your CSV or Excel file.
Organization tip: Projects group related data files and coding runs together. Create one project per study or research question.
Step 2: Upload Your Data
Once your project is created, upload your data file:
- Click Upload Data File or drag and drop your file
- qualcode.ai will preview your data and detect columns automatically
- Verify that your data looks correct in the preview
Supported formats: CSV, Excel (.xlsx, .xls). Maximum: 50,000 rows per file.
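Before uploading, it can help to check your file locally. A minimal sketch using pandas (the column names below are hypothetical, not required by qualcode.ai):

```python
import io

import pandas as pd

MAX_ROWS = 50_000  # qualcode.ai's upload limit

# In practice you would use pd.read_csv("your_file.csv");
# an inline CSV is used here so the sketch is self-contained.
csv_data = io.StringIO(
    "respondent_id,feedback\n"
    "1,Love the new dashboard\n"
    "2,Checkout was confusing\n"
)
df = pd.read_csv(csv_data)

assert len(df) <= MAX_ROWS, f"File has {len(df)} rows; limit is {MAX_ROWS}"
assert df["feedback"].notna().all(), "Empty responses found"
print(f"OK: {len(df)} rows ready to upload")
```

Catching empty responses and oversized files before upload saves a round trip through the preview step.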
Step 3: Select the Column to Code
Choose the column containing your open-ended responses. This is the text that will be classified by the AI raters.
- Only text columns are available for selection
- To code multiple columns, start a separate coding run for each
- Each response in the column will receive one or more category codes
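The "text columns only" rule above can be sketched with pandas: string columns load with dtype `object`, while numeric columns (IDs, scores) do not, so they would not appear in the selector. The data below is made up for illustration:

```python
import pandas as pd

# Hypothetical survey data: one ID column, one numeric score, one open-end.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "nps_score": [9, 4, 7],
    "feedback": ["Great app", "Too slow", "Fine overall"],
})

# Only object-dtype (string) columns are candidates for coding.
text_columns = [c for c in df.columns if df[c].dtype == object]
print(text_columns)  # → ['feedback']
```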
Step 4: Choose a Coding Guide
A coding guide defines the categories you want to assign to responses. You have two options:
Use an Existing Guide
If you have previously created a coding guide, you can reuse it. Guides are not tied to specific projects. New accounts come with a "Sample - Product Feedback" guide that works with our sample data file.
Create a New Guide
Click Create New Guide and define your categories:
- Give your guide a name
- Add categories with clear names and descriptions
- Choose single-label or multi-label mode
- Optionally add training examples to improve accuracy
No training data needed: qualcode.ai works in "zero-shot" mode using just your category descriptions. You can add training examples later to improve accuracy.
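Conceptually, a coding guide is just a named set of categories plus a labeling mode. A sketch of that structure as a plain Python dict (the field names here are illustrative, not qualcode.ai's actual schema):

```python
# Hypothetical representation of a coding guide, mirroring the fields
# described in this step: name, mode, categories, optional examples.
guide = {
    "name": "Product Feedback",
    "mode": "multi-label",  # or "single-label"
    "categories": [
        {"name": "Usability", "description": "Ease or difficulty of use"},
        {"name": "Performance", "description": "Speed, lag, or crashes"},
        {"name": "Pricing", "description": "Cost, value, or billing"},
    ],
    # Optional training examples; zero-shot mode works without them.
    "examples": [
        {"text": "The app keeps freezing", "codes": ["Performance"]},
    ],
}

print(f"{guide['name']}: {len(guide['categories'])} categories, {guide['mode']}")
```

Clear, non-overlapping category descriptions matter most: in zero-shot mode they are all the AI raters have to go on.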
Let AI Suggest Categories
Not sure what categories to use? Try the Auto-Suggest feature. qualcode.ai will analyze your responses using two independent AI models and suggest categories based on common themes in your data.
Look for the Suggest Categories button (with a sparkle icon) when starting a coding run. See Auto-Suggest Coding Guide for details.
Step 5: Run Coding
Click Start Coding to begin. You'll see:
- Real-time progress with percentage complete
- Estimated time remaining based on current progress
- Credit cost displayed before you confirm
Both AI raters (OpenAI and Anthropic) will independently classify each response. This typically takes a few minutes, depending on your dataset size.
Step 6: Review Results
When coding completes, you can:
- View agreement metrics: Cohen's Kappa, percent agreement, and Krippendorff's Alpha
- See code distributions: How responses are distributed across categories
- Review disagreements: Cases where the two AI raters assigned different codes
- Export results: Download CSV or SPSS-ready files with coded data
Disagreements are normal: Even human coders disagree. The dual-rater approach lets you measure and report reliability objectively. Use the reconciliation interface to resolve disagreements.
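To make the agreement metrics concrete, here is how percent agreement and Cohen's Kappa are computed for two raters on a single-label task. The labels are made up; this is the standard formula, not qualcode.ai's internal code:

```python
from collections import Counter

# Hypothetical labels assigned by the two AI raters to five responses.
rater_a = ["Usability", "Pricing", "Usability", "Performance", "Pricing"]
rater_b = ["Usability", "Pricing", "Performance", "Performance", "Usability"]

n = len(rater_a)
# Observed agreement P_o: fraction of responses where the raters match.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement P_e from each rater's marginal label frequencies.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

# Kappa corrects observed agreement for agreement expected by chance.
kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}, Cohen's Kappa: {kappa:.2f}")
# → Percent agreement: 60%, Cohen's Kappa: 0.41
```

Kappa is lower than raw percent agreement because some matches would occur by chance alone; that is why reliability reports favor chance-corrected metrics.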
What's Next?
Now that you've completed your first coding run, explore these resources to get more from qualcode.ai:
- Key Concepts - Understand how projects, guides, and the dual-rater methodology work
- Coding Guide Best Practices - Design categories that maximize agreement
- Agreement Calculation - Learn how inter-rater reliability metrics are calculated
- Export Formats - Understand your export options for analysis