Frequently Asked Questions

Answers to common questions about qualcode.ai. Can't find what you're looking for? Contact us at support@qualcode.ai.

Getting Started

How many free credits do I get?

Every new account can receive up to 500 free credits: 50 credits immediately on signup, and 450 more after verifying your email. That's enough to code approximately 330 responses at Standard quality. No credit card required to sign up.

What file formats are supported?

qualcode.ai supports:

  • CSV files: Comma-separated values (UTF-8 encoding recommended)
  • Excel files: .xlsx and .xls formats

Maximum dataset size: 50,000 rows. For larger datasets, split your file or contact us about enterprise solutions.

Do I need training data to start?

No. Training data is always optional. qualcode.ai works in "zero-shot" mode using just your category names and descriptions. You can add training examples later to improve accuracy for specific categories.

Best practice: Start without training data to see how well the AI interprets your categories. Add examples only for categories that show consistent misclassifications.

How does AI category suggestion work?

The Auto-Suggest feature analyzes your survey responses to identify common themes and suggest coding categories. It uses a three-AI workflow: two independent AI models (OpenAI and Anthropic) analyze your data separately, then a third AI pass semantically merges overlapping suggestions into a cleaner starting codebook. Categories identified by both independent raters are marked as high confidence; categories found by only one are marked as lower confidence.

You review all suggestions, edit or delete categories as needed, and then create a coding guide from the refined suggestions. See Auto-Suggest Coding Guide for the full guide.
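The confidence labelling described above can be sketched as follows. This is a simplified illustration using exact name matching; the real workflow merges overlapping categories semantically via a third AI pass, and the function name is hypothetical:

```python
def label_confidence(suggestions_a, suggestions_b):
    """Label categories by rater overlap (simplified sketch).

    Categories proposed by both independent raters are marked "high"
    confidence; categories proposed by only one are marked "low".
    """
    a, b = set(suggestions_a), set(suggestions_b)
    return {c: "high" for c in a & b} | {c: "low" for c in a ^ b}

print(label_confidence(["Price", "Quality"], ["Quality", "Service"]))
# {'Quality': 'high', 'Price': 'low', 'Service': 'low'}
```

In practice "Pricing concerns" and "Cost issues" should merge even though the strings differ, which is why the real reconciliation step is semantic rather than string-based.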


Methodology

Why do you use two AI models?

Two independent AI models from different providers code every response in per-response isolated API calls — no shared context, no order effects, no priming bias. This mirrors traditional inter-rater reliability (IRR) studies and gives solo researchers the same dual-rater credibility that traditionally required hiring a second human coder. This approach:

  • Provides agreement metrics (Cohen's Kappa, Krippendorff's Alpha) that reviewers expect
  • Flags disagreements for human review, improving final accuracy
  • Guarantees genuine independence: different architectures, different training data, and per-response isolation with no shared state between responses
  • Meets academic standards for methodological rigor
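As an illustration of the first point, Cohen's kappa between the two raters' label lists can be computed like this (an illustrative sketch of the standard formula, not the platform's internal code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same responses."""
    n = len(rater_a)
    # Observed agreement: fraction of responses with identical labels.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal probabilities,
    # summed over categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

labels_a = ["price", "price", "quality", "service", "quality"]
labels_b = ["price", "quality", "quality", "service", "quality"]
print(round(cohens_kappa(labels_a, labels_b), 3))
# 0.688 -> "substantial" on the Landis-Koch scale (0.61-0.80)
```

Note that kappa corrects raw agreement (here 80%) for agreement expected by chance, which is why it is the metric reviewers ask for rather than simple percent agreement.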

Are AI-generated coding guides academically valid?

Yes, when used appropriately. The Auto-Suggest feature uses two independent AI analyses plus a semantic merge pass to produce a structured starting codebook. Categories agreed upon by both independent models represent robust themes. More importantly:

  • AI suggestions are a starting point, not a final answer
  • Human researchers review, edit, and approve all categories
  • The final coding guide is a human-curated product informed by AI analysis
  • Transparency about provenance allows proper methodological reporting

This approach mirrors how researchers use software tools for initial thematic analysis while retaining human judgment for final decisions.

What if the AI misses important categories?

AI suggestions are not exhaustive. If important themes from your research questions are missing:

  • Add categories manually after reviewing AI suggestions
  • Try running auto-suggest in Thorough mode for deeper analysis and more nuanced detection
  • Use a larger sample of responses if available

The auto-suggest feature is designed to accelerate category development (getting to a first codebook is often the hardest part), not to replace researcher expertise.

Can I edit AI-suggested categories?

Absolutely. The review interface allows you to:

  • Rename categories to match your terminology
  • Edit descriptions for clarity
  • Merge similar categories
  • Delete categories that are not relevant
  • Add or remove example responses

The goal is to produce a coding guide that reflects your research needs, using AI as a starting point.

Can I cite qualcode.ai results in academic publications?

Yes. The dual-rater methodology is specifically designed for academic credibility. Suggested methods section text:

"Open-ended survey responses (N = [X]) were coded using qualcode.ai, an AI-assisted qualitative coding platform employing a dual-rater methodology. Two independent large language models coded each response, enabling calculation of inter-rater reliability (Cohen's κ = [X]; Krippendorff's α = [X]). Disagreements were reviewed and reconciled by the research team."

How accurate is AI coding?

Accuracy depends on several factors:

  • Category clarity: Well-defined, mutually exclusive categories achieve higher accuracy
  • Training data: Examples improve accuracy for nuanced or domain-specific categories
  • Response complexity: Simple, focused responses are coded more accurately than long, multi-topic ones

The dual-rater approach lets you measure reliability objectively rather than guessing. And because reconciled disagreements feed back as training data, accuracy improves with each coding cycle.

When should I use Kappa vs Alpha?

qualcode.ai calculates both Cohen's Kappa (κ) and Krippendorff's Alpha (α) automatically. The choice of which to report depends on your field and publication venue:

  • Cohen's Kappa: More widely recognized in psychology and general research. Uses the Landis-Koch interpretation scale (0.61-0.80 = "substantial").
  • Krippendorff's Alpha: Preferred in communication research and content analysis. Uses stricter thresholds (≥0.80 = "reliable").

The key difference: Alpha handles missing data better. In qualcode.ai, Alpha includes unclassifiable responses (treated as missing data), while Kappa excludes them. For thorough reporting, include both metrics with their respective interpretation scales.

When in doubt: Report both metrics. They measure the same concept (inter-rater reliability) with different assumptions, and including both demonstrates methodological rigor.
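For intuition, here is a minimal nominal-scale Krippendorff's alpha for two raters, built from the standard coincidence matrix. This is a sketch of the textbook formula, not qualcode.ai's implementation; with only two raters, a unit missing one value contributes no pairs and drops out of the matrix:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(rater_a, rater_b):
    """Nominal-scale alpha for two raters; None marks a missing value."""
    # Coincidence matrix: each fully coded unit contributes both ordered
    # pairs (a, b) and (b, a), each weighted 1 / (m_u - 1) = 1.
    coincidences = Counter()
    for a, b in zip(rater_a, rater_b):
        if a is None or b is None:
            continue  # units with fewer than two values carry no pairs
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    n_c = Counter()
    for (c, _k), count in coincidences.items():
        n_c[c] += count
    n = sum(n_c.values())
    if n <= 1:
        return None  # not enough paired values to estimate reliability
    observed = sum(v for (c, k), v in coincidences.items() if c != k) / n
    expected = sum(n_c[c] * n_c[k]
                   for c, k in permutations(n_c, 2)) / (n * (n - 1))
    return 1.0 - observed / expected if expected else 1.0
```

Unlike kappa's per-rater marginals, alpha pools both raters' values into one distribution, so the two metrics can differ slightly even on fully coded data.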

What AI models do you use?

qualcode.ai uses different models for coding runs vs. category suggestions:

Coding Runs

For coding runs, you can choose a model tier based on your needs:

Tier       OpenAI (Rater A)   Anthropic (Rater B)
Budget     GPT-4.1-nano       Claude 3 Haiku
Standard   GPT-4.1-mini       Claude Haiku 4.5
Quality    GPT-4.1            Claude Sonnet 4.5

Auto-Suggest (Category Discovery)

Auto-suggest always uses premium models for the highest quality suggestions, plus a third semantic merge pass to reconcile overlapping category ideas:

Role             Provider               Model
Rater A          OpenAI                 GPT-5.2 (with extended reasoning)
Rater B          Anthropic              Claude Opus 4.5 (with extended thinking)
Semantic merge   qualcode.ai workflow   Third AI pass for semantic reconciliation

There is no model tier selection for auto-suggest; it uses the best available suggestion workflow automatically.

What temperature settings do you use for the AI models?

We use temperature 0.0 for all classification tasks. Temperature controls how deterministic or random AI outputs are — lower values produce more consistent results.

This choice is based on:

  • Academic research: Studies show that for classification tasks, temperature 0.0–1.0 produces no significant accuracy difference, but lower temperatures maximize reproducibility (Renze & Guven, 2024, EMNLP Findings).
  • Provider guidance: Both OpenAI and Anthropic recommend temperature near 0.0 for "analytical" and "multiple choice" tasks, which includes classification.
  • Qualitative coding research: A study specifically on LLM-based qualitative coding found that only temperatures ≤0.5 showed statistically reliable accuracy improvements (Soria et al., 2025, arXiv:2507.11198).

Note: Even at temperature 0.0, AI outputs are not perfectly deterministic due to technical factors like floating-point arithmetic. This is why inter-rater agreement metrics (Kappa, Alpha) are the true measure of reliability — not temperature alone.


Data & Privacy

Where is my data stored?

All data is stored in EU data centers (Frankfurt, Germany). We are fully GDPR compliant. For AI classification, survey text is transmitted to US-based providers (OpenAI, Anthropic) under EU Standard Contractual Clauses. Data is processed transiently and not retained by AI providers. See our Trust Center for details.

Is my data used to train AI models?

No. We use OpenAI and Anthropic's enterprise APIs with data processing agreements that explicitly prohibit training on customer data. Your survey responses are processed for coding only and are never used to improve AI models.

How long is data retained?

Data is retained while your account is active. You can delete projects and data files at any time. The deletion process:

  • Deleted items move to Trash for 30 days (recoverable)
  • After 30 days in Trash, data is automatically and permanently deleted by our daily cleanup process
  • You can empty Trash manually at any time for instant permanent deletion

Can I get a Data Processing Agreement (DPA)?

Yes. Institutional customers receive a DPA as part of their license agreement. Individual researchers can request a DPA by contacting sales@qualcode.ai.

For German institutions: Our processing is compliant with DSGVO requirements. We can provide documentation for your data protection officer upon request.

What access logs do you keep?

For security and compliance, we maintain access logs for:

  • Authentication events: Successful and failed login attempts
  • Admin actions: User management and configuration changes
  • Data exports: When coding results are downloaded

Access logs are retained for 90 days and are used only for security monitoring and responding to data subject requests. They do not contain survey response content.


Credits & Billing

How are credits calculated?

Credits are calculated using an additive formula:

training_overhead = ceil(min(training_tokens, 100000) / 10000)
base_cost = responses + training_overhead
total_credits = ceil(base_cost * tier_factor)

  • Base cost: Number of responses plus training data overhead (0-10 credits max)
  • Tier factor: Budget (1.0x), Standard (1.5x), or Quality (3.0x)
  • Training overhead: A small additive cost based on how much training data you include

The exact credit cost is always shown before you confirm a coding run. See Credits for detailed examples.
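As a worked illustration of the formula above (a sketch for estimation only, not the billing code; the function and dictionary names are hypothetical):

```python
import math

# Tier factors as listed in the formula's bullet points.
TIER_FACTORS = {"budget": 1.0, "standard": 1.5, "quality": 3.0}

def credit_cost(responses: int, training_tokens: int, tier: str) -> int:
    # Training overhead is capped: at most 100,000 tokens count,
    # at 1 credit per 10,000 tokens, so it adds 0-10 credits.
    training_overhead = math.ceil(min(training_tokens, 100_000) / 10_000)
    base_cost = responses + training_overhead
    return math.ceil(base_cost * TIER_FACTORS[tier])

# 500 responses with 25,000 training tokens at Standard:
# overhead = ceil(25000 / 10000) = 3; base = 503; ceil(503 * 1.5) = 755
print(credit_cost(500, 25_000, "standard"))  # 755
```

This also shows why 500 free credits cover roughly 330 responses at Standard quality: ceil(333 × 1.5) = 500.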

Do credits expire?

Free credits expire 12 months after issuance if unused. Purchased credits never expire while your account remains active. There are no monthly subscriptions or use-it-or-lose-it policies for purchased credits.

Can I get a refund?

Unused credits can be refunded within 30 days of purchase. Contact support@qualcode.ai with your refund request.

For coding runs that fail due to system errors, unused credits are automatically refunded to your account. Cancelled coding runs are refunded pro-rata based on the unprocessed portion of the job.

Do you offer academic discounts?

Institutional annual licenses include volume pricing. Universities and research institutes can contact sales@qualcode.ai for annual licensing options starting at EUR 2,500/year.

Individual researchers can start with free credits or use pay-as-you-go credit packages with no subscription required.


Technical

What browsers are supported?

qualcode.ai works best in modern browsers:

  • Google Chrome (latest)
  • Mozilla Firefox (latest)
  • Apple Safari (latest)
  • Microsoft Edge (latest)

Mobile browsers work for reviewing results, but we recommend desktop for the best experience when creating coding guides and reconciling disagreements.

Is there an API?

API access is planned for a future release. Currently, qualcode.ai is web-only. If you have specific integration needs, contact us to discuss your requirements.

Can I export to SPSS?

Yes. Export options include:

  • CSV: Compatible with Excel, R, Python, and most analysis tools
  • SPSS-ready: Formatted for direct import into SPSS with variable labels

See Export Formats for detailed information about export options.


Support

How do I report a bug?

To report a bug, contact us at support@qualcode.ai.

Please include: what you were trying to do, what happened instead, and any error messages you saw. Screenshots are helpful.

Is there a status page?

Yes. Check status.qualcode.ai for real-time system status and scheduled maintenance announcements.

How quickly do you respond to support requests?

We aim to respond to all support requests within 1-2 business days. Complex technical issues may take longer to investigate and resolve.

Check the docs first: Many questions are answered in our documentation. Use the navigation menu to explore topics, or contact us if you can't find what you need.


Still Have Questions?

We're here to help. Contact us at support@qualcode.ai and we'll get back to you within 1-2 business days.