Methods section template for qualcode.ai
Copy a publication-ready methods paragraph that describes the per-response, isolated dual-rater coding workflow, the agreement metrics, and the reconciliation process. Replace the bracketed placeholders with your study details.
What to customize
| Placeholder | Replace with |
|---|---|
| N = [X] | The number of open-ended responses coded. |
| model name | The actual OpenAI and Anthropic models used in your run. |
| framework | The theory, literature, or inductive basis for your categories. |
| [N] training examples | How many labeled training examples were provided to guide classification, if any. |
| agreement values | Your kappa, alpha, percent agreement, and interpretation. |
| reconciliation | Who reviewed disagreements and how final decisions were made. |
Copy this draft
Open-ended survey responses (N = [X]) were coded using qualcode.ai (https://qualcode.ai), an AI-assisted qualitative coding platform. Two independent large language models (OpenAI [model name] and Anthropic [model name]) coded each response in per-response isolated API calls (no shared context between responses), enabling calculation of inter-rater reliability. Models were configured with temperature 0.0 for classification tasks. Categories were defined by the research team based on [theoretical framework / prior literature / inductive analysis]. [N] training examples were provided to guide the AI classification. Inter-rater agreement between the two AI raters was [interpretation] (Cohen's κ = [value]; Krippendorff's α = [value]; percent agreement = [value]%). Disagreements were reviewed and reconciled by [research team / independent coders], and the final category assignments were used for subsequent analysis.
Tip: keep the description specific. Reviewers usually care less about branding language and more about who coded, how agreement was measured, and how disagreements were handled.
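If you want to show reviewers what "per-response isolated API calls" means in practice, the sketch below illustrates the pattern under stated assumptions: each response is sent to each model in its own request, with no shared conversation history and temperature 0.0. The model names, category list, prompt, and helper function are placeholders for illustration, not qualcode.ai's internal implementation.

```python
# Illustrative sketch only -- not qualcode.ai's implementation. It shows the
# "per-response isolated call" pattern: every response gets its own request,
# with no shared context, at temperature 0.0, from each of two independent raters.
from openai import OpenAI
import anthropic

openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CATEGORIES = ["barrier", "benefit", "neutral"]  # hypothetical coding frame
PROMPT = "Assign exactly one category from [{categories}] to this response:\n\n{text}"

def code_response(text: str) -> tuple[str, str]:
    """Return (openai_label, anthropic_label) for a single survey response."""
    prompt = PROMPT.format(categories=", ".join(CATEGORIES), text=text)

    openai_label = openai_client.chat.completions.create(
        model="gpt-4o",                    # placeholder model name
        temperature=0.0,
        messages=[{"role": "user", "content": prompt}],  # fresh context every call
    ).choices[0].message.content.strip()

    anthropic_label = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=20,
        temperature=0.0,
        messages=[{"role": "user", "content": prompt}],  # fresh context every call
    ).content[0].text.strip()

    return openai_label, anthropic_label
```

Because every call starts from an empty context, one response's labels cannot influence another's, which is what makes the per-response reliability statistics meaningful.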
Recommended structure
- State what was coded and how many responses were included.
- Describe the dual-rater workflow and the models used.
- Explain how you defined the categories and whether training data was used.
- Report the agreement statistics and what happened when raters disagreed.
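If you want to verify the agreement values before reporting them, the following minimal sketch computes Cohen's kappa, Krippendorff's alpha, and percent agreement from two raters' category assignments. It assumes the scikit-learn and krippendorff Python packages are installed; the example labels are invented for illustration and are not drawn from any real study.

```python
# Minimal sketch for reproducing the reported agreement statistics from
# two raters' labels. Example data is hypothetical.
from sklearn.metrics import cohen_kappa_score
import krippendorff

# One label per response from each AI rater (same order of responses).
rater_a = ["barrier", "benefit", "barrier", "neutral", "benefit"]
rater_b = ["barrier", "benefit", "neutral", "neutral", "benefit"]

# Percent agreement: share of responses where both raters chose the same category.
percent_agreement = 100 * sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa: chance-corrected agreement for two raters.
kappa = cohen_kappa_score(rater_a, rater_b)

# Krippendorff's alpha (nominal): map labels to integer codes for the package.
categories = sorted(set(rater_a) | set(rater_b))
to_code = {c: i for i, c in enumerate(categories)}
alpha = krippendorff.alpha(
    reliability_data=[[to_code[x] for x in rater_a],
                      [to_code[x] for x in rater_b]],
    level_of_measurement="nominal",
)

print(f"Cohen's kappa = {kappa:.2f}; Krippendorff's alpha = {alpha:.2f}; "
      f"percent agreement = {percent_agreement:.0f}%")
```

Percent agreement is the most transparent of the three but does not correct for chance agreement, which is why kappa and alpha are typically reported alongside it.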
Need a shorter version?
If you only need one sentence for a methods section or appendix, use this summary: "Responses were coded using qualcode.ai, where two independent AI raters coded each response in isolated API calls; inter-rater agreement was measured via Cohen's kappa, and disagreements were reconciled by the research team."