Definition
A skill packages a question, instruction set, and output expectation into a reusable research artifact. It is the unit you scale from one persona to many.
Good skills isolate one learning objective, reduce ambiguity, and make outputs comparable across runs.
Operational model
- Intent layer: what decision or hypothesis this skill should inform.
- Instruction layer: context, constraints, and response format guidance.
- Execution layer: synchronous, asynchronous, or streaming prompt delivery.
- Analysis layer: normalized responses that support cohort/population comparison.
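The four layers above can be modeled as a single skill record. A minimal sketch in Python; all field names here are assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    """One reusable research artifact: intent + instructions + output expectation."""
    objective: str          # intent layer: the decision or hypothesis to inform
    instructions: str       # instruction layer: context, constraints, framing
    output_format: str      # instruction layer: expected response shape
    delivery: str = "sync"  # execution layer: "sync", "async", or "stream"

    def to_prompt(self) -> str:
        # The analysis layer depends on every run seeing the same assembled
        # prompt, so assembly is deterministic.
        return (
            f"Objective: {self.objective}\n"
            f"Instructions: {self.instructions}\n"
            f"Respond as: {self.output_format}"
        )

skill = Skill(
    objective="Gauge willingness to pay for feature X",
    instructions="Answer as yourself; give one price point only.",
    output_format="A single USD amount, e.g. $12",
)
```

Freezing the dataclass keeps a skill immutable once authored, which is what makes outputs comparable across runs.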
Execution workflow
1. Scope
Define one explicit objective and a measurable success signal for the response.
2. Author
Write the skill prompt with clear framing, constraints, and expected output shape.
3. Calibrate
Run against a small persona sample to detect ambiguity, bias, or unintended leading language.
4. Scale
Deploy the same skill to full populations or studies to collect comparable outcomes.
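The calibrate-then-scale steps can be sketched as two plain functions. The persona list and the `run` executor below are hypothetical stand-ins for whatever delivery mechanism you use:

```python
import random

def calibrate(prompt, personas, run, sample_size=5):
    """Run the skill against a small, diverse persona sample before scaling."""
    sample = random.sample(personas, min(sample_size, len(personas)))
    return {p: run(p, prompt) for p in sample}

def scale(prompt, personas, run):
    """Deploy the identical prompt to the full population for comparable outcomes."""
    return {p: run(p, prompt) for p in personas}

personas = [f"persona-{i}" for i in range(20)]
run = lambda persona, prompt: f"{persona} answered"  # stand-in executor

pilot = calibrate("Would you pay $12/month for X?", personas, run)
full = scale("Would you pay $12/month for X?", personas, run)
```

The key invariant is that `scale` receives the exact prompt string that survived calibration; reviewing the `pilot` responses for ambiguity or leading language happens between the two calls.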
Quality checklist
- Prompt objective is singular and decision-oriented.
- Instructions avoid assumptions that overwrite persona identity.
- Output constraints are explicit enough for downstream analysis.
- Calibration runs include diverse personas before broad launch.
- Prompt revisions are tracked between runs to preserve comparability.
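Tracking prompt revisions between runs (the last checklist item) can be as lightweight as tagging each run with a hash of the exact wording. A minimal sketch:

```python
import hashlib

def prompt_version(prompt: str) -> str:
    """Stable short version tag; any wording change yields a new tag."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:8]

v1 = prompt_version("Would you pay $12/month for X?")
v2 = prompt_version("Would you pay $12 per month for X?")
assert v1 != v2  # even a small rewording breaks comparability, so flag it
```

Storing this tag alongside each run lets downstream analysis group responses by prompt version instead of silently mixing reworded variants.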
Failure modes to avoid
- Combining multiple research goals in one skill, creating non-actionable outputs.
- Leading language that nudges personas toward expected answers.
- Changing prompt wording mid-study without version tracking.
- Overloading prompts with long context blocks that degrade response focus and quality.
Related platform APIs
These APIs are the main building blocks for putting skills into production workflows.
- /personas/prompt: Run a skill-style prompt synchronously.
- /personas/prompt/stream: Stream token-level response output.
- /personas/prompt/jobs/:jobId: Poll async prompt execution.
- /personas/prompt/sessions: Review prompt session history.
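The sync and async paths above can be sketched as request builders. The base URL, auth, and payload field names are assumptions for illustration, not the platform's documented schema:

```python
import json

BASE = "https://api.example.com"  # placeholder; real base URL and auth not shown

def sync_prompt_request(persona_id: str, prompt: str) -> dict:
    """Assemble a synchronous call to /personas/prompt (payload fields assumed)."""
    return {
        "method": "POST",
        "url": f"{BASE}/personas/prompt",
        "body": json.dumps({"personaId": persona_id, "prompt": prompt}),
    }

def poll_job_request(job_id: str) -> dict:
    """Assemble a poll against /personas/prompt/jobs/:jobId for async runs."""
    return {"method": "GET", "url": f"{BASE}/personas/prompt/jobs/{job_id}"}

req = sync_prompt_request("p-42", "Would you pay $12/month for X?")
poll = poll_job_request("job-1")
```

Choosing between the two is the execution-layer decision: synchronous for calibration runs where you inspect responses immediately, async with polling when scaling to full populations.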