Competency Testing

How to assess whether someone has actually mastered a skill before assigning them to critical roles.

Why This Matters

Assigning someone to a specialist role based on seniority, family connection, or self-reported skill is a common and dangerous mistake. The community’s blacksmith who cannot properly temper steel will produce brittle tools that break at critical moments. The midwife who missed basic training will cost lives. The surveyor who cannot read a compass accurately will produce land maps that create decades of disputes.

Competency testing converts the abstract question “is this person skilled enough?” into a concrete, observable answer. It protects the community from credential inflation (people claiming skills they do not have), protects trainees from being graduated before they are ready, and creates a legitimate basis for differential compensation. A person earning 40% above baseline because of certified mastery has a recognized, verifiable claim to that premium.

Testing also motivates learning. When advancement requires demonstrated competency rather than time served, trainees take their learning seriously and practice beyond the minimum.

Designing a Test

A good competency test is:

  • Observable: the evaluators can directly watch or examine what the candidate produces
  • Reproducible: if administered twice, the same candidate gets the same result
  • Valid: it actually measures the skill it claims to measure, not a proxy
  • Difficult to fake: performance under observation, with evaluators asking questions, is much harder to fake than written credentials

Design tests around tasks, not knowledge. A blacksmith candidate who can answer questions about metallurgy but cannot produce a functional hinge under observation is not a qualified blacksmith. A midwife who knows the stages of labor but cannot perform fundal assessment manually is not a qualified midwife.

For each specialist role, define three to five critical tasks that the role requires. Test each task directly. The candidate should:

  1. Perform the task from start to finish with minimal prompting
  2. Be able to explain what they are doing and why at any step when asked
  3. Recognize and describe what could go wrong and how they would respond

A candidate who can perform but not explain has mechanical skill without understanding — a problem when novel situations arise. A candidate who can explain but not perform has theoretical knowledge without practice — also a problem. Require both.
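The perform-and-explain requirement can be expressed as a simple scoring rule. The following sketch is purely illustrative (the task names and the three-criterion structure per task are assumptions for the example, not a prescribed form):

```python
# Illustrative sketch: each critical task is scored on three criteria
# (perform, explain, anticipate failure modes). A task counts as passed
# only if ALL three hold -- mechanical skill or theory alone is not enough.

def task_passed(perform: bool, explain: bool, anticipate: bool) -> bool:
    """A candidate must perform, explain, and anticipate; any gap fails the task."""
    return perform and explain and anticipate

def candidate_passed(task_results: dict[str, tuple[bool, bool, bool]]) -> bool:
    """Every one of the three to five critical tasks must pass."""
    return all(task_passed(*scores) for scores in task_results.values())

# Hypothetical blacksmith evaluation:
results = {
    "forge a functional hinge": (True, True, True),
    "temper a chisel edge": (True, False, True),  # performed, but could not explain why
    "repair a broken scythe blade": (True, True, True),
}
print(candidate_passed(results))  # False: one task lacked explanation
```

The all-three rule directly encodes the point above: a pass on performance alone does not certify understanding.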

Levels of Certification

A binary pass/fail is too coarse for most skilled roles. Use a three-level system:

Level 1 — Basic competency: can perform standard tasks reliably with appropriate tools and conditions. Qualifies for routine work under reduced supervision and for continued supervised practice. Does not qualify to train others or to handle exceptional cases independently.

Level 2 — Journeyman competency: can handle the full range of expected cases, including moderately unusual situations, with confidence. Qualifies for independent work and for supervising Level 1 practitioners.

Level 3 — Master competency: can handle extreme cases, innovate within the field, train others effectively, and evaluate other practitioners. Qualifies for maximum compensation premium, training responsibility, and participation in certification panels.

A person typically achieves Level 1 at the end of a standard apprenticeship, and Level 2 after two to five years of journeyman practice. Level 3 is rare and may take many additional years of deliberate practice; not every practitioner reaches it.
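One way to keep the privileges of each level unambiguous is to write them down as explicit rules. This sketch models the three levels described above; the function names are illustrative, not an established scheme:

```python
# Illustrative model of the three-level certification scheme.
from enum import IntEnum

class Level(IntEnum):
    BASIC = 1        # standard tasks, reduced-supervision work
    JOURNEYMAN = 2   # independent work, may supervise Level 1
    MASTER = 3       # trains others, sits on certification panels

def may_work_independently(level: Level) -> bool:
    return level >= Level.JOURNEYMAN

def may_train_and_certify(level: Level) -> bool:
    return level >= Level.MASTER

print(may_work_independently(Level.BASIC))    # False
print(may_train_and_certify(Level.JOURNEYMAN))  # False
print(may_train_and_certify(Level.MASTER))    # True
```

Using an ordered level (rather than separate flags) matches the structure above: each level includes everything the lower levels permit.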

Who Does the Testing

The evaluator panel should include at least one person with mastery-level competency in the field being tested. A panel without any highly competent evaluator cannot reliably assess high-level performance.

The evaluator should not be the apprentice’s primary trainer. The trainer has an incentive to see their trainee pass (it reflects on them) and is too familiar with the candidate’s specific strengths and weaknesses to be objective. The primary trainer can be present and can provide context, but the evaluation decision should rest with independent evaluators.

For rare skills where only one community member holds mastery, the problem is harder. Options:

  • Invite a qualified evaluator from a neighboring community for testing
  • Allow the sole master to evaluate, but require them to document their reasoning in detail so the evaluation can be scrutinized
  • Use a community panel of senior practitioners across fields who evaluate observable task outcomes even if they cannot evaluate the technical quality themselves

The panel should be at least two people for any non-trivial certification. Single-evaluator assessment introduces too much personal bias.
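The panel rules above are mechanical enough to check automatically. The following sketch validates a proposed panel against them; the `Evaluator` structure and names are assumptions made for the example:

```python
# Illustrative check of the panel-composition rules: at least two evaluators,
# at least one with mastery (Level 3) in the field, and the candidate's
# primary trainer may attend but must not be an evaluator.
from dataclasses import dataclass

@dataclass
class Evaluator:
    name: str
    level: int                      # certification level in the field tested; 0 if none
    is_primary_trainer: bool = False

def panel_problems(panel: list[Evaluator]) -> list[str]:
    """Return a list of rule violations; an empty list means the panel is valid."""
    problems = []
    if len(panel) < 2:
        problems.append("panel must have at least two evaluators")
    if not any(e.level >= 3 for e in panel):
        problems.append("panel lacks a mastery-level (Level 3) evaluator")
    if any(e.is_primary_trainer for e in panel):
        problems.append("the primary trainer must not serve as an evaluator")
    return problems

# Hypothetical panel:
panel = [Evaluator("Mara", 3), Evaluator("Tomas", 2)]
print(panel_problems(panel))  # []: valid panel
```

For the rare-skill cases listed above, the mastery check would be relaxed or satisfied by an evaluator invited from a neighboring community.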

Record-Keeping

Document every competency evaluation: date, candidate, evaluators, tasks performed, outcome (pass/fail at each level), and any notable observations. Store records in the community’s central document repository.

These records serve multiple functions:

  • Reference if the certification is disputed later
  • Historical data showing which candidates had strong outcomes and which did not (useful for evaluating training programs)
  • Identification of patterns: if multiple graduates of the same trainer consistently fail at the same skill, the trainer’s approach needs correction

Certificates (a physical document stating that a named person achieved a named level of certification in a named field on a named date) should be issued and kept by the certified practitioner. They provide immediate evidence when the practitioner takes on a role with a new household or trades their services to a neighboring community.
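For communities keeping records digitally, the fields listed above map naturally onto a fixed record layout. This is a sketch of one possible layout (the field names, example names, and date are invented for illustration):

```python
# Illustrative record layout mirroring the documented fields: date, candidate,
# evaluators, tasks performed, outcome, and notable observations.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EvaluationRecord:
    when: date
    candidate: str
    evaluators: list[str]
    field_tested: str
    tasks_performed: list[str]
    level_awarded: Optional[int]     # None means the candidate did not pass
    observations: str = ""

# Hypothetical entry:
record = EvaluationRecord(
    when=date(2031, 4, 12),
    candidate="Ilsa Brenner",
    evaluators=["Mara", "Tomas"],
    field_tested="blacksmithing",
    tasks_performed=["forge hinge", "temper chisel edge"],
    level_awarded=1,
    observations="Strong forging; tempering explanation needed prompting.",
)
print(record.level_awarded)  # 1
```

A fixed layout makes the pattern-finding use above practical: records with identical fields can be compared across candidates and across trainers.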

Re-Testing and Recertification

Skills degrade without practice. A Level 2 practitioner who has been doing only Level 1 work for three years may no longer be able to handle the complex cases that justified their Level 2 status. Require recertification for high-stakes roles every five years, or immediately following any serious incident where the practitioner’s performance is in question.

Re-testing should be low-friction for practitioners who have been actively working at their certified level — it should confirm what they are already demonstrably doing. It should be genuinely challenging for practitioners who have been coasting or under-practicing.
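The five-year rule, plus the incident trigger, reduces to a small date calculation. A minimal sketch, assuming a five-year interval measured as 5 × 365 days (the exact day count is an assumption):

```python
# Illustrative sketch: recertification is due when five years have elapsed
# since the last certification, or immediately after a serious incident.
from datetime import date, timedelta

RECERT_INTERVAL = timedelta(days=5 * 365)  # assumed approximation of five years

def recert_due(last_certified: date, today: date, incident: bool = False) -> bool:
    """Due if the interval has elapsed or an incident has put
    the practitioner's performance in question."""
    return incident or (today - last_certified) >= RECERT_INTERVAL

print(recert_due(date(2026, 1, 1), date(2032, 6, 1)))        # True: over five years
print(recert_due(date(2030, 1, 1), date(2031, 1, 1)))        # False: one year
print(recert_due(date(2030, 1, 1), date(2031, 1, 1), True))  # True: incident
```

The incident flag overrides the interval entirely, matching the rule that a serious incident triggers immediate re-testing regardless of when the last certification happened.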