Productivity Gains

Measuring and documenting improvements in output per unit of labor as specialization matures.

Why This Matters

Productivity gains are how specialization pays for itself. The specialist who earns above-baseline compensation must produce enough additional value to justify that premium and still leave the community better off. Without measuring productivity, the community cannot verify whether its specialization investments are paying off — it simply believes they are, or it disputes them, or it maintains them by inertia regardless of evidence.

Measurement also creates accountability and improvement incentive. A blacksmith who knows that tool-per-day output is tracked has a concrete productivity target to work toward. A crop planner whose yield-per-hectare figures are documented year-over-year can see their own improvement, and the community can see whether their planning is producing results.

Most importantly, productivity data drives future investment decisions. Where are the largest gains from additional specialization or capital investment? Where have gains plateaued and additional investment would be wasted? Data answers these questions; intuition and politics produce unreliable answers.

What to Measure

Every specialist role has one to three primary output metrics that capture its core contribution:

Blacksmith/metalworker: tools produced per month (broken down by type), tools repaired per month, percentage of community tool needs met without importing.

Farmer/agricultural coordinator: yield per hectare (by crop, by field), caloric production per labor-hour invested, percentage of planned planting completed on schedule.

Potter/ceramicist: vessels produced per month (by type), defect rate (percentage that crack or fail), percentage of community container needs met.

Healer/medical practitioner: cases treated per month (rough categories: minor injury, major injury, illness, pregnancy), adverse outcome rate (cases that worsened under care), response time to urgent cases.

Teacher: enrollment (number of students), advancement rate (percentage meeting grade-level standards), literacy rate in the community’s under-20 population.

Trade coordinator: total value of goods traded (in common units), favorable exchange rate achievement (what percentage of trades met or exceeded reference rates), number of active trade relationships.

Do not over-measure. Three metrics per role is sufficient. More creates administrative overhead and metrics games (optimizing for the measured metrics while neglecting unmeasured ones). Pick the metrics that most directly reflect the role’s core contribution.

Calculating Productivity

Productivity = output / input. For most community roles, input is time (labor-hours) and output is units of goods or services produced.

Example: the blacksmith works 200 labor-hours in a month and produces 40 tools. Productivity = 40 tools / 200 hours = 0.2 tools per hour. Next month, after purchasing a new hammer and improving their forge layout, the blacksmith produces 50 tools in the same 200 hours. New productivity = 0.25 tools per hour. Productivity improvement = 25%.
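The calculation above can be sketched in a few lines; the figures (200 hours, 40 then 50 tools) are the blacksmith numbers from the example.

```python
# Productivity = output / input, with input measured in labor-hours.

def productivity(output_units: float, labor_hours: float) -> float:
    """Output per labor-hour."""
    return output_units / labor_hours

def improvement(before: float, after: float) -> float:
    """Percentage change relative to a baseline productivity figure."""
    return (after - before) / before * 100

month_1 = productivity(40, 200)   # 0.2 tools per hour
month_2 = productivity(50, 200)   # 0.25 tools per hour

print(f"Month 1: {month_1:.2f} tools/hour")
print(f"Month 2: {month_2:.2f} tools/hour")
print(f"Improvement: {improvement(month_1, month_2):.0f}%")
```

The same two functions work for any role whose input is time: swap tools for vessels, cases, or trades.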

This number has meaning only in context. Is 0.2 tools/hour good or poor? Compare it to: what was produced before a dedicated blacksmith existed, what neighboring communities’ smiths produce, what the community needs to maintain its tool inventory. Context transforms a raw number into useful information.

Documenting and Using Productivity Data

Record productivity monthly or quarterly for active specialist roles. A simple table: date, role, output quantity, labor-hours, and the resulting productivity ratio. Store in the community’s central records.
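The record table described above can be kept as a simple ledger; this is one possible layout, with illustrative field and date names rather than a prescribed format.

```python
# One row per period per role: date, role, output, labor-hours,
# with the productivity ratio derived rather than stored.
from dataclasses import dataclass

@dataclass
class ProductivityRecord:
    date: str           # period label, e.g. "Y3-M07" for year 3, month 7
    role: str
    output_qty: float   # units produced in the period
    labor_hours: float

    @property
    def ratio(self) -> float:
        return self.output_qty / self.labor_hours

ledger = [
    ProductivityRecord("Y3-M06", "blacksmith", 40, 200),
    ProductivityRecord("Y3-M07", "blacksmith", 50, 200),
]

for r in ledger:
    print(f"{r.date}  {r.role:<12} {r.output_qty:>5} units  "
          f"{r.labor_hours:>5} h  {r.ratio:.2f}/h")
```

Deriving the ratio on demand instead of writing it down avoids the most common bookkeeping error: a stored ratio that no longer matches the quantities it was computed from.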

Use productivity data in four ways:

Annual specialist review: compare this year’s productivity to last year’s. If it is improving, the specialist is developing their skills. If it is flat or declining, investigate why — tool degradation, health issues, or skill plateau are common causes.

Compensation decisions: productivity data supports differential compensation decisions. A blacksmith whose productivity is 40% above baseline for the role justifies higher compensation than one at baseline. This is objective rather than political.

Investment justification: if productivity data shows that the blacksmith produces 0.2 tools/hour and would need 300 labor-hours to meet community demand (60 tools), but only has 200 hours available (40 tools), the gap of 20 tools is the productivity shortfall. Options to close it: hire a second smith, invest in better equipment that raises productivity per hour, or accept the shortfall. Each option has a cost; productivity data defines the size of the problem that cost must address.
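The gap arithmetic from the blacksmith example, made explicit; the same three numbers (productivity rate, hours needed, hours available) frame every option for closing the shortfall.

```python
rate = 0.2              # tools per labor-hour (measured productivity)
hours_needed = 300      # labor-hours required to meet community demand
hours_available = 200   # labor-hours the smith actually has

demand = rate * hours_needed       # 60 tools needed
supply = rate * hours_available    # 40 tools produced
shortfall = demand - supply        # 20 tools short

# Two equivalent framings of what it takes to close the gap:
extra_hours_needed = shortfall / rate      # more labor (a second smith)
required_rate = demand / hours_available   # higher productivity (better equipment)

print(f"Shortfall: {shortfall:.0f} tools")
print(f"Close it with {extra_hours_needed:.0f} extra labor-hours, "
      f"or by raising productivity to {required_rate:.2f} tools/hour")
```

Restating the shortfall as "100 extra hours" or "0.3 tools/hour" lets the community price each option directly against the cost of hiring or equipping.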

Progress tracking for improvements: when an improvement initiative is launched (new technique, new tool, better workspace organization), before/after productivity data shows whether it actually worked. Without measurement, “improvements” may produce no actual productivity change.

Avoiding Productivity Measurement Pathologies

Be careful not to let productivity metrics crowd out quality. A potter who produces 30 vessels per month at a 20% defect rate is not meaningfully more productive than one who produces 20 vessels at a 2% defect rate — the actual output of usable vessels is similar (24 versus 19.6), but the first potter wastes far more material and labor on rejects. Include quality metrics (defect rates, rework rates, adverse outcomes) alongside quantity metrics.
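The potter comparison reduces to discounting raw quantity by the defect rate; a minimal sketch:

```python
# Quality-adjusted output: units produced, discounted by the defect rate.

def usable_output(produced: int, defect_rate: float) -> float:
    """Units that survive as usable goods."""
    return produced * (1 - defect_rate)

fast_potter = usable_output(30, 0.20)     # 24.0 usable vessels
careful_potter = usable_output(20, 0.02)  # 19.6 usable vessels

print(f"Fast potter:    {fast_potter:.1f} usable, "
      f"{30 - fast_potter:.0f} vessels' worth of material wasted")
print(f"Careful potter: {careful_potter:.1f} usable, "
      f"{20 - careful_potter:.1f} vessels' worth of material wasted")
```

Tracking usable output rather than raw output builds the quality metric into the productivity figure itself, so the fast-but-sloppy potter cannot look better on paper than in the storehouse.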

Also watch for measurement gaming: people changing their behavior to improve the metric without improving actual output. This is usually a sign that the metric is poorly designed. If the blacksmith stops making complex but needed tools and shifts to simple ones because they are faster, boosting units-per-hour while the community’s actual tool needs go unmet, the metric has been gamed. Counter with a metric that reflects community needs met, not just units produced.