By Dave Howden
The dominant framing of workforce challenges in New Zealand and Australia is the “skills shortage”. This framing is incomplete. It directs attention toward increasing enrolments, when the real constraint is how many learners complete and how efficiently providers convert enrolments into qualified, workforce-ready people.
What we actually have is a training throughput shortage.
Training Throughput = (Educator Capacity − Admin Drag) × Completion Reliability
This equation captures the operational reality of vocational delivery. Educator capacity is finite and expensive. Administrative drag consumes that capacity without improving learner outcomes. And completion reliability determines whether enrolled learners actually finish and enter the workforce.
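To make the arithmetic concrete, here is a minimal sketch in Python. Every figure in it is a hypothetical assumption chosen for illustration (a 40-hour week, drag within the 30–50% range discussed below, a 70% completion rate), not measured provider data, and the function name is ours, not a standard metric.

```python
# Minimal sketch of the throughput equation.
# All numbers are illustrative assumptions, not measured provider data.

def training_throughput(capacity_hours: float, drag_hours: float,
                        completion_reliability: float) -> float:
    """(Educator Capacity - Admin Drag) x Completion Reliability."""
    return (capacity_hours - drag_hours) * completion_reliability

# Baseline: 40-hour week, 40% lost to drag, 70% of learners complete.
baseline = training_throughput(40, 16, 0.70)       # 16.8 effective hours

# Option A: halve the drag; same headcount, same completion rate.
less_drag = training_throughput(40, 8, 0.70)       # 22.4 effective hours

# Option B: add 20% more capacity, but drag scales with headcount.
more_staff = training_throughput(48, 19.2, 0.70)   # ~20.2 effective hours

print(baseline, less_drag, more_staff)
```

On these assumed numbers, halving drag outperforms a 20% headcount increase, because new staff inherit the same leaks. The sketch also shows why the terms multiply rather than add: reclaimed hours only become graduates when completion reliability converts them.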
When throughput is constrained, adding more enrolments creates pressure without producing outcomes. Providers become overloaded. Quality suffers. Educators burn out. Industry trust erodes.
The New Zealand Tertiary Education Strategy 2025–2030 explicitly prioritises completion rates, employer relevance, and provider efficiency. It introduces concepts like “distance travelled” and calls for stronger alignment between tertiary education and workforce outcomes.
This policy direction is welcome. But policy alone cannot redesign the workflows that create throughput constraints. That work happens inside providers, and it requires a clear understanding of where capacity leaks and what can be fixed. This briefing provides that understanding.
Throughput leaks occur when system design forces educators to spend time on activities that do not directly improve learner competence or completion. Three categories account for most leakage.
Administrative drag is everything that pulls educators away from teaching, assessment, and learner support. In many providers, this consumes 30–50% of an educator’s working week.
Common sources of drag:
Duplicate data entry across enrolment, LMS, assessment, and reporting systems
Post-hoc evidence capture: documenting competence after the fact rather than as delivery happens
Moderation rework: assessment decisions revisited due to unclear rubrics or inconsistent standards
Compliance documentation that no one reads but everyone must complete
Reporting for reporting’s sake: metrics that serve audit but not improvement
Drag accumulates invisibly. No single task seems unreasonable. But in aggregate, these tasks crowd out the work that actually produces completions.
Completion reliability is the proportion of enrolled learners who actually finish their qualification.
When reliability is low, every hour of educator effort yields fewer workforce-ready graduates.
What erodes completion reliability:
Slow onboarding: learners disengage before delivery properly begins
Unclear expectations: learners don’t understand what “competent” looks like
Delayed feedback: assessment bottlenecks slow progression
Evidence burden on learners: workplace learners become administrative clerks
Life circumstances: work, family, and financial pressures (provider influence is limited but not zero)
Educator capacity is the fundamental input. Qualified vocational educators are scarce, expensive to recruit, and difficult to retain. Most providers cannot simply hire their way to higher throughput.
This makes protecting existing capacity essential. When drag increases, capacity effectively shrinks — even if headcount remains stable. And when educators burn out and leave, the cost of replacement far exceeds the cost of prevention.
Three patterns recur across vocational providers in New Zealand and Australia. Each appears to serve quality or compliance. Each, in practice, creates rework and drag.
Moderation is intended to ensure consistency and quality in assessment decisions. In principle, it builds trust in credentials. In practice, it often becomes a rework engine.
When rubrics are vague, different assessors interpret standards differently. Moderation then catches these inconsistencies, but only after the assessment has been completed. The result is rework: revisiting decisions, gathering additional evidence, re-marking submissions.
The fix is not more moderation. It is clearer rubrics and better upfront calibration. Quality improves when assessors agree before they assess, not after.
Evidence of competence is essential for assurance. But in many providers, evidence capture is designed as a separate administrative task, bolted on after delivery rather than embedded within it.
Educators teach, then document. Learners demonstrate competence, then chase paperwork.
Workplace supervisors observe performance, then fill out forms days later.
This creates drag for everyone and produces lower-quality evidence. The best evidence is captured in the moment, as a byproduct of delivery, not reconstructed afterwards.
Compliance exists to build assurance — confidence that providers are delivering quality education and that credentials mean something. But compliance activities often drift from this purpose.
Providers complete documentation because it is required, not because it improves outcomes. Auditors sample evidence that was created for sampling, not evidence that reflects actual practice. The ritual of compliance substitutes for the substance of assurance.
When compliance becomes performative, it consumes capacity without building trust. The goal should be assurance-by-design: systems where evidence of quality is generated automatically, as a byproduct of good delivery.
The following questions help leadership teams assess throughput constraints in their own organisations. These are not audit questions. They are operational questions, designed to surface where capacity leaks and what might be fixable.
What percentage of educator time goes to post-hoc documentation rather than delivery or assessment?
How often does moderation trigger rework rather than confirmation?
How many systems does a single learner record touch between enrolment and completion?
What is the average time from evidence generation to assessment decision?
What proportion of compliance documentation is ever read by anyone other than auditors?
If we increased enrolments by 20% tomorrow, what would break first?
Where do our most experienced educators spend their non-teaching time?
What would need to change for completion rates to improve by 10% without additional headcount?
These questions are starting points. The answers will be organisation-specific. But the patterns they reveal tend to be consistent: capacity leaking to activities that don’t improve outcomes, and systems designed for compliance rather than delivery.
Improving throughput does not require additional funding. It requires redesigning workflows so that quality and efficiency reinforce each other. A well-designed system has three characteristics.
Evidence of competence is captured as a natural byproduct of delivery, not as a separate administrative task. When a learner demonstrates a skill, the evidence is generated in that moment, through observation records, work samples, or system logs. Documentation happens alongside delivery, not after it.
Assessment decisions (AI-derived or otherwise) follow clear, consistent pathways. Rubrics are unambiguous. Calibration happens before assessment, not during moderation. When edge cases arise, escalation routes are defined. The system produces consistent decisions without requiring constant human intervention to resolve ambiguity.
Humans remain central to competency decisions. But they are supported by systems that reduce cognitive load, surface relevant evidence, and flag exceptions. Technology handles the routine; humans handle the judgement. This is not about replacing educators. It is about protecting their time for work that requires expertise.
Three immediate actions for leadership teams:
Audit where educator hours actually go. Not workload allocation — actual time. Identify the tasks that consume capacity without improving outcomes. This is the foundation for any improvement effort.
Map the evidence journey. Follow a single piece of evidence from generation to assessment decision. Count the handoffs, the delays, and the rework loops. This reveals where the system creates friction; a small worked sketch follows these three actions.
Ask the completion question. What would need to change for completion rates to improve by 10% without additional headcount? This reframes the conversation from “more resources” to “better systems.”
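The evidence-journey map in the second action can start as small as the sketch below. The step names, dates, and rework flag are hypothetical assumptions; the point is simply to count handoffs, rework loops, and elapsed time for one real piece of evidence.

```python
from datetime import date

# Hypothetical evidence journey for one workplace observation.
# Step names, dates, and the rework flag are illustrative assumptions.
journey = [
    ("observation recorded on site", date(2025, 3, 3), False),
    ("evidence uploaded to LMS",     date(2025, 3, 7), False),
    ("assessor review",              date(2025, 3, 14), False),
    ("moderation",                   date(2025, 3, 28), True),  # triggers rework
    ("re-marked decision",           date(2025, 4, 4), False),
]

handoffs = len(journey) - 1
rework_loops = sum(1 for _, _, rework in journey if rework)
elapsed_days = (journey[-1][1] - journey[0][1]).days

print(f"{handoffs} handoffs, {rework_loops} rework loop(s), "
      f"{elapsed_days} days from evidence generation to final decision")
```

The elapsed-days figure is the same quantity the earlier diagnostic question asks about: average time from evidence generation to assessment decision. Once a provider has mapped even one journey this way, the friction points tend to be obvious.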
The throughput constraint is not a funding problem.
It is a design problem. And design problems can be solved.
Dave Howden is CEO and co-founder of SupaHuman, an AI platform company focused on vocational education across New Zealand and Australia. Dave is working closely with PTEs, polytechnics, and RTOs to understand the operational realities of training delivery.
SupaHuman works with vocational education providers to reduce administrative friction and improve learner outcomes. Our work is grounded in the belief that vocational education is too important to be buried in admin.