Making Training POPULAR: A Model for Meaningful Feedback

By Jason Fletcher, FAITD 

Feedback is essential for effective learning and development. It shows us whether our training is relevant, engaging, and genuinely making a difference. Many feedback and evaluation models are built for internal learning and development teams, making it difficult for external training providers to use them to collect feedback and data from a variety of clients.

This challenge inspired me to create the POPULAR model – a practical framework designed to measure the quality and impact of learning products while strengthening relationships with customers and learners.

Why Another Model?

Classic evaluation approaches like the Kirkpatrick Model and Phillips ROI Methodology remain useful, but they assume you have ready access to learners and their workplaces for extended follow-up. For an external training provider working with multiple organisations, that’s often not practical.

In response, I created a framework that is comprehensive and adaptable, capable of capturing feedback at various stages of the learner journey, and relevant to both our learning design team and our clients. 

The POPULAR Model

The POPULAR model combines well-known evaluation principles with a lens tailored for external providers. The acronym stands for:

  1. Pre-course: Assess the learner’s current skills and their pre-training expectations. This stage can occur from the initial marketing of a course through to enrolment.
  2. Opinion: Capture immediate reactions to elements such as learning materials, facilitators, venues, or virtual environments.
  3. Participation: Invite learners and facilitators to rate participants’ engagement. Reflection on participation is an adult learning principle often overlooked outside of coaching or mentoring contexts.
  4. Understanding: Assess whether learners knew what was expected of them, from the course logistics through to the learning objectives.
  5. Learning: Determine what participants learned, through assessment, self-appraisal or feedback.
  6. Application: Explore whether learners can and do apply new skills and knowledge at work. This is challenging to measure at course end, so post-training follow-ups and surveys can be used.
  7. Reflection: Encourage learners to consciously recall and integrate their learning into professional practice.

Each element can be assessed through targeted questions. For example:

  • “How easy was it to sign up for your course?” (Pre-course)
  • “List the three most practical things that you learned.” (Learning)
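For teams that manage their question banks digitally, the element-to-question mapping above can be sketched as a simple data structure. This is an illustrative example only, not part of the POPULAR model itself; the two quoted questions come from the article, while the remaining sample questions are assumptions written for demonstration.

```python
# A minimal sketch of a question bank keyed by the POPULAR elements.
# The element names follow the model; most sample questions here are
# illustrative placeholders, not the author's actual question bank.
QUESTION_BANK = {
    "Pre-course": ["How easy was it to sign up for your course?"],
    "Opinion": ["How would you rate the learning materials?"],
    "Participation": ["How actively did you engage during the session?"],
    "Understanding": ["Were the learning objectives clear before you started?"],
    "Learning": ["List the three most practical things that you learned."],
    "Application": ["Have you applied any new skills at work since the course?"],
    "Reflection": ["What will you do differently in your professional practice?"],
}

def questions_for(elements):
    """Return the pooled questions for the requested POPULAR elements."""
    return [q for element in elements for q in QUESTION_BANK[element]]
```

Keeping the elements in order also serves as a quick self-check: their initials should spell POPULAR, so no stage of the learner journey is missing from a survey design.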

Implementation in Practice

One advantage of the POPULAR model is its flexibility. Feedback can be collected before, during and after training, depending on the question. Tools might include surveys, polls, interviews, phone calls, or focus groups, and the resulting data can feed directly into product improvement and marketing.

In my implementation of the model, I focused on:

  • Developing question banks aligned with each POPULAR element
  • Selecting feedback tools suited to the training delivery modes
  • Designing reports that summarise results clearly for stakeholders
  • Piloting and refining the process before full rollout

The model has proven successful: the actionable data collected has informed the continuous improvement of training products and delighted customers.

Why POPULAR Works

The POPULAR model provides a richer, more nuanced view of training effectiveness than a single end-of-course survey. It captures the whole learner journey, from first contact to post-training reflection, giving providers actionable insights to improve both learning outcomes and the customer experience.

For trainers, it’s an adaptable checklist to ensure no part of the learner experience is overlooked. For clients, it offers transparent evidence of training impact. And for learners, it’s an opportunity to actively participate in shaping their personal learning journey.

Looking Ahead

As implementation continues, my goal is to build a longitudinal dataset that monitors trends, benchmarks across programs, and answers the ultimate question: Are these courses POPULAR?

About the Author: Jason Fletcher, FAITD

Jason Fletcher is an AITD Fellow and Education Manager at Engineering Education Australia, where he leads the design, development and evaluation of award-winning learning products for engineers and related professionals. Contact: https://www.linkedin.com/in/1jasonfletcher/