Trust: The Missing Capability in Data-Driven Learning Design

By Md Nazrul Islam

Learning design is becoming more data-driven by the day.

AI-powered learning platforms track engagement, predict performance, personalise pathways, and flag “at-risk” learners before a manager ever notices a problem.

Yet many of these systems quietly fail. Not because the technology is flawed, but because learners do not trust it.

They complete courses but disengage mentally. They follow pathways they do not understand. They protect themselves rather than learn honestly. This is not a technology failure. It is a trust failure.

In Australia, where workplaces emphasise fairness, transparency, and psychological safety, trust is not a “nice to have.” It is a functional requirement. When learners do not trust how data is collected, interpreted, or used, learning outcomes collapse regardless of how sophisticated the platform is.

Trust, therefore, is no longer a soft skill in learning design. It is a core capability.

Why Trust Determines Whether Data-Driven Learning Works

Research consistently shows that learners disengage when they feel monitored rather than supported. In Australian universities, healthcare training, and corporate L&D environments, this shows up in familiar ways:

  • Learners complete modules but avoid adaptive recommendations

  • Employees “game” learning analytics by clicking through content

  • Managers ignore dashboards because they distrust the data

  • AI recommendations are overridden because no one understands them

Data exists. Insight does not.  To understand why, we must ground learning design in established trust frameworks, not intuition.

The Mayer, Davis & Schoorman Trust Model (1995): Applied to L&D

The Mayer, Davis and Schoorman trust model, widely applied in Australian organisational development, identifies three conditions that determine trust:

  • Ability: confidence that the system is competent

  • Benevolence: belief that the system acts in the learner’s interest

  • Integrity: confidence that the system is ethical, fair and transparent

When applied to learning systems, the implications are clear: If learners do not believe the system understands their work (ability), if they suspect learning data will be used against them (benevolence), or if they are unsure how decisions are made (integrity), learning stops being developmental and becomes defensive compliance.

No dashboard will reveal this. But learner behaviour will. Trust emerges when learners believe the system is competent, well-intentioned, and principled. Let’s apply this precisely to learning design.

1. Ability: Does the learning system actually help me learn?

Ability refers to perceived competence. In L&D, this means learners must believe that:

  • The data is accurate

  • The recommendations are relevant

  • The system understands their role, not just their clicks

In vocational education, healthcare training, and graduate programs, learners are highly outcome-oriented. If AI-driven learning paths recommend irrelevant modules or misjudge capability, trust erodes fast.

Specific L&D actions that build ability-based trust:

  • Use role-specific data models, not generic engagement metrics

  • Validate AI recommendations against human instructional designers before rollout

  • Show learners why a module was recommended (e.g. a clinical assessment score, not just completion time)

  • Regularly audit learning analytics against actual performance outcomes (not vanity metrics)
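The "show learners why" action above can be made concrete. The sketch below is a minimal, illustrative example of an explainable recommendation record; all names (`Recommendation`, `trigger_metric`, the nursing scenario) are assumptions for illustration, not features of any particular LMS.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An illustrative recommendation that carries its own rationale."""
    module: str
    trigger_metric: str   # the signal that drove the recommendation
    trigger_value: float
    threshold: float
    learner_role: str

    def explain(self) -> str:
        """Plain-language 'why you are seeing this' message for the learner."""
        return (
            f"'{self.module}' was recommended for your role "
            f"({self.learner_role}) because your {self.trigger_metric} "
            f"was {self.trigger_value:.0f}, below the target of "
            f"{self.threshold:.0f}."
        )

rec = Recommendation(
    module="Medication Safety Refresher",
    trigger_metric="clinical assessment score",
    trigger_value=62,
    threshold=75,
    learner_role="Enrolled Nurse",
)
print(rec.explain())
```

The design point is that the rationale travels with the recommendation itself, so a tooltip or dashboard can surface it without a separate lookup.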

When learners trust the system’s competence, they follow recommendations, engage with feedback, and invest effort, leading to measurable skill transfer.

2. Benevolence: Is this learning system on my side?

Benevolence is about perceived intent.  Learners often think: “Is this data being used to help me or judge me?”

This is critical in Australian workplaces, where learning is often linked (rightly or wrongly) to performance management. A common trust failure occurs when learning data quietly feeds into HR decisions without learner awareness.

Specific L&D actions that build benevolence-based trust:

  • Clearly separate learning analytics from performance management systems

  • Provide learner-controlled visibility settings (what managers can and cannot see)

  • Frame analytics as developmental signals, not deficit markers

  • Use dashboards that prioritise growth indicators (skill progression, mastery) over compliance metrics
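Learner-controlled visibility can be as simple as a set of opt-in flags filtering what a manager's dashboard may display. The sketch below is a minimal illustration under assumed field names (`skill_progression`, `time_on_task`, etc.); a real system would persist these settings per learner.

```python
# Default visibility: growth indicators are shared, surveillance-style
# metrics are hidden unless the learner opts in. Values are assumptions.
DEFAULT_VISIBILITY = {
    "skill_progression": True,
    "module_completion": True,
    "time_on_task": False,
    "assessment_attempts": False,
}

def manager_view(analytics: dict, visibility: dict) -> dict:
    """Return only the analytics fields the learner has chosen to share."""
    return {k: v for k, v in analytics.items() if visibility.get(k, False)}

learner_analytics = {
    "skill_progression": "Level 3 -> Level 4",
    "module_completion": "8/10",
    "time_on_task": "14h 20m",
    "assessment_attempts": 3,
}
print(manager_view(learner_analytics, DEFAULT_VISIBILITY))
# time_on_task and assessment_attempts are withheld from the manager view
```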

When learners feel psychologically safe, they take learning risks - attempting harder modules, reflecting honestly, and engaging with feedback instead of avoiding it.

3. Integrity: Are the rules clear and consistently followed?

Integrity refers to adherence to principles.  In learning systems, integrity collapses when:

  • Data use policies change without explanation

  • AI decisions are opaque

  • Exceptions are made quietly for some groups

The Australian Government’s AI Ethics Principles emphasise fairness, accountability, and explainability - all directly relevant to L&D environments using learner data.

Specific L&D actions that build integrity-based trust:

  • Publish a plain-English learner data charter

  • Explain how algorithms make decisions (at a conceptual level)

  • Apply policies consistently across cohorts

  • Allow learners to challenge or query automated outcomes

Integrity builds predictability. Predictability builds confidence. Confident learners engage more deeply and persist longer.

Transparency Is Not a Trust Pillar, It Is a Trust Accelerator

Transparency is often mistaken for a trust component in its own right. It is not. Transparency amplifies Ability, Benevolence, and Integrity.

In learning design, transparency means:

  • Learners understand what data is collected

  • They know how it is interpreted

  • They know how it will (and will not) be used

Educational institutions that clearly explain learning analytics usage report higher student engagement with dashboards than those that simply deploy tools without explanation.

Actionable transparency practices for L&D professionals:

  • Pre-course data briefings

  • “How this recommendation was generated” tooltips

  • Plain-language FAQs embedded inside the LMS

Transparency converts suspicion into participation.

Psychological Safety: The Hidden Multiplier in Learning Analytics

Amy Edmondson’s Psychological Safety framework applies directly to learning environments, especially data-rich ones. When learners fear negative consequences, data accuracy collapses. They rush modules, hide confusion, and eventually disengage.

L&D actions that build psychological safety:

  • Explicitly state that learning data will not be used punitively

  • Reward learning effort, not just outcomes

  • Use anonymised cohort-level analytics for benchmarking
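Anonymised cohort-level benchmarking usually pairs aggregation with a minimum cohort size, so no individual can be singled out. The sketch below is a minimal illustration of that idea; the suppression threshold of 5 and the cohort names are assumptions, not a recommendation from any standard.

```python
from statistics import mean

MIN_COHORT = 5  # assumed suppression threshold (k-anonymity-style)

def cohort_benchmarks(scores_by_cohort: dict) -> dict:
    """Average each cohort's scores, dropping cohorts too small to anonymise."""
    return {
        cohort: round(mean(scores), 1)
        for cohort, scores in scores_by_cohort.items()
        if len(scores) >= MIN_COHORT
    }

scores = {
    "Graduate Nurses 2025": [68, 74, 81, 77, 70, 85],
    "Pharmacy Interns":     [90, 88],  # only 2 learners: suppressed
}
print(cohort_benchmarks(scores))
# prints {'Graduate Nurses 2025': 75.8}
```

Suppressing small cohorts trades a little reporting coverage for a clear promise to learners: benchmarks can never be reverse-engineered back to a person.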

Safe learners produce honest data. Honest data produces better insights. Better insights improve learning design.

Measuring Trust Impact: Kirkpatrick (reframed)

Trust should be evaluated through Kirkpatrick’s four levels, but with a data-aware lens.

  • Level 1 (Reaction): Do learners feel the system is fair and useful?

  • Level 2 (Learning): Are personalised pathways improving mastery?

  • Level 3 (Behaviour): Are skills applied on the job?

  • Level 4 (Results): Are performance outcomes improving sustainably?

If trust is missing, learning rarely progresses beyond Level 1.

The Reality Check for L&D Professionals

Data does not create trust. AI does not create trust. Dashboards do not create trust.

Design decisions do!

In Australian learning environments where equity, accountability, and human-centred design matter, trust must be engineered deliberately.

Not as an afterthought. Not as a communication exercise. But as a core learning design capability. Because without trust, data-driven learning does not fail loudly. It fails quietly.  And by the time outcomes are measured, learners are already gone!

Interested in AI and eLearning?

AITD offers several courses on both AI and eLearning:

AI Essentials for L&D Professionals: Do you want to transform your L&D workflow and enhance learner experiences using Generative AI? In this blended learning course, you’ll learn how to use Generative (Gen) AI for a range of key L&D functions, including skills gap analysis, content creation, personalised learning and feedback. Register now.

eLearning: Foundations: This is Part I of an engaging, social learning suite of courses that provides you with access to learning experiences, activities and a comprehensive knowledge base. Register now. You may also be interested in eLearning: Planning and Design; and eLearning: Production and Delivery.


About the Author: Md Nazrul Islam 

Md Nazrul Islam is a Fellow Member of the Australian Institute of Training and Development (AITD) with over a decade of experience in learning and development across VET and higher education sectors. He leads employability and career-focused programs, designing learner-cantered solutions that align with AQF and ASQA standards. As founder of OZGRADS, he empowers university students to translate learning into tangible career outcomes. For further information, visit: www.ozgrads.com.au