
Why is no one getting excited about supporting HI in a world of AI?


AI isn’t just a technology shift; it’s a meaning shift. Roles, routines, and status markers move under people’s feet. In that kind of turbulence, technical training alone rarely lands. What does land is a psychologically informed process that helps people metabolise uncertainty, learn visibly, and act with agency. That, in essence, is where coaching psychology earns its keep.


In practice, coaching psychology blends evidence on goal-focused self-regulation with relational conditions that support learning and behaviour change (think psychological safety, need-supportive climates, and motivational interviewing micro-skills). The aim isn’t to “sell AI,” but to help people appraise it, align it with valued goals, and transition into either redesigned roles or dignified exits, without losing their sense of competence, autonomy, or relatedness. Passmore and Lai (2019) frame this as coaching psychology’s distinctive contribution: evidence-based methods, not just good intentions (SingJupiter Post, 2025).


Two populations, one ethical stance


Organisations typically split into “stayers” who adapt in their roles and “leavers” who transition out. Both deserve the same ethical container: candour, informed choice, and support that respects human needs. The International Labour Organisation’s latest analysis suggests generative AI will transform tasks across many occupations (rather than delete whole jobs), with uneven gender and sectoral effects, especially in administrative and clerical work disproportionately done by women (Mint, 2025). This isn’t abstract: it shapes who needs what kind of help, and when.


At the same time, note the warning from Geoffrey Hinton (often called the godfather of AI). In his interview on The Diary of a CEO, he said that for “mundane intellectual labour” (his words, not mine), in roles such as paralegals, call-centre agents, and routine office work, AI is “just going to replace everybody.” He went on to argue that one person working with an AI assistant may do the work of ten people previously. He also directly challenged the widespread belief that new jobs will simply replace the lost ones: “You’d have to be very skilled to have a job that [AI] just couldn’t do.”


The implication: the threat isn’t just to “routine low-skilled” work. It extends to “mundane intellectual” work, a massive slice of many white-collar job families. That raises the urgency of adaptation, and it makes coaching psychology not optional but ethically essential.


For those staying: build capability, not compliance


Three strands matter.


  1. Psychological safety as the social “API.” If people fear ridicule for naive questions, you get silent error logs instead of learning signals. Edmondson’s work shows that teams learn faster when members believe they can speak up without interpersonal risk. Leaders can model uncertainty, invite dissent, and run lightweight “learning sprints” (e.g., weekly demos of AI use-cases with explicit reflection on what failed). Treat this like infrastructure. Without it, your AI programme becomes a performance of adoption rather than a reality.

  2. Motivation that travels. Self-Determination Theory (SDT) gives a simple test for your AI rollout: does it support autonomy (choice, rationale, options), competence (scaffolded practice, feedback), and relatedness (peer-help, shared language)? When those needs are met, intrinsic motivation and persistence tend to improve, which is critical when tools evolve monthly. Design your enablement like this: optional pathways for power-users (autonomy), progressive challenges with visible skill ladders (competence), and “buddying” or communities of practice (relatedness).

  3. Behaviour change mechanics. Two frameworks help translate intent into action.

    1. Goal-focused coaching operationalises self-regulation: define valued outcomes, plan, act, monitor, recalibrate. It’s mundane, but it works, especially when the coach keeps attention on concrete, near-term behaviours (e.g., “use an LLM to draft the first pass of the weekly client summary for three weeks; compare quality and time saved”). 

    2. ACT’s psychological flexibility equips people to notice anxiety without being governed by it, to return to their values, and to take the next workable step. Pair a values cue (“Why does this matter to your clients?”) with a tiny, time-boxed experiment. Over time, the “I don’t do AI” story gives way to “I test and decide.”


Add a technology lens: the classic Technology Acceptance Model says perceived usefulness and ease of use drive adoption. Coaching can directly surface and re-work those appraisals through micro-experiments (“let’s time your pre-AI and post-AI workflow this week and look at the evidence together”).


Two very practical moves round this out: job crafting (small, bottom-up tweaks to tasks, relationships, and meaning) lets people reshape roles around comparative advantage with AI, e.g., offloading routine drafting to a model while doubling down on stakeholder sense-making. And motivational interviewing (MI) helps when ambivalence is sticky: we lean into it, elicit change talk, and respect the person’s right not to change. Both increase movement without pressure.


For those exiting: transitions with dignity and momentum


Exits are where values become visible. Two design principles:


  1. Name the ending, navigate the neutral zone, author the beginning. Bridges’ transition model sounds soft until you try to skip it. People don’t adopt a “new beginning” just because a comms plan says so. Coaching can help them close the old story (losses acknowledged), stabilise in the middle (routines, peer groups, scaffolded support), and author credible next steps. In practice, a three-session arc maps well: Story of Work (past), Pause and Pattern (present), Test and Tell (future).

  2. Preserve agency and evidence. The same SDT lens applies: offer real choices in outplacement (autonomy), skill-audits plus targeted AI-literate training (competence), and alumni networks with signal, not spam (relatedness). We also bring MI to the fore: explore discrepancy (“What future would feel more like you?”), amplify confidence (“Where have you already adapted faster than you expected?”), and agree on the next small experiment (portfolio site live; first AI-aided case-study drafted).


A word on fairness: macro bodies like the ILO and IMF warn about unequal distributional effects from AI. Coaching cannot fix policy, but it can prevent organisational harms: ambiguous criteria for who stays, opaque redeployment processes, and making people “train their replacement” without tangible reciprocity. Put differently, coaching is necessary but not sufficient; it belongs alongside transparent selection rules, retraining budgets, and clear income bridges. 


A simple architecture you can run this quarter


  • Phase 1 – Sense-making (Weeks 1-2): Executive briefings that model uncertainty; Team workshops to surface hopes/concerns and define success metrics. Psychological safety priming starts here (leaders ask for red-team critiques of AI pilots).

  • Phase 2 – Skills & Experiments (Weeks 3-8): Role-specific micro-experiments (30-60 mins/week) with coaching check-ins. Each person defines one AI-enabled behaviour linked to a tangible deliverable, then runs a test-measure-reflect loop. Layer in goal-focused coaching and ACT skills for stuck points.

  • Phase 3 – Role Crafting & Decisions (Weeks 6-10): Job-crafting clinics turn pilot wins into redesigned task-portfolios; those with shrinking role-fit get immediate access to a coached transition track (Bridges arc + MI + practical placement support).

  • Phase 4 – Institutionalise (Weeks 10-12): Communities of practice; publish “AI ways of working” playbooks written by practitioners; reward learning behaviours (not only outcomes). Keep the safety signals on.


What this looks like on the ground


A mid-level operations team starts with a shared problem: weekly client updates take six hours. After a safety check-in, each member commits to a small trial: use a vetted model to draft a first pass, then human-edit. Two weeks later, the team presents timing data and quality reviews. Perceived usefulness increases as it becomes visible; two members tailor their roles to specialise in prompt libraries and client-specific tone tuning; one selects the transition track after finding a better fit in vendor-side enablement. That mix (adoption, crafting, dignified exit) is a success because it’s self-determined, not enforced.


Closing thought


AI is accelerating the half-life of certainty. Coaching psychology offers a humane, evidence-based way to keep people moving without burning trust. It treats adoption as a learning journey, with metrics and exits as developmental moments rather than reputational risks. In other words: fewer myths, more experiments; fewer slogans, more conversations that change what people do tomorrow morning.



Correspondence Address: Mind Works, 124 City Road, London EC1V 2NX

Copyright © 2025 Mind Works Psychology - All Rights Reserved