Dedicated AI coaching apps, as well as coaching through general AI chats, began appearing with the advancement of AI applications. Both private individuals and organizations at scale make use of them.

As I am a coach by training, I am, of course, biased, but I am also educated on the risks of AI coaching. I urge you to be aware of the risks and disadvantages of AI coaching before you try it.

To summarize: at best, AI coaching favors efficiency over effectiveness. It offers comfort over impact, for both individuals and organizations.

AI Coaching Benefits

AI coaching has some clear benefits:

  • Accessibility – it’s always there, ready to talk, paid or free
  • No need to search for a coach or schedule a chemistry call
  • Whether paid or free, it’s cheaper than a human coach
  • It is trained on huge amounts of data that it can apply instantly
  • For people who prefer less direct human connection, it may encourage engaging with coaching at all
  • It quickly detects patterns and emotions expressed in words
  • It may be less biased, influenced, or triggered by the coachee’s appearance, body language, style, or words
  • Its communication style can, in some cases, be preconfigured to match the coachee’s preferences
AI Coaching Risks and Disadvantages

Dark patterns
Remember all those semi-legal tactics websites use to get you to do things you don’t want to? Like highlighting the register button instead of the skip option, or making subscription cancellation almost impossible?
They have already made their way into AI, sometimes very subtly. For example, asking personal questions to collect data, or offering more conversational pathways to keep you engaged, without real value to the user.

Data use
A human coach clearly communicates data processing, storage, and AI use in the contract, in plain language. The privacy and data-use policies of AI providers may be vaguer and harder to understand.

Anonymity and minimization
Coaches use their brains first and AI second, reflecting on their work and checking anonymized, minimized notes for patterns or signals they may have missed. Feeding AI raw data containing PII (names, companies, roles, locations) is a risk, as AI providers process and use it in ways that are not always clear. The risk applies to both private users and organizations. The reports AI coaching generates might also breach anonymity.
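To illustrate the minimization practice described above, here is a minimal sketch of replacing known identifiers with neutral placeholders before notes ever reach an AI tool. The function name, the placeholder labels, and the mapping are all illustrative assumptions; real de-identification needs far more than string substitution.

```python
# Minimal sketch: strip known PII from session notes before sharing with AI.
# The mapping of identifiers to placeholders is maintained by the coach;
# this does not catch PII that is absent from the mapping.

def minimize_notes(text: str, identifiers: dict[str, str]) -> str:
    """Replace each known PII string with a neutral placeholder."""
    for pii, placeholder in identifiers.items():
        text = text.replace(pii, placeholder)
    return text

notes = "Dana, VP of Sales at Acme in Berlin, feels stuck."
mapping = {
    "Dana": "[CLIENT]",
    "VP of Sales": "[ROLE]",
    "Acme": "[COMPANY]",
    "Berlin": "[LOCATION]",
}
print(minimize_notes(notes, mapping))
# [CLIENT], [ROLE] at [COMPANY] in [LOCATION], feels stuck.
```

The point is the direction of responsibility: anonymization happens on the coach’s side, before any data leaves their hands, rather than trusting the AI provider to handle raw PII responsibly.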

Ethics
A coach uses AI while strictly adhering to the ICF or EMCC ethical guidelines, which, for example, forbid using AI to diagnose mental states or otherwise crossing coaching boundaries into therapy or advice-giving. AI is general-purpose and, unless explicitly configured, is not aware of these boundaries. A human coach is obligated to raise such issues and refer the coachee to other experts.

Confidentiality
AI in organizations, e.g., automatic note takers or AI coaching apps, may reduce users’ trust out of fear that confidentiality and anonymity will be breached, rendering the process superficial and ineffective. A core part of the human coach’s work is to create a safe and trusting space.

Beyond words – somatic intelligence
At the time of writing, AI cannot comprehend body language, breathing changes, intonation, silence, tears, shaking, or somatic shifts. It therefore acts on partial information. The 55/38/7 rule suggests that words carry less than 10% of communication. While the specific percentages of Mehrabian’s principle are often debated, the core point stands: AI misses the vast majority of the non-verbal data that a systemic coach uses to sense the field.

Consider the following trivial example, where emphasizing a different word gives a different meaning to the sentence:

  • *I* didn’t say that
  • I *didn’t* say that
  • I didn’t *say* that
  • I didn’t say *that*

Confirmation bias
AI is pleasing by default. It will encourage your ideas and thoughts unless specifically set up not to. Countering this usually requires deliberate, explicit questions on your side, like “where might I be wrong?”, “what am I missing here?” or “present counter-arguments.”

A human coach remains neutral and thoughtfully challenges the coachee’s words, not because they disagree, but to go deeper, find axioms, needs or values. The coach’s worldview is never imposed on the coachee.

Coaching is often about disrupting the client’s current system to allow for a new perspective to emerge. AI, by being agreeable, reinforces the coachee’s existing, and perhaps limiting, internal system rather than challenging it.

Western-centric
AI is trained on vast amounts of literature and text, mostly online. Since the majority is in English, and specifically US-centric, it carries cultural nuances, beliefs, metaphors, and distinctions that might feel foreign to people from other backgrounds. One example is the assumption that eye contact is welcome and encouraged, while in several Aboriginal cultures, avoiding direct or prolonged eye contact is a sign of respect.
AI also lacks contextual intelligence. It doesn’t know the coachee’s organizational culture, their family loyalty patterns, or the systemic “ghosts” in the room. Even the coachee’s physical room, or the place they choose to call from when the session is online, may carry relevant information.

AI is generic by nature. It serves everyone from the age of 4 (if not earlier) to over 99, in all cultures and professions. Therefore, AI coaching might miss contextual aspects that a human would be aware of or would know to ask.

Time and space
A coaching session is a planned activity on the calendar. It reserves dedicated time to slow down and deliberately think about things we don’t usually make room for in the day-to-day.

The coach is another human being with a nervous system, which allows them to hold space when emotions arise, meaning, really be with the coachee and allow them to express any emotion or sensation in a safe space. The coach can help regulate and perhaps explore the feelings.

AI is unlikely to have defined time, and it certainly cannot hold space, regulate, or truly see the coachee’s experience.

Holding accountable
When a coachee makes a commitment (which always comes from them, never from the coach), they are likely to keep it, as the coach is an accountability partner. The same goes for answering the coach’s questions: being invited to think and answer, sometimes through uncomfortable silence. AI is easy to dismiss at the slightest discomfort.

Coaching is not consulting
As I explained here, consulting means giving answers, ideas, suggestions, or advice. Coaching is reflective by nature: it assumes the coachee is the expert on their own life, and the coach will not cross this boundary by providing any of these. The coach’s responsibility is mainly presence, structure, tools, and remaining not-knowing and neutral. The coachee is responsible for the agenda, decisions, actions, and commitments.
The client may not be aware of this distinction and may use AI beyond the reflective process. AI may not keep the boundary and might provide content and advice. It might even take the liberty of diagnosing physical and mental conditions that are outside the scope of coaching.

Agency – outsourcing thinking
We’ve been outsourcing much of our thinking in recent years: algorithms curate our feeds, suggest potential romantic partners, and navigate our routes. Now comes the danger of outsourcing thinking in general to AI. It tends to sound authoritative and confident, which can create dependency and erode our abilities. AI will not take responsibility for the outcome, though.

The habit of outsourcing thinking robs the coachee of their agency. Coaches are deliberately trained on returning the agency to the client, for example, with simple reflective questions such as “what do you think you need to do?”

On top of that, there is a high chance of endless optimization loops, or analysis paralysis, trying to find the perfect path forward over and over.

Additionally, people may seek the AI’s validation prior to taking action, further deepening the dependency. A coach will not judge or validate the client’s thoughts, goals, decisions, or actions.

Progress tracking
A human coach can track progress toward goals, and their impact, from several sources that AI may not have access to, such as observed behaviors, feedback from stakeholders or the environment, and available metrics. The coach also strictly holds confidentiality throughout. For AI, access to these sources might itself violate confidentiality or anonymity.

Human connection
We are social animals. Many of us need real human connection, and specifically in the coaching context, a person whose full attention is devoted to the coachee and who is fully listening to them on all four levels. For people who need this connection (not everyone does), AI coaching is no replacement.

Supervision
A human coach turns to supervision, that is, consulting an expert coach, in cases of dilemmas about tools, meaning-making, ethics, or emotional resonance. This is an ethical obligation the coach is accountable for. AI has access to the world’s knowledge, but some things it simply cannot do; it might cross coaching boundaries and reach wrong conclusions or slip into consulting, even if it were guaranteed never to hallucinate. AI is accountable to its shareholders. When you come across AI coaching questions and dilemmas, who do you turn to?

You have seen how AI coaching can be efficient, comfortable, and focused on the “doing”, compared to a human coach who is effective, challenging, and works with both the “doing” and the “being”.

Ultimately, coaching is not a transfer of information, but a transformational encounter between two nervous systems. While AI coaching can simulate the language of growth, it cannot provide the witnessing required for true human evolution.

As we navigate this digital revolution, we must ask whether we are seeking a mirror that reflects what we want to see, or a partner who dares to see who we might become. AI coaching can mimic the script of coaching, but only a human can hold the silence where transformation truly lives.

Checklist for Organizations

Organizations often prioritize accessibility (coaching for all), which is a noble systemic goal. However, if the coaching is shallow, you risk creating a culture of surface-level compliance rather than deep-level transformation.

1. Ethics

[ ] Data sovereignty
Does the provider clearly state who owns the “reflections”? Are they used to train global models, or are they siloed for your organization?

[ ] Anonymity & confidentiality
Can the organization guarantee that “aggregated themes” provided by the AI won’t inadvertently identify individuals in smaller teams?
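One concrete mechanism for this guarantee is small-group suppression: withholding aggregated themes unless enough people are behind them. The threshold of 5, the function name, and the data shape below are illustrative assumptions, not a vendor’s actual implementation.

```python
# Sketch of small-team suppression for aggregated coaching themes.
# Assumes a k-anonymity-style minimum group size of 5 (illustrative).

K_MIN = 5

def safe_themes(theme_counts: dict[str, int], team_size: int) -> dict[str, int]:
    """Release a theme only when the team is large enough and the theme
    was raised by enough people that it cannot point at individuals."""
    if team_size < K_MIN:
        return {}  # suppress everything for very small teams
    return {theme: n for theme, n in theme_counts.items() if n >= K_MIN}

print(safe_themes({"workload": 7, "manager conflict": 2}, team_size=12))
# {'workload': 7}
```

A provider that cannot describe a rule of this kind for its reports is effectively asking the organization to trust that re-identification won’t happen.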

[ ] Clinical triaging
Does the AI have a “hard stop” protocol for mental health crises or trauma that triggers a referral to a human professional?

2. Coaching boundaries

[ ] Directive vs. reflective
Is the AI set up to ask powerful questions, or is it defaulting to consulting or mentoring mode by offering tips?

[ ] Boundary awareness
Can the AI distinguish between a performance goal and a therapeutic need?

[ ] Validation trap
Does the AI coaching have a “challenge mode” to prevent the reinforcement of the employee’s existing biases?

3. Contextual alignment

[ ] Cross-cultural nuance
Is the model trained on diverse global datasets, or is it Western/US-centric? Does it fit us?

[ ] Systemic literacy
Can the AI recognize organizational dynamics, e.g., power structures, silos, or triangulation (discrepancy between sponsor and coachees), or does it treat every problem as purely individual?

4. Impact

[ ] Metrics
Are you measuring success by engagement and cost (efficiency) or by behavioral change (effectiveness) and ROI?

[ ] Dependency risk
Is the AI coaching building the employee’s autonomy, or is it creating a “digital crutch” where the employee can no longer solve problems without a prompt?

5. Integration

[ ] Human-in-the-loop
Is there a path for an employee to transition from AI coaching to a human coach for high-stakes professional shifts?

[ ] Supervision & accountability
Who is the “ethical lead” for the AI implementation? Who is accountable if the AI provides harmful or biased career advice?
