
    Why Leaders Know What to Say But Can't Say It: The Rehearsal Gap

    Bernhard Kerres · March 18, 2026 · 11 min read

    Every evening before I left the office, my assistant would prepare the Daily Folder.

    It was a paper folder with a tab for every meeting. Under each tab: the agenda, the background papers, the financials, the names and bios of the people I'd be sitting across from the next day. I'd take it home, read through every page, scribble notes in the margins, and arrive the next morning feeling thoroughly prepared.

    I did this for years as a C-level executive. And I was prepared for the content. I knew the numbers. I knew the arguments. I had my talking points ready.

    What I wasn't prepared for was the moment when someone pushed back in a way that wasn't rational. Wasn't logical. Didn't follow the script I had in my head.

    And in those moments, I froze. More than once. In conversations that mattered.

    I knew exactly what to say. I just couldn't say it when it counted.

    That experience - repeated across years of executive roles, consulting, and coaching - is why I built RolePlays.ai, an AI-powered platform where leaders practice their most difficult conversations before they have them for real. What I didn't have back then - a place to rehearse the pushback, the silence, the irrational response - is what we now offer to organizations like Henkel, PwC, and Strategy&.

    But this isn't just my intuition. The research now backs it up.


    The illusion of preparedness

    A 2024 study from Stanford University showed that this isn't a personal weakness - it's a structural flaw in how we train people for difficult conversations.

    Researchers Omar Shaikh, Michele Gelfand, and their colleagues built an AI system called "Rehearsal" and tested it against traditional training in a controlled experiment with 40 participants. Both groups received identical training material on conflict resolution - a video lecture and a detailed list of strategies based on the Interests-Rights-Power (IRP) framework from negotiation theory. One group stopped there. The other group also practiced with an AI simulation before facing an actual conflict with a real person.

    Here's the finding that should make every L&D leader pause: both groups scored equally on a knowledge quiz. They knew the same strategies. They could recognize and recall them at the same level. On paper, both groups were equally prepared.

    But when they walked into the room and faced a real conflict - without any training material or assistance - the results were dramatically different. The group that had practiced with the AI simulation reduced their use of competitive strategies - threats, ultimatums, and appeals to authority designed to force the other side into compliance - by 67%. They doubled their use of cooperative strategies. The control group, armed with the same knowledge, couldn't translate what they'd learned into action.

    Jacob Barnes, who trains leaders in pharma and healthcare at Simple Revolution (https://simple-revolution.com), sees this pattern constantly: "I often train leaders in how to transition from being a specialist to being a leader. A lot of the problem is that they continue to prepare as a specialist. Preparing to know all the details etc. But as a leader, you have to set a direction. What you say has to show that you have a vision. That you know where you want things to go. Not that you know the details. You have people who know the details. You can't lead with details."

    The preparation trap runs even deeper than not practicing. Leaders are often preparing the wrong things entirely - mastering the data instead of preparing for the conversation. And then they walk in armed with spreadsheets when what they needed was the ability to set direction, hold tension, and respond to the room.

    That gap between knowing what's right to do and being able to do it under pressure is the rehearsal gap.


    Why reading about it doesn't work

    The Stanford study didn't just measure outcomes. It explained why passive learning fails.

    The researchers grounded their work in a well-established observation from conflict resolution research: when people face pushback, they tend to spiral. If someone uses a power strategy against you - a threat, an ultimatum - your instinct is to respond with power right back. If they assert rights - "company policy says..." - you counter with your own rights claim. This creates what researchers call a conflict spiral: each escalation makes it harder to return to cooperation.

    Knowing this pattern intellectually doesn't prevent you from falling into it. Just as knowing a stove is hot doesn't stop the reflex to jerk your hand away, knowing about conflict spirals doesn't stop you from escalating when someone puts pressure on you in a live conversation.

    The simulation group learned through experience. They tried a power response and saw the conversation escalate. They tried rights and hit a wall. They discovered - by doing, not reading - that an interests-based approach (discussing what both parties actually need) was the path to resolution. By the time they faced the real conflict, they had already made these mistakes in a safe environment. The patterns were in their body, not just their brain.

    The study's authors put it precisely: passive training builds skills orthogonal to actual practice. In plain language: reading and watching will teach you one thing. Doing it will teach you something entirely different. And the "doing" is what matters when the pressure is on.


    Why a standard chatbot won't cut it

    Here's where it gets interesting for anyone thinking "I'll just practice with ChatGPT."

    The Stanford researchers found that standard large language models are fundamentally unsuited for practicing difficult conversations. The problem? They're too agreeable. Instruction-following LLMs are trained to be helpful and cooperative - which means they cave too quickly when a user pushes back. They don't maintain realistic conflict. They don't push back the way a real person would. They don't create the pressure that makes practice valuable.

    This is a critical finding. If the AI simulation folds after two turns, you're not practicing anything. You're just having a pleasant chat with a machine that tells you what you want to hear. That's worse than no practice at all - it builds false confidence.

    To solve this, the Stanford team developed a sophisticated multi-step prompting approach they called "IRP Prompting." Instead of letting the LLM generate free-form responses, they first classified each message's conflict strategy, then planned the simulation's next move based on conflict resolution theory, then generated a response conditioned on that strategy. They scored each possible response for how it would affect the trajectory of the conflict. The simulation could only reach agreement after multiple interest-based strategies were used - no shortcuts, no premature capitulation.
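    The classify-plan-respond loop can be sketched in miniature. This is a toy illustration of the structure described above, not the Stanford implementation: the keyword classifier and the fixed agreement threshold are stand-ins for the paper's LLM-based classification and scoring, and all names here are illustrative.

```python
# Toy sketch of an IRP-style simulation loop: classify the user's
# conflict strategy, plan the simulation's counter-move from
# conflict-spiral theory, and gate agreement behind repeated
# interest-based exchanges so the simulation can't fold early.

def classify(message: str) -> str:
    """Crude keyword stand-in for an LLM strategy classifier."""
    text = message.lower()
    if any(w in text for w in ("or else", "ultimatum", "escalate", "my boss")):
        return "power"
    if any(w in text for w in ("policy", "contract", "entitled", "the rules")):
        return "rights"
    return "interests"

def plan_next_move(user_strategy: str) -> str:
    """Mirror power/rights moves to sustain realistic tension (the
    spiral); reciprocate only genuine interests-based moves."""
    if user_strategy in ("power", "rights"):
        return user_strategy
    return "interests"

def respond(user_message: str, interest_turns: int) -> tuple[str, int, bool]:
    """One simulation turn: returns (planned strategy, updated count,
    whether agreement is now reachable)."""
    move = plan_next_move(classify(user_message))
    if move == "interests":
        interest_turns += 1
    # Agreement only after several interest-based exchanges - no
    # premature capitulation, the failure mode of plain chatbots.
    agreed = interest_turns >= 3
    return move, interest_turns, agreed
```

In this sketch, a threat is met with a threat and the conversation stays stuck; only repeated questions about underlying needs move the counter toward agreement. The real system replaces each of these rule-based steps with a model call, but the gating structure is what keeps the simulation in the "Goldilocks zone."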

    The result was what the researchers called a "Goldilocks zone" - a simulation that was neither too stubborn (making practice futile) nor too agreeable (making practice meaningless). It felt like practicing with a person who had genuine concerns and wouldn't simply be charmed into agreement.

    This is exactly why we built RolePlays.ai the way we did. Our scenarios use proven frameworks - McKinsey, ICF, Kegan's Immunity to Change - as the theoretical backbone that governs how our personas behave. Our personas don't just follow a script. They have built-in conversational turning points, emotional reactions, and strategic shifts. They push back when pushed. They deflect when cornered. They change direction mid-conversation, just as a real stakeholder would.

    Getting this right requires more than a clever prompt. It requires deep scenario design, framework-grounded persona logic, and a system sophisticated enough to maintain realistic tension across an entire conversation. That's the difference between typing into ChatGPT and practicing on a platform built for this purpose (More: https://roleplays.ai/ai-roleplay-executive-education).


    The Conflict Reality Check

    The Stanford study surfaced one more finding that I find deeply honest - and deeply relevant for leadership development.

    After training, participants in the simulation group actually lowered their self-assessed confidence in handling conflict. They came in thinking they were pretty good at it. After practicing, they revised that assessment downward.

    This isn't a failure of the training. It's a feature. The researchers called it the "Conflict Reality Check."

    Passive learning creates an illusion of competence. You watch a video about conflict resolution, you read the strategies, and you think: "That makes sense. I could do that." You feel prepared. You're not.

    Practice shatters that illusion. When you actually try to apply those strategies in a realistic simulation - when the other person doesn't follow your mental script, when they escalate instead of cooperating, when your carefully planned opening gets derailed in the first thirty seconds - you discover how hard this actually is.

    That's uncomfortable. But it's the beginning of real competence. Because now you know what you need to work on. Now the next practice session has purpose. And the one after that. And the one the night before the actual conversation.


    Stay present when the conversation gets real

    This January, I put this to the test in a high-stakes meeting with Jacob Barnes from Simple Revolution. Jacob and I had worked together for years - he'd engaged me to run communication and presentation trainings for his pharma and healthcare clients. I always loved working with Simple Revolution because of their genuine friendliness and professionalism.

    But this meeting was different. I wasn't there to deliver another training. I wanted to shift our relationship from me being one of many trainers to a strategic partnership - bringing RolePlays.ai to Simple Revolution's clients together. That's not a small ask. It meant suggesting a fundamental shift in Jacob's successful business strategy: from offering purely in-person trainings to including an AI-powered tool, in a highly regulated pharma environment where every new technology faces scrutiny.

    This was exactly the kind of conversation where, in my Daily Folder days, I would have been thoroughly prepared on the content - and completely unprepared for the pushback. The logical arguments for partnership were solid. But what if Jacob saw this as a threat to his model? What if the regulatory concerns felt insurmountable? What if the conversation turned emotional?

    So I practiced. I rehearsed the conversation, including the difficult parts. I prepared not just for what I wanted to say, but for what Jacob might say - and how I'd respond when the conversation didn't follow my script.

    After two wonderful days in Vienna, we agreed on a partnership. Jacob started bringing RolePlays.ai to his clients.

    Would we have reached the same outcome without the practice? Maybe. But I'm certain the conversation would have been harder, longer, and less productive. Practice didn't give me better arguments. It gave me the ability to stay present when the conversation got real.


    Closing the rehearsal gap

    The rehearsal gap is the most expensive gap in leadership development. Not because organizations don't invest in training. They do. But because the training stops exactly where the learning should begin.

    We built RolePlays.ai to close that gap. Not as a replacement for workshops or coaching - but as the practice layer that has always been missing. Custom scenarios built on proven frameworks, with personas who push back, deflect, and change direction just like real people do. Available via chat, voice, or video. Not as a one-time exercise, but as a continuous practice tool that's there when leaders actually need it: the night before the board meeting, the morning of the difficult feedback conversation, the week they're preparing for a strategic negotiation.

    Because the question was never whether leaders know what to say.

    The question is whether they can say it when it matters.

    If the rehearsal gap is costing your leaders, let's talk.


    References

    Shaikh, O., Chai, V., Gelfand, M. J., Yang, D., & Bernstein, M. S. (2024). Rehearsal: Simulating conflict to teach conflict resolution. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). ACM. https://doi.org/10.1145/3613904.3642159

    Brett, J. M., Shapiro, D. L., & Lytle, A. L. (1998). Breaking the bonds of reciprocity in negotiations. Academy of Management Journal, 41(4), 410–424.

    Ury, W. L., Brett, J. M., & Goldberg, S. B. (1988). Getting Disputes Resolved: Designing Systems to Cut the Costs of Conflict. Jossey-Bass.