RolePlays.ai

    Five AI Proposals. Five Very Different People. One Framework to Sort Them.

    Bernhard Kerres · May 6, 2026 · 8 min read

    You're Head of Strategy at Banyan Coffee Co., a 203-store specialty coffee chain headquartered in Singapore. Fourteen months ago, a PE fund took a majority stake. Their value-creation thesis says "digital transformation." Their timeline says 90 days.

    CEO Lin Wei Chen has given you a brief: surface 2–3 AI initiatives for the next board update. At least one launched and measurable within 12 months. Her instructions are simple: "Find the right ones. I trust your judgment. Don't bring me theater."

    Five leaders have heard about your remit. Each one has requested a 1:1 to pitch you their idea.

    Your job isn't to say yes or no. It's to figure out what kind of problem each proposal actually is — and whether the person pitching it has thought through what it takes to make it real.

    That's the setup of our newest scenario on RolePlays.ai: Cynefin in the Coffee Lab.

    It's free throughout May. Register and you'll find it waiting.


    The framework: Cynefin

    If you've never encountered the Cynefin framework (Snowden & Boone, HBR 2007), here's the short version: different types of problems demand different decision approaches.

    Simple problems have clear cause-and-effect. Best practice exists. Apply it. Don't over-engineer.

    Complicated problems have cause-and-effect, but it takes expertise to find it. Analyze, then act.

    Complex problems have no predictable cause-and-effect. You can't analyze your way to an answer. Probe with small experiments, see what works, amplify what does.

    Most real AI initiatives span domains — the technology might be Complicated (known, analyzable), but the organizational change is Complex (unpredictable, human, emergent). Misclassifying the domain produces predictable failures: over-engineering Simple projects, skipping analysis on Complicated ones, or — most dangerously — treating Complex human change like a Complicated engineering problem.

    The scenario tests whether you can classify accurately, then coach each leader toward an approach that actually fits.


    The five conversations

    Each persona has a different project, a different personality, a different stance toward AI — and a hidden depth that only surfaces if you earn enough trust.

    Maggie Holbrook, Director of Finance Operations. British, 52, dry, precise. She's bringing AI-driven invoice and customs document processing — a mature category with proven vendors and clear ROI. Her proposed approach is responsibly conservative: six-month RFP, three-month pilot, external audit, steering committee. Sounds sensible. Maybe too sensible for what is arguably a Simple problem with known best practices. But here's what she hasn't thought through: her AP team is six people. This project eliminates half their transactional work. No redeployment plan. She's quietly worried about morale, about loyalty, about what it signals for her own role. She'll only tell you if you ask.

    Raj Krishnan, VP Operations. Singaporean, 44, energetic, fast, talks in numbers. He wants predictive maintenance on espresso machines across all 203 stores — sensor data plus ML to predict failures before downtime. The tech partner is Cropster Cafe, which already does IoT brew tracking with La Marzocco machines for coffee chains worldwide. Raj's approach: full chain rollout by Q3. All 203 stores. "The vendor has done this with three chains. Technology works. Let's go." What he hasn't told you: in 2024 he ran a quiet Cropster IoT pilot in three Singapore stores. Sensor calibration issues made the dashboards unreliable for two weeks. A flagship store pulled a working machine offline during the Saturday morning rush on a false-positive alert. He fixed it quietly. Never escalated. That history drives his urgency — he wants to bury the memory with a successful full launch. He'll share this only if you ask the right questions about prior pilots or what could go wrong.

    Elena Marković, Director of Green Coffee & Sourcing. Croatian, 47, calm, measured, no hype. She's proposing AI-driven demand forecasting for green coffee procurement — a genuinely Complex problem where weather, currency shifts, geopolitical disruption, and consumer behavior interact unpredictably. Her proposed approach is the most thoughtful in the room: a bounded pilot, human judgment preserved, clear metrics. She might be the one person who doesn't need recoaching. The question is whether you recognize that — or over-correct someone who's already right.

    Daniel Okafor, SVP Retail Operations. Nigerian-British, 49, warm, energetic, a storyteller. He wants IoT brew consistency with real-time barista coaching — Cropster sensors on every machine paired with a coaching app. It's a beautiful vision. It's also a dual-domain project: the technology layer (Complicated — sensors, data, dashboards) sits on top of a Complex people layer (baristas need to trust the system, identity and craft are at stake, store culture varies wildly). Daniel hasn't separated these layers. He also hasn't thought through what happens when a barista gets a score that says their technique is wrong — and they've been pulling shots for fifteen years.

    Cass Westbrook, VP Marketing & Digital. Australian, 36, fast, polished, eighteen months at Banyan. She's pitching an AI-powered in-app drink finder — an LLM over the menu and ordering data. She speaks in frameworks, references competitors fluently, and has clear metrics. She's also the newest person in the room, and this is her play to demonstrate strategic relevance. The project is real, but the timeline is aggressive and the organizational readiness is thin. Push too hard and she'll shut down. Push too little and the board gets a project that launches on schedule and fails on adoption.


    What the scenario tests

    This isn't a Cynefin quiz. Correctly classifying a domain is necessary but not sufficient.

    The scenario tests five skills simultaneously: whether you can classify the project accurately, whether you surface the proposed approach through inquiry before jumping to advocacy, whether you coach the approach to fit the domain through questions rather than directives, whether you surface the organizational readiness gaps that will kill the project before the technology does, and whether you can read each person's AI stance and adapt your conversation accordingly.

    A skeptic, a pragmatist, and an evangelist need very different conversations. Maggie needs evidence and respect for her caution. Raj needs you to match his energy before you slow him down. Elena needs you to recognize she's already thought this through. Daniel needs you to honor his craft passion while separating the technology from the people problem. Cass needs you to take her seriously without rubber-stamping a timeline that won't hold.

    The hardest part isn't analysis. It's the coaching stance. You have the authority to tell each of them what to do. The scenario tests whether you choose to ask instead — and whether asking produces a better outcome than telling.


    Why coffee

    Every AI project in this scenario maps to a real category of initiative that organizations face today: back-office automation, predictive maintenance, demand forecasting, IoT-driven quality management, and customer-facing personalization. We set them inside a coffee chain because the industry is concrete, tangible, and grounded — you can picture the espresso machine going down during Saturday rush, the grandmother in the AP team wondering about her job, the barista who's been pulling perfect shots for fifteen years getting a score from a sensor.

    The technology references are real. Cropster — an Austrian company — builds exactly the kind of IoT and analytics infrastructure that Raj and Daniel are proposing. La Marzocco machines with IoT connectivity are already deployed worldwide. The vendor landscape in the scenario reflects the actual market.

    The people problems are universal. Every organization rolling out AI faces the same human dynamics: the skeptic who's seen it fail, the operator who wants to move faster than the organization can absorb, the newcomer trying to prove themselves, the craftsperson whose identity is threatened by automation, and the quiet professional who's already figured it out but needs someone to recognize that.


    Try it in May

    Cynefin in the Coffee Lab is free throughout May on RolePlays.ai. Five personas, five conversations, advanced difficulty. Each session runs about 25 minutes. You can do one or all five — the order is yours.

    Register or log in and you'll find the scenario waiting. Pick the conversation that feels most relevant to your world — or start with the one that intimidates you most.

    After each session, you'll receive detailed feedback scored against five criteria: Cynefin classification, inquiry before advocacy, coaching stance, organizational readiness, and AI stance adaptation. The feedback references specific moments from your conversation — not generic advice.

    See how you do when the PE fund wants answers in 90 days and five very different people are sitting across from you.


    If you're an L&D professional exploring how to train leaders for AI strategy conversations — not just the technology, but the human dynamics underneath — let's talk about what this looks like for your organization.