Your team needs time to play with AI
By Alexei Dunaway · 6 min read · October 21, 2025

When learning a new skill, training hours matter. BCG found that 79% of employees receiving more than five hours of AI training become regular users, compared to 67% with less exposure. Yet only 36% of employees feel adequately trained today.

Why? Beyond the hours logged, the problem has little to do with employees failing to understand the technology. The real issue is how we're approaching the learning itself: treating AI like Excel when it behaves more like the internet.

You learn the internet by using it, not studying it

Nobody learned the internet from a training guide. There were no courses on "How to Use Search Engines." You opened a browser and clicked around until things made sense (with guardrails, of course).

AI is the same kind of general-purpose technology, the first since the internet to reshape every job, and it is spreading two to three times faster. Companies are responding with slide decks, video modules, and prompt engineering guides that become obsolete the moment they're published.

Everyone knows AI matters. The challenge is that employees are drowning in their actual jobs. They lack time to "upskill" between meetings, and they certainly lack mental bandwidth for another compliance-style training they have to click through.

What they need is protected time to play. 

Play as in the mode of learning that actually works: experiential, low-stakes, and social. The 70-20-10 model has proven this for decades: 70% of learning comes from hands-on challenges, 20% from peer relationships, and 10% from formal coursework. Most companies invert the ratio, front-loading the 10% and wondering why nothing sticks.

Messiness is how you get to familiarity

Wade Foster, CEO of Zapier, has been remarkably consistent on this point: "If you do one thing, just keep doing hackathons." Regular, recurring hackathons where teams can experiment, duplicate efforts, create chaos, and occasionally break things.

This advice makes executives uncomfortable. What about standards? What about duplication? What about governance?

Here's the reality: messiness is the price of adoption. You cannot get 10,000 or 100,000 employees comfortable with AI by having three people in IT figure it out first and then cascade "best practices" down. That approach produces theoretical knowledge and zero muscle memory. Transformation scales through experimentation and social proof, not documentation.

Yes, some teams will build the same thing twice. Yes, someone will use AI for something that doesn't quite work yet. Yes, there will be redundancy. That's hundreds of people moving from theory to intuition, from watching to doing, from skepticism to fluency.

Training professionals become enablers

The role of L&D fundamentally shifts when learning moves from instruction to enablement. 

The focus changes from information delivery to creating the conditions for play. That looks like building sandboxes where people can experiment safely without breaking production systems. It means setting constraints that spark creativity: time-boxed challenges, clear outcomes, defined guardrails. It involves facilitating peer sharing so discoveries spread horizontally instead of only top-down. And it requires sustaining the habit through rituals, recognition, and visible progress.

HubSpot shows what this looks like in practice. They run GrowDAI, a two-day intensive on AI skills, but the real work happens in MondAI Minute, a weekly peer-sharing session where people show what they've built. Together, these programs create space for skill-building, not just adoption, and internal hackathons and "tiger teams" reinforce the habit.

Constraints unlock creativity

Here's the paradox: play works best with structure. When you tell people to "go learn AI," nothing happens. The ask is too abstract, too open-ended, and too easy to deprioritize. Give them a focused challenge, a short timeframe, and a clear outcome, and progress accelerates.

Accenture's GitHub Copilot rollout demonstrates this perfectly. Kristine Steinman, Global IT GitHub Copilot Adoption Lead, was direct: "Hands-on experience is crucial; without this, progress stalls." They created the GitHub Copilot Galaxy Passport program, a gamified journey from training and certification to joining the Aviators Network, hackathons, and events.

Alexandra Ancy, Change Management Lead, explained the design: layers of recognition, community, and incremental challenges. Hilary Winiarz, Product Area Lead, captured the result: "That gamification, that excitement people get when their name is on a leaderboard, is huge."

Consumer apps have already proved this works

Duolingo built a billion-dollar business by making learning feel like a game. Their badge system drove a 116% jump in referrals. A tiny progress indicator on their app icon measurably lifted daily activity.

Corporate training can use the same mechanics. Streaks for daily practice. Lightweight challenges that take minutes instead of hours. Visible progress cues so people see themselves improving. Recognition for useful contributions that others can build on. These design patterns sustain behavior change at scale.

The risks are manageable

Unsupervised experimentation can lead to shadow IT, data leaks, or hallucination-filled outputs getting treated as fact. Early enthusiasm can fade without structures to sustain it. Some teams will get left behind while others race ahead. These are real risks. They're also design challenges with known solutions: clear guardrails (sandbox environments, approved use cases, data handling rules), recurring rituals (weekly demos, monthly hackathons, public leaderboards), and structured peer support (buddy systems, communities of practice, internal champions).

Play accelerates learning. It lowers psychological barriers. It generates social proof at speed. Most importantly, it gives people the one thing training decks never will: confidence born from doing.

Give your team time. Give them constraints. Give them space to be messy. Watch what happens when you enable the play instead of trying to control the learning.

The internet didn't offer a step-by-step manual. AI doesn't either.

