
Full team using AI within two weeks: a workshop for a 22-person marketing agency

How a full-day AI workshop built around real client workflows moved a 22-person agency from scattered experimentation to consistent, measurable use — including the skeptics.

Industry · Marketing Agency
Team size · 22 people
Service · AI Workshop
Completed · April 2026
22 attendees — 100% adoption within 2 weeks
10–15% efficiency gain on content and reporting
2 weeks to full team adoption post-workshop
6 reusable prompt templates shipped on day one

The Situation

The MD of this 22-person agency described the problem clearly in our first call: "I know AI is something we should be using. I just don't know what that means for us specifically." That is an honest starting place, and a more useful one than the alternative — teams that think they have figured it out because a few people have been playing with ChatGPT for a few months.

That was roughly the situation here. A handful of people, mostly on the junior end, had been using ChatGPT on their own initiative. Results were inconsistent. Some were getting useful drafts for client copy; others had tried it once, gotten something mediocre, and stopped. Nobody had shared what was working. There were no shared prompts, no common process, no agreed-upon view of where AI fit into the agency's actual workflows. Two people had invested real time trying to make it work and had given up after it failed to meet the bar they needed for client-facing output.

The senior team had a different problem. Several of them were openly skeptical — not hostile, just unconvinced. They had seen the demos and read the headlines, and their honest view was that AI could not do what they did. That is not an unreasonable position when your only frame of reference is bad outputs from a tool you have not been trained to use. The problem is that skepticism at the senior level shapes the culture of the whole team. When the people who set the standard for quality are not using a tool, the people who might benefit from it feel like they are doing something slightly off-brand when they do.

Leadership's goal was clear: they wanted the whole team using AI in a way that was consistent, measurable, and did not produce outputs that needed to be thrown away. They wanted to close the gap between the enthusiastic early adopters and the people who had written it off. And they wanted to do it without disrupting client work for more than a day. That last constraint shaped the format: one full day on-site, with follow-up support for two weeks afterward.

The Approach

We do not run generic AI workshops. The fastest way to lose a room of experienced practitioners is to spend forty minutes explaining what a large language model is. These are professionals with real work to do; what they want to know is whether this tool can help them do it better and, specifically, how. So we do not start with the technology — we start with the work.

Before the workshop, we sent a short pre-survey to all 22 attendees. The core question: what are the three tasks in your week that take the most time but feel the most mechanical? The answers clustered predictably. First drafts of client-facing copy. Internal status reports and summaries. Research and briefing documents. Reformatting and adapting existing content for different audiences or formats. Those four categories accounted for roughly 70% of the time people described spending on this kind of work, and they became the backbone of the curriculum. Nothing in the day was generic; every example mapped to something the team actually did.

We also ran a 30-minute call with the MD and two senior team members a week before the workshop. The goal was not to pitch them on AI but to understand what good looked like for this team — what the bar was for client work, where the real quality bottlenecks were, and which workflows they were most worried about. That call changed the curriculum in one specific way: we added a dedicated section on how to review and edit AI output rather than just how to generate it. That section turned out to be the one senior staff found most useful, and it addressed the core concern they had going in — that using AI meant accepting lower-quality output.

The day itself was structured in two halves. The morning was teaching: prompt fundamentals, how to give context, how to structure a request so the output is actually useful, and the common failure modes and how to avoid them. The afternoon was hands-on practice in pairs, working on real briefs from the agency's current client work. We circulated throughout the afternoon to work individually with people who were stuck, unconvinced, or asking the more specific questions that never surface in a group setting. The one-on-one time in the afternoon is consistently where the most progress happens for the people who start the day most resistant.

What We Covered

Prompt engineering was the core, but framed specifically for an agency context. Not how to write a prompt in the abstract, but how to write a prompt that produces a usable first draft of a client email — which means understanding what context Claude needs, what tone and format constraints to specify, and how to give it enough of the brief to produce something on-brief without making the prompt itself unmanageable. We built every example around real agency deliverable types: client email updates, social caption variants, campaign brief summaries, internal end-of-week reports.

The section on reviewing and editing AI output was equally important, and it is the piece most often skipped in AI training. The failure mode we see most often is not that people cannot generate output — it is that they do not know how to efficiently close the gap between what AI produces and what can actually go out the door. We walked through a practical framework: read for accuracy first, then for voice, then for client-fit. Where to push the AI for revisions versus where to just edit the text directly. How to build feedback into the prompt for next time rather than correcting the same issue in every session.

We spent a full session on building repeatable prompt templates — short, structured prompts for recurring task types that anyone on the team can use consistently without starting from scratch each time. By the end of the afternoon, the team had built six templates together: one for client status updates, one for social caption variants from a brief, one for internal meeting summaries, one for reformatting long-form content into shorter formats, one for generating initial research outlines, and one for writing agency case study drafts. All six were shared in the team's Notion before people left the building.
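To give a sense of what those templates look like in practice, here is a simplified sketch along the lines of the client status update template. It is an illustration of the structure rather than the team's actual template, and the bracketed fields are placeholders the writer fills in each time:

"Draft a weekly status update email from [agency name] to [client contact] for the [campaign name] campaign. Context: [two or three sentences on where the campaign stands]. Cover what shipped this week, what is in progress, and any decision we need from the client. Tone: warm and direct, no marketing jargon. Keep it under 200 words, in short paragraphs, with no bullet points. Use only the facts and numbers provided above; do not invent any."

The value is less in the exact wording than in the consistency: everyone starts from the same structure, fills in the brackets, and spends their attention on reviewing the draft rather than composing the request.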

We also covered the practical difference between Claude and ChatGPT — not as a product comparison but as a workflow decision. Claude handles long documents and nuanced tone instructions better for the kind of work this team does most often. ChatGPT has its own strengths for other kinds of tasks. Giving people a clear, simple decision rule reduces the cognitive load of choosing and increases the chance they actually open the tools. The goal is not to make people experts in AI tooling — it is to make the right tool feel obvious for the task in front of them.

The Results

The clearest measure of a workshop's success is not what happens in the room — it is what happens in the two weeks after it. In this case, the MD reported that all 22 attendees were actively using AI tools within two weeks of the workshop. Not in a tried-it-once sense, but in a this-is-now-part-of-how-I-do-this-task sense. That is the difference between exposure and adoption, and it is the metric that actually matters for an organisation trying to change how work gets done.

The six prompt templates built during the afternoon session were adopted immediately and iterated on within the first week. Two team members independently sent in variations they had developed on the originals — refinements based on specific client tone requirements that their accounts needed. That is the right kind of follow-on: the team treating the templates as a starting point to improve rather than a fixed artifact to preserve. It is a sign that the underlying principles landed, not just the templates themselves.

The efficiency gain on content and reporting workflows came in at around 10 to 15% based on time estimates the team self-reported in a two-week follow-up check-in. That is a real number, but a conservative one — the bigger gains tend to compound over the following months as people get faster and more confident with the tools. First-draft time on client copy was down noticeably. Status report writing, which the team had uniformly described as a necessary evil, had become faster and meaningfully less effortful for most people.

The part that surprised the MD most was which people became the most engaged. The two most skeptical senior team members — the ones who had been most explicitly unconvinced going into the day — were among the first to send follow-up questions after the workshop. One of them built a custom prompt template for a specific client account they manage and shared it with the rest of the account team without being asked. The skeptics did not need to be persuaded. They needed to be shown something specific enough to be useful to them. Once they had that, the skepticism resolved on its own.

"The people I thought would push back the hardest were the ones emailing questions the next morning."

Similar results for your team?

Every workshop we run is built around your team's actual workflows — not generic AI 101. Book a call and we'll tell you what that would look like for your team.

Book a Call