Writing an AI Policy Your Team Will Actually Follow

By Peter Lowe

Category: Governance


Most SME AI policies are either a 30-page template nobody reads or a single Slack message. Here's how to write a practical one-page policy your team will actually follow.

Most SME AI policies fall into one of two camps. The first is a thirty-page document, lifted from a template, full of definitions and risk matrices and references to ISO standards. Nobody reads it. It sits in a SharePoint folder doing nothing. The second is a single Slack message that says “be careful with ChatGPT, don’t put client data in it.” That’s it. That’s the policy.

Neither protects the business. And in most SMEs I work with, one of those two is what passes for AI governance.

The good news is that a useful AI policy for an SME is much shorter, much simpler, and much more practical than the corporate templates suggest. You can write one in an afternoon. The harder part is making sure your team actually follows it.

Why an SME needs a policy at all

Let me get the “why” out of the way quickly, because most owners I talk to assume policies are a big-company concern. There are three concrete reasons an SME needs something written down.

The first is client data. Your team is almost certainly already pasting client information into AI tools — emails, briefs, contracts, meeting notes — without thinking twice. Some of those tools train on what you paste in. Some don’t. Most of your team has no idea which is which. That’s a confidentiality problem waiting to become a contractual one.

The second is intellectual property and copyright. AI tools can produce content that closely resembles existing work, sometimes verbatim. If a team member uses an AI-generated image in a client deliverable and it turns out to be derivative of a copyrighted source, the liability lands with you, not the tool.

The third is the one most people miss: a policy gives your team confidence to use AI properly. Without one, careful people avoid the tools entirely and the business misses out, while less careful people use them in ways that create exposure. A clear policy removes the guesswork in both directions.

What to include — the SME-sized version

Forget the templates. Here’s what actually belongs in an SME AI policy.

An approved tools list. List the tools your team is allowed to use, by name. ChatGPT (paid version, with training disabled). Claude. Microsoft Copilot. Whatever you’ve decided is acceptable. Anything not on the list requires a conversation before use. This single section prevents most of the shadow-AI problems I see.

What data is never allowed in a public AI tool. Be specific. Client names. Financial figures. Anything covered by an NDA. Personal data of any kind. Internal pricing. Unreleased product information. The clearer you are, the easier the rule is to follow.

The disclosure rule. When does the team need to tell a client that AI was involved in their work? My recommendation: always for substantive content (writing, analysis, design), not necessary for assistive use (spell-check, summarising your own notes, brainstorming). Pick a position and write it down, because individual team members will land in different places if you don’t.

Output ownership and verification. Who is responsible for checking AI-generated work before it goes out? The answer should always be the human whose name is on it. Make that explicit. AI is a draft, not a deliverable.

The incident process. What happens if something goes wrong — wrong information sent to a client, sensitive data pasted into the wrong tool, a copyright concern raised? Two sentences will do. Tell whoever handles it, don’t try to hide it, log it so you can learn from it.

That’s the policy. Five sections. One page if you keep the formatting tight, two at most.
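To make that concrete, here’s a rough skeleton of what those five sections look like on the page. It’s an illustration, not the finished template: the bracketed placeholders and version details are examples to swap for your own.

```text
AI Use Policy — [Your Company] (v1.0, reviewed quarterly)

1. Approved tools
   ChatGPT (paid version, training disabled), Claude, Microsoft Copilot.
   Anything else: talk to [named person] before using it.

2. Data that never goes into a public AI tool
   Client names, financial figures, anything covered by an NDA,
   personal data of any kind, internal pricing, unreleased product
   information.

3. Disclosure
   Tell the client when AI produced substantive content (writing,
   analysis, design). No need to disclose assistive use (spell-check,
   summarising your own notes, brainstorming).

4. Ownership and verification
   The person whose name is on the work checks it before it goes out.
   AI output is a draft, not a deliverable.

5. If something goes wrong
   Tell [named person] straight away. Don't try to hide it. We log it
   and learn from it.

Questions: ask [named person]. No question is too small.
```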
What to leave out

This is the part where most SME policies go wrong. They include things that sound important but actively harm adoption.

Leave out generic AI ethics statements. They read like corporate filler and your team will skip past them.

Leave out long definitions. If you have to define “large language model” before getting to the rules, you’ve already lost the reader.

Leave out anything that reads like a legal document. The moment a policy starts using “shall” and “the Company”, people stop treating it as practical guidance.

Leave out blanket bans. “No AI use without written approval from a director” sounds responsible. In practice it pushes everything underground. People will use the tools anyway, just without telling you. You’ve created the worst of both worlds.

How to roll it out

Writing the policy is the easy bit. Getting it followed is where most SMEs trip up.

One page maximum. I’ve said it twice already because it’s the single most important rule. If it runs to two pages, cut something.

Walk the team through it in person. Don’t email it. Sit down for thirty minutes, read it together, take questions. The conversation matters more than the document. People remember what they discussed, not what they were sent.

Pair it with practical examples. Show what good AI use looks like. Show what bad use looks like. Real examples from your own business if you have them, made-up ones if you don’t. Abstract rules don’t stick. Specific examples do.

Review it quarterly. The tools change every few months. A policy written in January will already feel out of date by April. Put a recurring calendar entry in now.

Make it easy to ask questions. The policy should name a person — usually you, in an SME — who any team member can ask without feeling stupid. “I’m not sure if I can use this tool for that task” is exactly the conversation you want to be having.

The trap to avoid

The biggest mistake I see is treating an AI policy as a one-off compliance exercise. Write it, send it, file it, done. That’s not how this works. The tools are evolving fast, your team’s usage is evolving fast, and the risks are evolving with them. A policy is a living document or it’s nothing.

Treat it the way you’d treat a health and safety policy in a workshop: reviewed regularly, talked about openly, updated when something changes, and taken seriously because everyone understands why it exists.

The practical takeaway

You don’t need a big policy. You need a clear one. Five sections, one page, written in plain English, walked through in person, reviewed every quarter. That’s enough to protect the business and give your team the confidence to use AI properly.

If you’d like a copy of the one-page AI policy template Smart AI Studio uses with clients, drop me a message on LinkedIn and I’ll send it over.