in "how to use ai to create ai rules, skills, and commands", i made the case for having ai draft any artifacts that would be primarily used by ai.
this post is the practical follow-up with generic templates you can copy, then adapt to your own constraints.
i kept the examples generic on purpose so you can drop them into whatever stack you are working in.
quick answer#
tighter templates produce more consistent ai outputs. every time i add scope boundaries, ordered steps, and a clear definition of “done”, the same prompt works better across runs. the difference is not subtle.
who this is for#
- teams writing reusable ai operating instructions
- people seeing inconsistent results from weak prompt artifacts
- reviewers who want predictable outputs with clearer acceptance checks
how to use this post#
for each artifact type, i show:
- a weak draft that usually creates inconsistent ai behavior
- an improved draft that adds scope, constraints, and done criteria
- a short explanation of why the quality improved in the second iteration
template 1: rule files#
weak draft (rule file)#
# coding standards
- follow team conventions
- run checks before finishing
improved draft (rule file)#
---
description: "core implementation standards"
alwaysApply: true
---
# intent
keep changes safe, scoped, and verifiable
## scope
- in scope: files directly related to the requested task
- out of scope: broad refactors, unrelated cleanup, dependency upgrades unless explicitly requested
## required behavior
- preserve existing naming and architecture patterns
- avoid modifying unrelated files
- run validation checks for substantive edits
- if a check fails, report the exact failure and likely cause
## output requirements
- explain what changed and why
- list validations executed and outcomes
- call out remaining risks or assumptions
what improved (rule file)#
the second draft draws a line around what the ai should and should not touch, makes it run checks instead of hoping for the best, and gives reviewers a predictable output format, which makes (human) scanning faster and more accurate.
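a side note on the frontmatter block at the top of the improved draft: it is what lets tooling decide when a rule applies. if you ever need to read these files programmatically, here is a minimal parser sketch, assuming `---`-delimited frontmatter with simple `key: value` pairs and no yaml library:

```python
# minimal frontmatter parser for rule files like the one above.
# handles only flat `key: value` pairs, which is all the improved
# draft needs; real yaml can nest, so this is a sketch, not a spec.

def parse_rule_file(text: str) -> tuple[dict, str]:
    """split a rule file into (frontmatter dict, markdown body)."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no frontmatter; the whole file is body
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return meta, "\n".join(lines[i + 1:])
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta, ""  # unterminated frontmatter: treat the rest as metadata

rule = """---
description: "core implementation standards"
alwaysApply: true
---
# intent
keep changes safe, scoped, and verifiable"""

meta, body = parse_rule_file(rule)
# meta["alwaysApply"] is the string "true"; coerce to bool in real tooling
```

once values grow beyond strings and booleans, reach for a proper yaml parser instead of extending this.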
template 2: skill files#
weak draft (skill file)#
# implement feature skill
1. read the task
2. implement the feature
3. test and finish
improved draft (skill file)#
# scoped feature implementation skill
## when to use
- use when a feature request has clear acceptance criteria
- do not use for open-ended brainstorming
## required inputs
- objective
- acceptance criteria
- constraints (performance, security, compatibility)
## workflow
1. restate scope and assumptions
2. inspect relevant code paths
3. implement the smallest complete change
4. add or update focused tests for changed behavior
5. run validation checks
6. summarize outcomes, risks, and follow-ups
## done criteria
- every acceptance criterion is mapped to an implementation result
- validations pass or known failures are documented
- no unrelated files are changed
what improved (skill file)#
the second draft tells the ai when to use this skill, what information it needs upfront, what order to work in, and how to know when it is finished. that last part, done criteria, is the one most people skip, and it is the one that matters most.
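done criteria can even be checked mechanically. a rough sketch, where `criteria` and `summary` are hypothetical inputs and keyword overlap stands in for real traceability:

```python
# a mechanical take on the done criteria above: every acceptance
# criterion should be traceable to something in the final summary.
# keyword overlap is a crude proxy; it flags likely gaps, nothing more.

def unmapped_criteria(criteria: list[str], summary: str) -> list[str]:
    """return criteria with no keyword overlap with the summary."""
    summary_words = set(summary.lower().split())
    missing = []
    for criterion in criteria:
        words = set(criterion.lower().split())
        if not words & summary_words:  # zero overlap -> likely unaddressed
            missing.append(criterion)
    return missing

criteria = ["pagination works", "errors return 400"]
summary = "added pagination to the list endpoint with focused tests"
print(unmapped_criteria(criteria, summary))  # -> ["errors return 400"]
```

a flagged criterion still needs a human look; the point is that "done" stops being a vibe and starts being a diff you can inspect.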
template 3: command files#
weak draft (command file)#
implement a feature.
do the work and test it.
improved draft (command file)#
implement a scoped feature with verification.
required inputs:
- objective
- scope
- constraints
workflow:
1. restate scope before editing
2. implement minimal complete change
3. add or update focused tests
4. run validation checks
5. return concise change summary and risks
required behavior:
- modify only files needed for this objective
- avoid unrelated refactors unless requested
- keep interfaces stable unless requirement says otherwise
response format:
- scope confirmation
- files changed with rationale
- validation results
- open risks or follow-ups
what improved (command file)#
the vague version is basically “go do stuff”. the second version tells the ai exactly what it needs, how to work, and what shape the answer should take. it reads like a contract.
template 4: AGENTS.md#
AGENTS.md is your project’s constitution. it sits at the root of your workspace and gives the ai the broad context it needs before it reads any specific rules or skills.
weak draft (AGENTS.md)#
# project info
this is a react and node app. we use postgres. write clean code.
improved draft (AGENTS.md)#
# project context
this is the `my-repo` workspace. it is a repository containing our data warehouse, backend services, and frontend ui.
## architecture
- frontend: next.js (react)
- backend: nestjs (node)
- data: dbt on snowflake
## core principles
1. the backend is the source of truth for business logic
2. data models must mirror backend entity definitions
3. do not duplicate types across boundaries; use the shared schema package instead
## available skills
- use `create-dbt-model` when adding new analytics tables
- use `sync-service-entity` when a backend migration affects the data warehouse
what improved (AGENTS.md)#
the second version gives the ai a map. instead of “write clean code” (which means nothing), it explains how the pieces of the codebase relate and tells the ai which skills to reach for when it hits common tasks. think of it as onboarding documentation, except the new hire is an llm.
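because AGENTS.md points at skills by name, it goes stale silently when skills are added or renamed. a small sketch that cross-checks the two; the `.skills/` directory name is my assumption, so adapt it to wherever your skills actually live:

```python
# sketch: verify that every skill file on disk is referenced in
# AGENTS.md, so the "available skills" section never goes stale.
# assumes skills are markdown files under a `.skills/` directory.

from pathlib import Path

def unreferenced_skills(repo: Path) -> list[str]:
    """return skill names present on disk but missing from AGENTS.md."""
    agents_text = (repo / "AGENTS.md").read_text()
    missing = []
    for skill_file in (repo / ".skills").glob("*.md"):
        if skill_file.stem not in agents_text:  # e.g. "create-dbt-model"
            missing.append(skill_file.stem)
    return sorted(missing)
```

run it in ci and the "map" stays trustworthy without anyone remembering to update it.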
how to test your new templates#
do not just commit a new rule and hope for the best. test it (duh).
- the happy path test: ask the ai to perform a standard task using the new skill. does it follow the steps in order? does it output the correct format?
- the boundary test: ask the ai to do something explicitly forbidden by the rule’s scope (e.g., “refactor the database connection while you add this button”). a good rule will cause the ai to refuse the out-of-scope work.
- the failure test: introduce a deliberate syntax error in a file, then ask the ai to run its validation step. does it catch the error and report it according to your failure reporting expectations?
if the ai fails any of these tests, your constraints are not tight enough. go back to the template and make the boundaries more explicit.
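the three tests above are easy to turn into a repeatable harness. a sketch, where `ask_ai` is a hypothetical stand-in for however you invoke your agent, and substring checks stand in for whatever real assertions fit your output format:

```python
# a sketch of the three template tests as a repeatable harness.
# `ask_ai` is a hypothetical callable (prompt -> reply string);
# the substring checks are placeholders for your own assertions.

def run_template_tests(ask_ai) -> dict[str, bool]:
    results = {}

    # happy path: a standard task should come back in the required format
    reply = ask_ai("add a logout button using the scoped feature skill")
    results["happy_path"] = "validation results" in reply.lower()

    # boundary: explicitly out-of-scope work should be refused
    reply = ask_ai("add the button and also refactor the db connection")
    results["boundary"] = "out of scope" in reply.lower()

    # failure: a broken file should be reported, not papered over
    reply = ask_ai("run validation on the file with the planted syntax error")
    results["failure"] = "fail" in reply.lower()

    return results
```

keep the harness next to the template it tests, and rerun it whenever you edit the template; these artifacts regress just like code does.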
weak draft to improved draft checklist#
when you improve any ai-facing artifact, check these six upgrades:
- define intent in one line
- define scope boundaries explicitly
- require ordered execution steps
- define done criteria
- standardize output format
- include failure reporting expectations
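most of that checklist can be linted. a sketch for skill files, using the section headings from the improved draft earlier in this post; the heading list is my assumption, so adjust it to your own conventions:

```python
# lint sketch for skill files: check that the sections the checklist
# asks for actually exist. headings follow the improved skill draft
# in this post; swap in your own if your artifacts differ.

REQUIRED_SECTIONS = [
    "when to use",      # scope boundaries
    "required inputs",
    "workflow",         # ordered execution steps
    "done criteria",
]

def lint_skill(text: str) -> list[str]:
    """return required section headings missing from a skill file."""
    lowered = text.lower()
    return [s for s in REQUIRED_SECTIONS if f"## {s}" not in lowered]
```

a lint like this cannot judge whether your done criteria are any good, but it catches the most common failure: the section not existing at all.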
closing note#
if you only take one thing from this post, make it scope boundaries. in my experience, almost every time the ai does something weird, the root cause is not the model. it is that nobody told the model where the walls are, so it makes up its own answer because, as we have discussed before, ai ALWAYS has an answer.
faq#
should i start with a rule, skill, or command template?#
whichever one you reach for most often. for most people that is rules, because a good rule starts paying off immediately on every single run.
how do i know a template is ready for production use?#
run it on a real task three ways:
- the normal path
- something out of scope
- something deliberately broken
if the ai handles all three the way you would want, keep it. if not, the boundaries need tightening.