for each rule, skill, or command, remember that the primary reader is ai. so ai should, at the very least, draft the first version. then you can dial in the details from there. this gives you a good starting point, and most notably, the draft will be written in the format the ai understands best. you can focus on making sure the content is correct. for copy-paste templates and weak versus improved examples, see starter templates for ai rules, skills, and commands.
that does not mean handing over ownership. i still define the goals, the boundaries, and what “good” looks like. the ai just takes that guidance and turns it into structured instructions that other ai runs can actually follow without drifting.
quick answer#
since ai is the one reading and following a rule, skill, or command, let ai write the first draft. you still own the truth, the boundaries, and what counts as “done”. but the formatting and structure? ai is better at writing for ai than you are, and that is fine.
who this is for#
- teams building reusable ai instructions for repeated workflows
- people who want cleaner ai execution with fewer ambiguous outputs
- reviewers who need predictable artifact quality and safer constraints
definitions: rules vs. skills vs. commands#
before you ask ai to draft anything, it helps to know which bucket it belongs in. here is how i think about the three types:
- rules (`.mdc` files): passive, always-on guardrails. use these for things the ai should never do, or formatting it should always use. (e.g., “always use lowercase for markdown headings”.)
- skills (`SKILL.md` files): active, multi-step runbooks. use these for complex processes that require a specific order of operations. (e.g., “how to scaffold a new dbt model and its tests”.)
- commands (`.md` files): trigger-based shortcuts. use these for quick, repetitive actions you want to fire off with a slash command. (e.g., “/review-pr” to run a specific review prompt.)
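for a sense of scale, a rule file is usually tiny. here is a sketch of what the lowercase-headings rule could look like as an `.mdc` file. the frontmatter fields shown are cursor-style and illustrative; adjust them to whatever your tool actually expects:

```markdown
---
description: enforce lowercase markdown headings
globs: "**/*.md"
alwaysApply: true
---

- all markdown headings must be lowercase
- never use title case or sentence case in headings
- this applies to readmes, docs, and any generated markdown
```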
the core idea#
match the writer to the reader.
if ai is the one following the instructions, ai should write the first draft. in my experience it will:
- pick a format that is natural for ai to parse (`.md`, `.mdc`)
- draw sharper lines around what the task is and is not
- spell out acceptance criteria i would have left vague
- produce fewer “wait, what do i do next?” gaps
- stay more consistent when the same artifact runs multiple times (repeatable across human users)
why this works in practice#
- the format fits the reader: ai writes instructions in a shape that other ai runs parse well. not surprising, but it matters more than people expect
- iteration is faster: you can go from rough idea to usable draft in minutes instead of agonizing over wording from a blank page
- the scaffolding is more complete: ai is annoyingly good at remembering to include constraints, edge cases, and verification steps that i would skip
- reuse gets easier: once a rule or skill format works, you can stamp out the same pattern for a different task or a different repo without starting over
where humans stay in charge#
the human still decides:
- what problem we are actually solving
- what is in scope and what is not
- what “good” looks like
- which risks are not acceptable
- whether the final version ships
the ai writes. i approve. this boundary is non-negotiable for me.
starter workflow you can use#
- write a short human brief: objective, scope, constraints, and examples
- ask ai to draft the rule/skill/command in a strict template
- review for accuracy, ambiguity, missing constraints, and unsafe assumptions
- feed corrections back and ask for a tighter second draft
- test it on one real task
- only keep the artifact if it adds direct, consistent value
- be careful not to create rules just for ceremony
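the “strict template” in step 2 can be as simple as a skeleton you paste into the prompt and ask the ai to fill in. a hypothetical example (the section names are my own, not a standard):

```markdown
# skill: <name>

## trigger
when should this skill activate, and when should it stay silent?

## steps
1. ordered, concrete actions, one per line

## out of scope
- things this skill must never touch

## done when
- observable acceptance criteria, not vibes
```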
example of the human brief#
this does not need to be polished. i treat it as a brain dump of what i want.
human brief: “i need a skill for adding a new api endpoint. it needs to update the openapi spec first, then create the controller, then write a failing test. do not let it touch the database layer”.
the ai will take this messy sentence and turn it into a structured markdown file with clear trigger conditions, step-by-step instructions, and explicit out-of-scope boundaries.
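to make that concrete, here is one plausible shape of the output. the file name, frontmatter fields, and section headings are illustrative, not a standard; only the steps and boundaries come from the brief:

```markdown
---
name: add-api-endpoint
description: scaffold a new api endpoint with spec, controller, and failing test
---

## trigger
use when the user asks to add a new api endpoint.

## steps
1. update the openapi spec first: add the new path, method, and schemas
2. create the controller that implements the spec
3. write a failing test that exercises the new endpoint

## out of scope
- do not touch the database layer or any migrations
- do not modify existing endpoints
```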
maintenance and decay#
ai instructions rot over time. as your project evolves, the rules that worked in march will start causing friction in august.
how to spot decay:
- the ai starts apologizing and saying “i will follow the rule next time” (but it did not this time)
- you find yourself manually overriding the ai’s output on a specific task
- the rule references files or folders that have been renamed
when this happens, do not just tweak the rule manually. feed the broken rule back to the ai, explain why it is failing, and ask it to generate an updated draft. i regularly add a note at the end of my prompts that says something like “please update any relevant documentation”.
common failure mode#
the biggest way this goes wrong is when ai starts making up policy instead of encoding your policy.
here is the thing about ai: it always has an answer. always. so if your intent is vague, the ai will not stop and ask; it will just guess. and confident guesses that are wrong are worse than no answer at all.
my rule is to get the truth right first. let ai worry about the formatting second.
next post#
here is a follow-up with concrete starter templates plus weak-draft to improved-draft examples.
faq#
what should i optimize first: format or policy?#
policy, every time. if the policy is vague, a beautifully formatted rule just makes the wrong behavior repeat faster and with more confidence.
when should i not use this method?#
when you are still figuring out what you even want. if requirements are fuzzy, start with human conversation. do not convert anything into a structured artifact until you actually know where the walls are.