i spend a lot of time in cursor with an ai assistant beside me, using the workspace and mcp layout i described in my cursor setup. for a while i thought the path to better outputs was better prompts. it was not. the thing that actually moved the needle was connecting the assistant to the systems my team already uses as the source of truth: jira for issue tracking and github for prs and code review.
quick answer#
i use model context protocol (mcp) servers for jira and github inside my editor. i start from templates (that i created with ai assistance), have the ai draft a proper issue from my rough notes, and then reuse that same narrative in the pull request with cross-links. the documentation sounds like overhead, but it is actually what makes later ai sessions work better, and every future model reads from the same source of truth instead of guessing.
who this is for#
- engineers already living in jira and github who are tired of copy-pasting between tabs
- anyone doing multi-step ai work where the second session has no idea what the first one decided
- teams that care about reviewable intent, not just the final diff
the loop in plain terms#
i set up mcp integrations so the assistant can create and update issues and pull requests without me retyping metadata.
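for context, cursor reads mcp server definitions from a `.cursor/mcp.json` file. a minimal sketch looks something like this; the package names and env variable names here are illustrative placeholders, so check the docs for whichever mcp servers your team actually trusts before copying:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "<your-github-mcp-server>"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<from your env, never committed>" }
    },
    "jira": {
      "command": "npx",
      "args": ["-y", "<your-jira-mcp-server>"],
      "env": { "JIRA_API_TOKEN": "<from your env, never committed>" }
    }
  }
}
```

once this is in place, the assistant can call issue and pr tools directly instead of asking me to paste metadata in.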
i describe what i want in rough bullets:
- goals
- risks
- open questions
- links
the ai maps that onto my issue template so the fields read like something a human reviewer would thank me for.
i point the same assistant (or a fresh chat) at the issue as the source of truth while it implements. the issue holds scope, acceptance checks, and explicit out-of-scope notes.
when the branch is ready, i have the ai open or update the pull request using a second template that documents the summary, how to test, the risk level, and a link back to the issue.
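my pr template is roughly this shape. the exact sections are my own convention, not a standard, and `PROJ-123` is a placeholder:

```markdown
## summary
what changed and why, in two or three sentences.

## how to test
1. steps a reviewer can actually run
2. the result they should see

## risk
low / medium / high, plus what could break and how we would notice.

## links
closes PROJ-123 (the jira issue this pr implements)
```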
the whole idea is one narrative in the places my team already looks, instead of three slightly different versions of the story scattered across slack, a comment thread, and a scratch file on my desktop.
why templates matter more with ai#
i know templates look bureaucratic. but a template is really just a schema. the ai fills in structured sections way more reliably than it invents structure on its own from a blank page. my templates bake in what “done” means, what reviewers should check, and where the links go.
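to make the "template is a schema" point concrete, here is a minimal python sketch of mapping rough notes onto a fixed issue shape. the field names are hypothetical, not my real jira template:

```python
# minimal sketch: a template is a schema the ai fills in.
# field names here are hypothetical, not a real jira template.

ISSUE_TEMPLATE = {
    "title": "",
    "goal": "",
    "out_of_scope": "",
    "acceptance_checks": [],
    "risks": [],
    "links": [],
}

def draft_issue(rough_notes: dict) -> dict:
    """copy the template, fill whatever the notes cover, leave the rest blank for a human pass."""
    issue = {field: value for field, value in ISSUE_TEMPLATE.items()}
    for field, value in rough_notes.items():
        if field in issue:  # anything outside the schema is dropped, not invented around
            issue[field] = value
    return issue

notes = {
    "title": "add retry to the nightly sync job",
    "goal": "stop transient network failures from failing the whole run",
    "risks": ["retry storm if the backoff is misconfigured"],
}
issue = draft_issue(notes)
print(issue["title"])  # -> add retry to the nightly sync job
print(issue["out_of_scope"])  # blank, which is the point: gaps stay visible
```

the useful part is that unknowns stay visibly blank for me to fill in, instead of the model quietly inventing scope to cover them.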
the schema a template creates has a second benefit: when everyone on the team uses it, the consistent format makes human reading and reviewing easier, faster, and more accurate, a point i also made in starter templates for ai rules, skills, and commands.
when the ai drafts the issue first, i get something i can actually read and edit in seconds, with a clear title, a scoped description, and test notes, instead of staring at a blank text box trying to remember what i, or my teammates, meant.
mcp as the glue#
without live tool access, the assistant is guessing at any information you did not explicitly provide. it does not know the ticket number, the labels, or whether a pull request already exists. with mcp, it can actually look those things up, apply the template, and keep identifiers straight.
that sounds small, but it kills an entire category of mistakes: wrong links, stale titles, and descriptions that drifted from what the code actually does. reviewers see consistent intent instead of hunting for mismatches, and the next ai session can re-fetch the real object in its current state instead of relying on whatever was in chat history.
the meta move: let the ai brief the next ai#
here is the single habit that made the biggest difference. before i switch contexts (new chat, sub-agent, long implementation pass), i ask the current model to document everything it did in the jira issue. i treat that block like a handoff document between shifts. subsequent sessions can then read the issue to see what was done, where it was left off, and what needs to happen next to continue the task (which works because the task was well defined at the start, with done criteria, thanks to the template).
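the handoff block i ask for looks roughly like this. the structure is my own convention and the content is an invented example, so adapt freely:

```markdown
## handoff
- done: retry logic added to the sync job, unit tests passing
- left off: backoff config is still hardcoded, needs to move to settings
- next: wire up the config, then update the pr description
- decisions: chose exponential backoff over fixed delay (reasoning in issue comments)
```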
the ai is surprisingly good at this. it takes a messy conversation and compresses it into a clean set of instructions that the next session can pick up and run with. no rediscovering the plan, no “where was i?”.
trade-offs i accept#
- setting up mcp and templates takes real time upfront, but it pays for itself after just a few real tasks
- i still read every issue and pull request before it goes out. the ai writes the first draft, and i decide if it ships
- for a two-line fix, i skip the full ceremony. that is fine. not everything needs a process
faq#
does this slow you down on tiny changes?#
sometimes, yes. for a trivial low-risk change i skip the heavy template. but if there is any chance someone (including future me) will need to understand what happened six months from now, i use the full loop.
what if the issue and the code drift apart?#
i treat that as a bug. if scope changes during implementation, i update the issue first, then the pull request. the ai can actually do that sync for me if i ask it to, for example “update the issue description to match what the code actually does now”.