below is my cursor setup as of friday mar 13, 2026, but rewritten as a tutorial you can actually use.
i am not claiming this is the “best” setup. i am claiming this setup works for my day-to-day work, and that you can borrow the pieces that work for you.
quick answer#
three things make this setup work for me: one workspace with all my repos in it, rules and skills that tell the ai how my projects actually work, and a habit of verifying outputs before i trust them.
who this is for#
- people using cursor for technical work across multiple repositories
- teams that want higher ai throughput without losing review quality
- builders who want a practical setup they can replicate quickly
why this matters#
my goals for this post:
- explain what i use, why i use it, and what each part does
- give you enough links and steps to replicate the setup quickly
- keep it opinionated (because opinions are useful), but still practical
machine#
first, i will admit that i am in a very good hardware situation for data + ai workflows. that helps.
my current machine:
- device: macbook pro 16" (m2 max, 64gb ram)
- os: tahoe 26.3.1
workspace layout#
my setup is multi-repo on purpose. i work on a larger team with separate repos for data, backend, and ui. there are a few more infrastructure-related repos, but most of our work lives in those three, with mine exclusively in the data repo. all of it combines into one web application.
if your work spans multiple repos, put all of them in one cursor workspace (even if you do not commit in all of them). the extra context makes agent output much more useful.
why:
- agents can reason across boundaries (for example, backend contract changes that impact frontend and data models)
- you reduce manual cross-team “did you change x?” pings
- indexing across repos improves semantic search quality
how to set this up:
- create/open your primary repo in cursor
- add other related repos via file -> add folder to workspace...
- order the repos with your most-used repo at the top and your least-used at the bottom (semantic search scans your workspace file tree top to bottom, so results from your most-used repo come back first)
- save the workspace file so you can reopen the exact same context each day
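the steps above end up captured in a saved `.code-workspace` file. a minimal sketch, where the folder names are placeholders for the data/backend/ui split described earlier, not my actual repo names:

```json
{
  "folders": [
    { "path": "data" },
    { "path": "backend" },
    { "path": "ui" }
  ],
  "settings": {}
}
```

note that the order of `folders` is the file-tree order, which is why i keep the most-used repo first.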
cursor settings i rely on (what each one does)#
these are not exotic settings. they are just my defaults.
starter settings.json:

```json
{
  "workbench.colorTheme": "Cursor Dark",
  "editor.fontFamily": "JetBrains Mono, Menlo, Monaco, 'Courier New', monospace",
  "editor.fontSize": 14,
  "editor.fontLigatures": true,
  "editor.formatOnSave": true,
  "files.autoSave": "onFocusChange",
  "editor.inlineSuggest.enabled": true,
  "editor.minimap.enabled": false,
  "editor.rulers": [100],
  "terminal.integrated.defaultProfile.osx": "zsh"
}
```

what each setting does and why i keep it:
| setting | what it does | why i use it |
|---|---|---|
| `workbench.colorTheme` | controls editor color theme | i use cursor dark because contrast is good without being harsh on long sessions |
| `editor.fontFamily` | picks the code font stack | i keep a clean mono font stack so rendering is predictable on any machine |
| `editor.fontSize` | controls code text size | 14 is my readability/screen-density balance |
| `editor.fontLigatures` | enables ligatures in supported fonts | helps quick symbol parsing, especially in sql and typescript |
| `editor.formatOnSave` | auto-formats when saving | eliminates style drift and manual formatting overhead |
| `files.autoSave` | auto-saves files by trigger | onFocusChange is safer than after-delay for active edits |
| `editor.inlineSuggest.enabled` | enables inline ai suggestions | keeps quick local iteration fast |
| `editor.minimap.enabled` | shows/hides code minimap | i disable it to reduce visual noise |
| `editor.rulers` | vertical line markers | keeps me honest on line length |
| `terminal.integrated.defaultProfile.osx` | chooses terminal shell profile | ensures my shell scripts and aliases behave as expected |
keybindings i use constantly#
these are the shortcuts i use hundreds of times a day:
| action | mac | why this matters |
|---|---|---|
| toggle ai sidepanel | cmd+i or cmd+l | fastest path to agent context |
| command palette | cmd+shift+p | entry point for almost every cursor power feature |
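if you want to go beyond the defaults, shortcuts live in keybindings.json (cursor uses the vs code keybindings format). a minimal sketch, the binding below is purely illustrative, not something i actually rebind:

```json
[
  {
    "key": "alt+cmd+m",
    "command": "editor.action.toggleMinimap"
  }
]
```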
model selection strategy (my opinionated default)#
my model strategy is simple:
- use the biggest, baddest model i can afford for all tasks
- again, i recognize my privilege here that i have effectively unlimited access to all of the most powerful models
your model strategy should probably be something a little more like:
- use `auto`/`composer` for most tasks
- switch to a stronger model for architecture or debugging-heavy work
- switch to a faster/cheaper model for repetitive edits, formatting, or broad scans
this keeps quality high without letting cost run wild.
project-level ai behavior: rules, agents.md, and skills#
this is the part that made the biggest difference for me. everything above is editor configuration. everything below is what turns the ai from “generic assistant” into something that actually knows your project.
rules#
rules are persistent instructions that live in your project. i use them to encode the stuff i got tired of repeating, such as team conventions, naming patterns, and “do not do this” guardrails.
example rule (.cursor/rules/project-standards.mdc):

```markdown
---
description: "project coding and validation standards"
alwaysApply: true
---
- keep edits scoped to the requested task
- follow existing naming and folder patterns
- run project validation checks for substantive changes
```

agents.md#
AGENTS.md is a markdown file at the root of your repo. i use it for plain-language context that the agent should always have, including project architecture, how repos relate, and what skills exist.
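a sketch of the shape mine takes. everything below is illustrative, not my real file, but it matches the three-repo layout from earlier in this post:

```markdown
# project context

## architecture
- three repos in one workspace: data, backend, ui
- data models feed the backend api, which serves the ui

## how to work here
- follow the rules in .cursor/rules/ before editing
- available skills live in .cursor/skills/ (e.g. release-checklist)
```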
skills#
skills are step-by-step runbooks for tasks you do over and over. i have one for creating a new data model that handles the sql, the docs, the tests, and the validation summary all in one pass.
example skill (.cursor/skills/release-checklist/SKILL.md):

```markdown
# release checklist
1. run lint, format, and tests
2. list any failing checks with exact remediation
3. summarize risk areas and edge cases
4. provide release notes in markdown
```

context hygiene: ignore what should not be indexed#
cursor respects .gitignore, and you can add .cursorignore for extra exclusions.
this matters for two reasons:
- signal quality (so the ai is not reading garbage)
- safety (so it does not index your secrets, generated files, or vendor blobs)
starter .cursorignore:

```
node_modules/
dist/
build/
coverage/
*.min.js
.env*
```

the art of the @ mention#
even with a clean index, you do not want the ai guessing which files matter. i rely heavily on explicit @ mentions in my prompts.
- `@Files` to pull in specific files
- `@Folders` to give the ai a boundary to look within
- `@Web` when i need it to read a specific documentation page that is not in my workspace
my rule of thumb is that if i already know where the answer lives, i point the ai directly to it. do not make the ai go on a scavenger hunt.
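a scoped prompt in practice looks something like this (the file paths are made up for illustration):

```
using @Files src/models/orders.sql and @Folders tests/models/,
add a status column to the orders model and update its tests.
stay inside those paths.
```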
when the ai gets stuck (troubleshooting)#
no setup prevents every failure. when the ai starts hallucinating or going in circles, here is what i actually do:
- stop arguing with it. if it fails twice on the same thing, going back and forth just fills the context window with noise. i learned this the hard way
- start a new chat. drop the baggage. open a fresh session with only the specific files you need attached
- switch models. sometimes the model is just stuck in a rut. swapping to a different one breaks the pattern more often than you would expect
mcp servers in my setup (what each one actually gives you)#
mcp is what allows cursor agents to talk to my other tools. for how i turn jira and github into the documentation-first loop with those integrations, see a practical ai workflow: jira, github, and mcp. here are the servers i use and what they actually do for me:
| mcp server | what it enables | when i use it | docs |
|---|---|---|---|
| github | repo/issue/pr context and actions | code review, pr drafting, issue triage, release hygiene | github mcp server |
| atlassian | jira + confluence context and updates | converting specs to tickets, reading decisions, status updates | atlassian rovo mcp |
| dbt labs | dbt metadata, lineage, semantic layer, dbt actions | model design, lineage checks, docs generation, build/test help | dbt mcp |
| snowflake | governed data access and mcp-native tools | query validation, analytics checks, warehouse-backed exploration | snowflake mcp |
| figma | design-system + frame context for implementation | translating designs to components with less guesswork | figma mcp server |
| slack | search/read/post capabilities in slack context | finding decisions, summarizing threads, drafting updates | slack mcp server |
a word on security:
- any mcp tool that can write or change things is real power, not a toy
- use least privilege, prefer oauth over api keys sitting in plaintext, and make sure destructive actions require you to say “yes” before they fire
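for reference, mcp servers are configured in .cursor/mcp.json. a minimal sketch with a single remote server, the url below is a placeholder, check each server's docs for the real endpoint and auth flow:

```json
{
  "mcpServers": {
    "github": {
      "url": "https://example.com/mcp/"
    }
  }
}
```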
dbt references i actually use#
if your setup includes dbt, these are the docs i recommend bookmarking:
theme choice#
i prefer dark everything, so my theme in cursor is cursor dark, of course.
replicate this setup in 45 minutes#
if you want the practical version, do this in order:
1. install/update cursor from cursor downloads
2. create one workspace and add all related repos
3. paste the starter settings.json and tweak font/theme
4. set your core shortcuts (cmd+i, cmd+l, cmd+shift+p)
5. add one always-on project rule for coding standards
6. add one skill for your most repeated workflow
7. add a .cursorignore file
8. connect mcp servers one by one (github first is usually easiest)
9. verify each integration with one safe test prompt
10. document your final setup so your team can copy it
verification prompts you can copy#
these are good smoke tests after setup:
- “scan this workspace and summarize architecture in 10 bullets”
- “list project conventions i should follow before i edit code”
- “for this dbt project, explain lineage for model x and suggest tests”
- “summarize open engineering tickets related to this feature”
- “draft a release update message from recent merged prs”
results#
what this actually gives me day to day:
- i start tasks faster because the ai already knows my project
- i almost never retype the same instruction twice
- the ai can reason across repos, which catches things i would miss
- handoffs are cleaner and i have far fewer “oh no, i forgot about that dependency” moments
closing#
do not copy my setup 1:1. that is not the point.
the point is to try things, keep what removes friction, and ruthlessly delete what adds ceremony. the best setup is not the fanciest one. it is the one where the loop between “i want this” and “it is done and verified” stays as short as possible.
faq#
what should i set up first if i only have 20 minutes?#
one workspace with your most-used repos and your core shortcuts. then write one always-on rule so the ai knows your project constraints from the first prompt. everything else can wait.
what is the most common failure mode with this setup?#
context drift. if you do not tell the ai which files or folders matter, it burns tokens wandering around your codebase instead of solving the thing you actually asked about. explicit @ mentions fix this almost instantly.

