
my cursor setup

philip mathew hern (philliant) · 10 min read

cursor - this article is part of a series.

below is my cursor setup as of friday mar 13, 2026, but rewritten as a tutorial you can actually use.

i am not claiming this is the “best” setup. i am claiming this setup works for my day-to-day work, and that you can borrow the pieces that work for you.

quick answer
#

three things make this setup work for me: one workspace with all my repos in it, rules and skills that tell the ai how my projects actually work, and a habit of verifying outputs before i trust them.

who this is for
#

  • people using cursor for technical work across multiple repositories
  • teams that want higher ai throughput without losing review quality
  • builders who want a practical setup they can replicate quickly

why this matters
#

my goals for this post:

  • explain what i use, why i use it, and what each part does
  • give you enough links and steps to replicate the setup quickly
  • keep it opinionated (because opinions are useful), but still practical

machine
#

first, i will admit that i am in a very good hardware situation for data + ai workflows. that helps.

my current machine:

  • device: macbook pro 16" (m2 max, 64gb ram)
  • os: tahoe 26.3.1

workspace layout
#

my setup is multi-repo on purpose. i work on a larger team with separate repos for data, backend, and ui. there are a few more infrastructure-related repos, but most of our work lives in those three, with my work exclusively in the data repo. all of it combines into one web application.

if your work spans multiple repos, put all of them in one cursor workspace (even if you do not commit in all of them). the extra context makes agent output much more useful.

why:

  • agents can reason across boundaries (for example, backend contract changes that impact frontend and data models)
  • you reduce manual cross-team “did you change x?” pings
  • indexing across repos improves semantic search quality

how to set this up:

  1. create/open your primary repo in cursor
  2. add other related repos via file -> add folder to workspace...
  3. order the repos with your most-used repo at the top and your least-used at the bottom (semantic search scans your workspace file tree from top to bottom, so the top repo's results surface first)
  4. save the workspace file so you can reopen the exact same context each day
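the saved workspace file is plain json (cursor uses the same `.code-workspace` format as vs code). a minimal sketch, assuming three sibling repos named data, backend, and ui (swap in your own paths):

```json
{
  "folders": [
    { "path": "../data" },
    { "path": "../backend" },
    { "path": "../ui" }
  ],
  "settings": {}
}
```

the order of entries in `folders` is the order repos appear in the file tree, which is exactly the ordering step 3 cares about.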

cursor settings i rely on (what each one does)
#

these are not exotic settings. they are just my defaults.

starter settings.json:

```json
{
  "workbench.colorTheme": "Cursor Dark",
  "editor.fontFamily": "JetBrains Mono, Menlo, Monaco, 'Courier New', monospace",
  "editor.fontSize": 14,
  "editor.fontLigatures": true,
  "editor.formatOnSave": true,
  "files.autoSave": "onFocusChange",
  "editor.inlineSuggest.enabled": true,
  "editor.minimap.enabled": false,
  "editor.rulers": [100],
  "terminal.integrated.defaultProfile.osx": "zsh"
}
```

what each setting does and why i keep it:

| setting | what it does | why i use it |
|---|---|---|
| `workbench.colorTheme` | controls editor color theme | i use cursor dark because contrast is good without being harsh on long sessions |
| `editor.fontFamily` | picks the code font stack | i keep a clean mono font stack so rendering is predictable on any machine |
| `editor.fontSize` | controls code text size | 14 is my readability/screen-density balance |
| `editor.fontLigatures` | enables ligatures in supported fonts | helps quick symbol parsing, especially in sql and typescript |
| `editor.formatOnSave` | auto-formats when saving | eliminates style drift and manual formatting overhead |
| `files.autoSave` | auto-saves files by trigger | onFocusChange is safer than after-delay for active edits |
| `editor.inlineSuggest.enabled` | enables inline ai suggestions | keeps quick local iteration fast |
| `editor.minimap.enabled` | shows/hides code minimap | i disable it to reduce visual noise |
| `editor.rulers` | vertical line markers | keeps me honest on line length |
| `terminal.integrated.defaultProfile.osx` | chooses terminal shell profile | ensures my shell scripts and aliases behave as expected |

keybindings i use constantly
#

these are the shortcuts i use hundreds of times a day:

| action | mac | why this matters |
|---|---|---|
| toggle ai sidepanel | cmd+i or cmd+l | fastest path to agent context |
| command palette | cmd+shift+p | entry point for almost every cursor power feature |
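the defaults above work out of the box, but if you want to customize, cursor reads a `keybindings.json` in the same format as vs code. a small sketch (the command id shown is the standard one for the command palette; the binding itself is illustrative):

```json
[
  {
    "key": "cmd+shift+p",
    "command": "workbench.action.showCommands"
  }
]
```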

model selection strategy (my opinionated default)
#

my model strategy is simple:

  • use the biggest, baddest model i can afford for all tasks
  • again, i recognize my privilege here that i have effectively unlimited access to all of the most powerful models

your model strategy should probably be something a little more like:

  • use auto / composer for most tasks
  • switch to a stronger model for architecture or debugging-heavy work
  • switch to a faster/cheaper model for repetitive edits, formatting, or broad scans

this keeps quality high without letting cost run wild.

project-level ai behavior: rules, agents.md, and skills
#

this is the part that made the biggest difference for me. everything above is editor configuration. everything below is what turns the ai from “generic assistant” into something that actually knows your project.

rules
#

rules are persistent instructions that live in your project. i use them to encode the stuff i got tired of repeating, such as team conventions, naming patterns, and “do not do this” guardrails.

example rule (.cursor/rules/project-standards.mdc):

```markdown
---
description: "project coding and validation standards"
alwaysApply: true
---

- keep edits scoped to the requested task
- follow existing naming and folder patterns
- run project validation checks for substantive changes
```

agents.md
#

AGENTS.md is a markdown file at the root of your repo. i use it for plain-language context that the agent should always have, including project architecture, how repos relate, and what skills exist.
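a minimal sketch of what mine looks like (the repo names and details here are illustrative; write yours in plain language):

```markdown
# agents

## architecture
- three repos in this workspace: data, backend, ui
- they combine into one web application; data models feed the backend api, which serves the ui

## conventions
- enforced standards live in .cursor/rules/
- reusable runbooks live in .cursor/skills/ — check there before improvising a workflow
```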

skills
#

skills are step-by-step runbooks for tasks you do over and over. i have one for creating a new data model that handles the sql, the docs, the tests, and the validation summary all in one pass.

example skill (.cursor/skills/release-checklist/SKILL.md):

```markdown
# release checklist

1. run lint, format, and tests
2. list any failing checks with exact remediation
3. summarize risk areas and edge cases
4. provide release notes in markdown
```

context hygiene: ignore what should not be indexed
#

cursor respects .gitignore, and you can add .cursorignore for extra exclusions.

this matters for two reasons:

  • signal quality (so the ai is not reading garbage)
  • safety (so it does not index your secrets, generated files, or vendor blobs)

starter .cursorignore:

```
node_modules/
dist/
build/
coverage/
*.min.js
.env*
```

the art of the @ mention
#

even with a clean index, you do not want the ai guessing which files matter. i rely heavily on explicit @ mentions in my prompts.

  • @Files to pull in specific files
  • @Folders to give the ai a boundary to look within
  • @Web when i need it to read a specific documentation page that is not in my workspace

my rule of thumb is that if i already know where the answer lives, i point the ai directly to it. do not make the ai go on a scavenger hunt.
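a concrete example of what this looks like in practice (the file and folder names here are hypothetical):

```text
using @Folders src/models and @Files src/models/orders.sql, add a
status column to the orders model and update its tests. do not change
anything outside src/models.
```

the shape matters more than the wording: a scope boundary, the exact files, and an explicit "do not touch" line.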

when the ai gets stuck (troubleshooting)
#

no setup prevents every failure. when the ai starts hallucinating or going in circles, here is what i actually do:

  1. stop arguing with it. if it fails twice on the same thing, going back and forth just fills the context window with noise. i learned this the hard way
  2. start a new chat. drop the baggage. open a fresh session with only the specific files you need attached
  3. switch models. sometimes the model is just stuck in a rut. swapping to a different one breaks the pattern more often than you would expect

mcp servers in my setup (what each one actually gives you)
#

mcp (model context protocol) is what lets cursor agents talk to my other tools. for how i turn jira and github into a documentation-first loop with those integrations, see a practical ai workflow: jira, github, and mcp. here are the servers i use and what each one actually does for me:

| mcp server | what it enables | when i use it | docs |
|---|---|---|---|
| github | repo/issue/pr context and actions | code review, pr drafting, issue triage, release hygiene | github mcp server |
| atlassian | jira + confluence context and updates | converting specs to tickets, reading decisions, status updates | atlassian rovo mcp |
| dbt labs | dbt metadata, lineage, semantic layer, dbt actions | model design, lineage checks, docs generation, build/test help | dbt mcp |
| snowflake | governed data access and mcp-native tools | query validation, analytics checks, warehouse-backed exploration | snowflake mcp |
| figma | design-system + frame context for implementation | translating designs to components with less guesswork | figma mcp server |
| slack | search/read/post capabilities in slack context | finding decisions, summarizing threads, drafting updates | slack mcp server |
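servers like these are registered in a `.cursor/mcp.json` file under an `mcpServers` key. a sketch of the shape, using the reference github server as an example (the exact package name and env var come from each server's own docs — check them rather than trusting this verbatim):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "use-a-least-privilege-token"
      }
    }
  }
}
```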

a word on security:

  • any mcp tool that can write or change things is real power, not a toy
  • use least privilege, prefer oauth over api keys sitting in plaintext, and make sure destructive actions require you to say “yes” before they fire

theme choice
#

i prefer dark everything, so my theme in cursor is cursor dark, of course.

replicate this setup in 45 minutes
#

if you want the practical version, do this in order:

  1. install/update cursor from cursor downloads
  2. create one workspace and add all related repos
  3. paste the starter settings.json and tweak font/theme
  4. set your core shortcuts (cmd+i, cmd+l, cmd+shift+p)
  5. add one always-on project rule for coding standards
  6. add one skill for your most repeated workflow
  7. add a .cursorignore file
  8. connect mcp servers one by one (github first is usually easiest)
  9. verify each integration with one safe test prompt
  10. document your final setup so your team can copy it

verification prompts you can copy
#

these are good smoke tests after setup:

  • “scan this workspace and summarize architecture in 10 bullets”
  • “list project conventions i should follow before i edit code”
  • “for this dbt project, explain lineage for model x and suggest tests”
  • “summarize open engineering tickets related to this feature”
  • “draft a release update message from recent merged prs”

results
#

what this actually gives me day to day:

  • i start tasks faster because the ai already knows my project
  • i almost never retype the same instruction twice
  • the ai can reason across repos, which catches things i would miss
  • handoffs are cleaner and i have far fewer “oh no, i forgot about that dependency” moments

closing
#

do not copy my setup 1:1. that is not the point.

the point is to try things, keep what removes friction, and ruthlessly delete what adds ceremony. the best setup is not the fanciest one. it is the one where the loop between “i want this” and “it is done and verified” stays as short as possible.

faq
#

what should i set up first if i only have 20 minutes?
#

one workspace with your most-used repos and your core shortcuts. then write one always-on rule so the ai knows your project constraints from the first prompt. everything else can wait.

what is the most common failure mode with this setup?
#

context drift. if you do not tell the ai which files or folders matter, it burns tokens wandering around your codebase instead of solving the thing you actually asked about. explicit @ mentions fix this almost instantly.
