Intermediate · ⏱ 45 minutes hands-on · 10 min read

Cursor 3 Agents Window — How to Set It Up and Actually Use It

Cursor 3 shipped a parallel agent workspace on April 2, 2026. Here's how to configure the Agents Window, run multi-repo agents, and decide when NOT to use it.

cursor · ai-coding · agents · ide · developer-tools · Apr 5, 2026
Prerequisites
  • Cursor 3.0 installed
  • Git installed and configured
  • A project with at least basic test coverage (required for parallel agents)
Tools used: Cursor 3.0, Git
Last tested: 2026-04-05

TL;DR

  • Activate the Agents Window with Cmd/Ctrl+Shift+P → Agents Window — your existing IDE stays open and accessible
  • Use /worktree before starting any parallel agent work — without it, agents will overwrite each other’s changes
  • Design Mode (Cmd+Shift+D / Ctrl+Shift+D) accelerates single-component visual feedback; it breaks down on stateful interactions and multi-select flows
  • Cloud agents produce demos and screenshots for verification; pull to local for refinement without losing session context
  • Do NOT use the Agents Window for single-file edits, exploratory refactors with undefined scope, or projects without CI coverage

Most developers who upgraded to Cursor 3 spent five minutes in the Agents Window and went back to Composer. Not because the setup is hard — it isn’t — but because running parallel agents requires a different mental model before you touch a single keyboard shortcut. The interface is the easy part. Knowing what to hand off, and how to review it when it comes back, is the actual problem.

This guide is for senior engineers who want to understand what they’re handing off before they hand it off. We’ll go through exact setup steps, the configuration that keeps parallel agents from clobbering each other, the /worktree isolation model, Design Mode’s real scope, and the specific scenarios where the Agents Window creates more work than it saves.

The Mental Model First

Cursor rebuilt its interface from scratch around a single premise: most code will be written by agents, and your job is to orchestrate, review, and merge — not write every line. The Agents Window is the control center for that workflow. Multiple agents, running in parallel, across local machines, cloud sandboxes, git worktrees, and remote SSH, all visible in a single sidebar.

That sounds appealing. Here’s the problem it creates: when agents run in parallel across repos, your review surface explodes. Before you spin up five agents, you need a clear answer to “who reviews what, and in what order?” Teams that design that review workflow upfront get real leverage from this. Teams that skip it ship subtle cross-agent regressions and spend a week untangling them.

The Agents Window does not solve the review problem. It makes the review problem more urgent.

Step 1: Activate the Agents Window

Open Cursor 3.0 and hit Cmd+Shift+P on macOS (or Ctrl+Shift+P on Windows/Linux), type “Agents Window,” and select it. That’s the entire activation sequence.

The Agents Window opens as a standalone workspace, separate from the VS Code-based IDE you’re used to. Your existing IDE session stays intact — you can switch between them at any time, or run both simultaneously. Nothing from your current session is lost when you open the Agents Window.

All agent sessions — local, cloud, remote SSH, and anything kicked off from mobile, web, Slack, GitHub, or Linear — appear in the sidebar. This is the consolidated view Cursor has been building toward.

To switch back to the traditional IDE without closing the Agents Window, open the Command Palette again (Cmd/Ctrl+Shift+P) and look for “Switch to IDE” (the exact label may differ between builds). Both interfaces can be open at once. The IDE remains the right tool for tasks where you’re writing code directly; the Agents Window is for tasks you’re delegating.

Step 2: Configure Agent Scope Per Project

Before running any agents in parallel, you need to define what they know and what they’re allowed to touch. Cursor provides two configuration mechanisms stored in your project’s .cursor/ directory.

Rules are persistent instructions the agent sees at the start of every conversation. Keep them focused: the commands to run, the patterns to follow, and pointers to canonical examples in your codebase. Rules are always-on context — they’re not task instructions, they’re standing operating procedures. Store them in .cursor/rules/.

Skills extend what agents can do. Skills are defined in .cursor/skills/ as SKILL.md files and can package custom commands (triggered with / in the agent input), hooks that run before or after agent actions, and domain-specific knowledge the agent pulls in on demand.
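As a concrete sketch, a project’s .cursor/ directory might be laid out like this. The file names and comments are illustrative, not a Cursor-mandated schema; the only parts the article guarantees are the .cursor/rules/ and .cursor/skills/ locations and the SKILL.md file name:

```text
.cursor/
  rules/
    commands.md      # standing instructions, e.g. "run `npm test` before declaring done"
    conventions.md   # pointers to canonical examples, e.g. src/lib/errors.ts
  skills/
    release/
      SKILL.md       # could package a /release command plus a pre-push hook
```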

For multi-repo setups, scope isolation is the critical configuration. Rules defined in repo-A/.cursor/rules/ apply only to agents working in repo A. If your microservices share a dependency — a shared types package, a common utilities library — that dependency’s rules should live in its own .cursor/rules/ directory, not duplicated across repos where parallel agents might receive conflicting instructions.

Step 3: Git Worktree Integration and the /worktree Command

This is the step most guides skip, and it’s the one that determines whether parallel agents help or hurt you.

Without worktrees, multiple agents working in the same repository edit the same branch. They step on each other’s changes, produce conflicts, and generate a debugging session that costs more time than the agents saved. This is not a theoretical risk — it happens immediately when you run two agents on overlapping file paths.

Cursor 3.0 added /worktree specifically to solve this. When you type /worktree in the agent input, Cursor creates a separate git worktree so that agent’s changes happen in complete isolation: separate files, separate index, separate branch history.

# Cursor creates the worktree for you when you use /worktree
# Under the hood, this is equivalent to:
git worktree add ../my-project-feat-auth -b feat-auth
git worktree add ../my-project-fix-perf -b fix-perf

Each agent gets its own branch. File edits and indexes are isolated, so agents can build and test without interference. Branch names map cleanly to tasks — feat-auth, fix-perf, refactor-api — which means your PR history stays readable.
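The isolation is easy to verify yourself. This self-contained sketch builds a throwaway repo under mktemp (paths and branch names are illustrative), creates two worktrees the way the /worktree command would, and shows that an edit in one never appears in the other:

```shell
# Self-contained demo: edits in one worktree are invisible to its siblings.
# Everything runs in a throwaway repo; nothing touches your real code.
set -e
repo="$(mktemp -d)/demo"
git init -q "$repo"
cd "$repo"
git config user.email "you@example.com"
git config user.name "demo"
echo "hello" > app.txt
git add . && git commit -qm "init"

# One worktree and branch per task, mirroring what /worktree sets up:
git worktree add -q ../demo-feat-auth -b feat-auth
git worktree add -q ../demo-fix-perf -b fix-perf

# An edit in the feat-auth worktree...
echo "auth change" >> ../demo-feat-auth/app.txt

# ...does not appear in the fix-perf worktree:
if grep -q "auth change" ../demo-fix-perf/app.txt; then
  echo "leaked"
else
  echo "isolated"
fi
```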

The shared dependency problem: When your project requires node_modules (or any other installed dependencies), you must install them in each worktree before starting an agent. The worktree does not inherit node_modules from the main working tree.

# Install dependencies in each worktree before running agents
cd ../my-project-feat-auth && npm install
cd ../my-project-fix-perf && npm install
cd ../my-project-refactor-api && npm install

This is the single most common setup failure. Skip this step and your agents will either error immediately or install dependencies themselves in ways that conflict across worktrees.
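One way to catch the failure before an agent does is a preflight check. This is a hypothetical helper, not a Cursor feature: a small POSIX shell function (the name check_worktrees and the example paths are mine) that refuses to proceed if any worktree is missing its dependencies:

```shell
# check_worktrees: fail fast if any listed worktree is missing node_modules.
# Hypothetical helper -- run it before kicking off agents in those worktrees.
check_worktrees() {
  for wt in "$@"; do
    if [ ! -d "$wt/node_modules" ]; then
      echo "missing deps in $wt -- run: (cd $wt && npm install)" >&2
      return 1
    fi
  done
  echo "all worktrees ready"
}

# Usage:
# check_worktrees ../my-project-feat-auth ../my-project-fix-perf
```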

The merge flow: Once an agent completes its work and CI passes, merge back to main:

git merge feat-auth
# Clean up the worktree after merging
git worktree remove ../my-project-feat-auth

Review the diff on the branch before merging, not after. Each worktree produces a clean, attributable diff — this is the review surface you’ll be working with.
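That merge-then-clean-up sequence repeats for every agent branch, so it is worth scripting. Here is one sketch as a shell function; the name finish_branch is mine, not a Cursor command, and it assumes you call it from your main working tree after review and CI have passed:

```shell
# finish_branch <branch> <worktree-path>: merge an agent branch into the
# current branch, then remove its worktree and delete the branch.
# Hypothetical helper -- assumes you are on main with a clean tree.
finish_branch() {
  branch="$1"
  wt="$2"
  git merge --no-edit "$branch"   # review the diff before calling this
  git worktree remove "$wt"
  git branch -d "$branch"         # -d refuses to delete an unmerged branch
}

# Usage:
# finish_branch feat-auth ../my-project-feat-auth
```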

Step 4: Running Agents in Parallel with Agent Tabs

Once your worktrees are configured, you can run multiple agent sessions simultaneously. Agent Tabs let you view multiple conversations at once — side-by-side, in a grid, or stacked. Each tab is an independent session with its own context, model selection, and execution environment.

For non-trivial tasks where model choice materially affects output quality, use /best-of-n. This command runs the same task in parallel across multiple models, each in its own isolated worktree, then surfaces results side-by-side for comparison. Cursor will suggest which solution it considers strongest, but the selection is yours. A refactoring prompt might produce cleaner code from one model and better test coverage from another — /best-of-n surfaces that trade-off directly instead of making you run sequential experiments.

Use /best-of-n specifically when you’re uncertain which model handles your task type better — not as a default for every task. The compute cost scales linearly with the number of models you run in parallel.

The default agent model in the Agents Window is Composer 2. For current benchmark scores and pricing, check Cursor’s official model documentation — these figures update with each release and any numbers published here would go stale quickly.

Step 5: Design Mode for UI Work

Activate Design Mode with Cmd+Shift+D on macOS or Ctrl+Shift+D on Windows/Linux. This toggles a browser overlay in the Agents Window where you can click on any UI element and annotate it directly — drawing on the preview to indicate layout changes, spacing adjustments, or visual issues. Annotations are passed to the agent as prompt context.

Design Mode is genuinely useful for one thing: eliminating the “describe the third button in the second card on the settings page” problem. Instead of verbose text descriptions of what to change, you click the element and annotate it. For single-component visual feedback loops — padding adjustments, color changes, layout tweaks — this is a real acceleration.

It is not useful for:

  • Multi-select flows: You cannot hold Shift and select multiple components simultaneously. Selection is also finicky in practice — clicking adjacent elements often misfires.
  • Stateful interactions: If the UI element you need to annotate only appears after a sequence of user actions (login → dashboard → modal → nested form), Design Mode cannot navigate to it for you. Your dev server needs to be running and the state needs to be accessible directly.
  • Anything requiring assertions: Design Mode shows what the UI looks like; it cannot verify behavior, accessibility, or interaction correctness.

Before using Design Mode, your dev server must be running. Start your application locally (or connect via remote SSH), then toggle Design Mode to begin annotating. If the application state you need to annotate sits behind a multi-step flow, set that state up manually before activating Design Mode.

Step 6: Cloud Agents and the Local Handoff

Cloud agents run in Cursor’s managed sandboxes. When a cloud agent completes a task, it produces demos and screenshots of its work — this is the verification artifact. You review the demo, confirm the agent did what you asked, and decide whether to ship it, iterate on it, or pull it to local.

The handoff to local is designed to be fast. Pull a cloud session to local when you want to make edits, run tests yourself, or debug something the cloud agent got wrong. The session moves to local without losing context — the agent’s full conversation history and the state of its work transfer with it.

The practical workflow looks like this:

  1. Assign a well-scoped task to a cloud agent (multi-file refactor, new endpoint with tests, migration script)
  2. Walk away — cloud agents can run for hours
  3. Review the demo or screenshots when the agent flags completion
  4. If the output passes visual review, run CI
  5. If CI passes, merge the worktree branch
  6. If the output needs iteration, pull to local, make targeted adjustments, re-run

The key framing: cloud agents produce verification-enabling output, not production-ready output. A demo and a screenshot are evidence that the agent completed something — they are not a substitute for CI and code review.

When NOT to Use the Agents Window

Knowing when to stay in Composer is as important as knowing how to use the Agents Window.

Single-file tasks. The Agents Window adds orchestration overhead — worktree creation, session management, reviewing in tabs. For a change that touches one file, open Composer, make the edit, done. The overhead is not worth it.

Exploratory refactors with unclear scope. Agents optimize for known goals. If you’re in the middle of “I’m not sure how I want to restructure this auth system,” an agent will produce a confident, wrong answer faster than you can evaluate it. Work through the design yourself first. Give the agent a scoped task once you know what you want.

Projects without CI or test coverage. This one is non-negotiable. The agent “performs better when it has a compiler or test runner to tell it when it’s wrong.” Agents running in parallel, modifying code autonomously, with no external feedback signal about correctness, will produce code that compiles and fails at runtime in ways that are expensive to debug. Without that feedback loop, you’re reviewing blind. Don’t use parallel agents on codebases where you can’t verify correctness programmatically.

When review capacity is already saturated. Every agent you spawn produces a branch to review. If your team is already struggling to review PRs from human engineers, adding five more agent-generated PRs does not solve the problem. It amplifies it.

One senior developer’s assessment from the Every.to review published April 4: “3.0 feels unfinished — the harness is too aggressive, sessions break, and the interaction model is inconsistent. But they’re moving fast, and if they smooth out the rough edges, this could be a genuine alternative to CLIs for people who aren’t comfortable in a terminal.” That’s an accurate read of where this is right now. This is not a polished v1 — it’s a fast-moving v0 of a genuinely new interaction model. If session stability matters for your workflow, test it on a non-critical project before committing.

Design Your Review Workflow Before You Scale Agents

The Agents Window is the first IDE feature that takes multi-agent orchestration seriously at the UI layer — not as a plugin, not as a beta flag, but as the default interface going forward. For teams running more than two concurrent feature branches, it changes the daily workflow calculus. But the teams that get 10x leverage from this are not the ones who spin up the most agents. They’re the ones who figured out the review workflow first.

Before you run more than two parallel agents, answer these questions explicitly:

  • Who reviews which agent’s output?
  • What does a passing review gate look like? (CI green? Coverage threshold? Architecture sign-off?)
  • In what order do branches get merged, and who resolves cross-agent conflicts?
  • What happens when an agent’s branch conflicts with another agent’s merged changes?

These are not tool questions. They are process questions. The Agents Window surfaces them faster than any previous IDE, but it does not answer them for you.

Get the review workflow designed first. Then scale the agents.