
First Real Dive into Cursor Sub-Agents — and I’m Genuinely Impressed

  • Shreyas Dhond
  • Feb 8
  • 2 min read

I recently moved beyond casual experimentation with Cursor and spent time seriously testing its sub-agent capabilities. Rather than prompting it incrementally or steering every decision, I wanted to understand how it performs when given ownership over an end-to-end outcome.

So I designed a simple but revealing experiment.


The Setup

The task was intentionally scoped but non-trivial:

Build a multi-player Tic-Tac-Toe game inside a Salesforce org using Lightning Web Components (LWC), such that multiple users can play the same game in real time.

This requires coordination across:

  • Data modeling

  • Backend logic

  • UI state management

  • Security and permissions

  • Deployment and runtime setup


In other words, a small system — not just a component.

I handed Cursor the goal and stepped back.


The Result: A Complete System in ~20 Minutes


Within roughly 20 minutes, Cursor independently built a fully working, end-to-end solution from scratch.


It wasn’t just writing code — it was making architectural decisions and debugging deployment issues along the way.


What It Implemented

  • A custom Salesforce object to persist shared game state across users

  • An Apex controller that:

    • Validates moves server-side

    • Enforces turn order

    • Detects win and draw conditions

  • A Lightning Web Component UI that:

    • Creates and joins games

    • Renders the board dynamically

    • Syncs state across users using polling

  • A permission set to manage access cleanly

  • Automated:

    • Deployment

    • Permission assignment

    • Lightning page creation

    • Custom tab setup


All of this was done through natural language prompts — without explicitly defining the architecture upfront.
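To make the server-side rules concrete: the move validation, turn enforcement, and win/draw detection described above live in the Apex controller. Here is a minimal sketch of that rule logic, written in plain JavaScript purely for illustration — the function names `evaluateBoard` and `validateMove` are mine, not taken from the generated code:

```javascript
// All eight winning lines on a 3x3 board, indexed 0..8 row-major.
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

// Board is a 9-element array of 'X', 'O', or null.
// Returns 'X' or 'O' for a completed line, 'Draw' when the board is
// full with no winner, or null while the game is still in progress.
function evaluateBoard(board) {
  for (const [a, b, c] of LINES) {
    if (board[a] && board[a] === board[b] && board[a] === board[c]) {
      return board[a];
    }
  }
  return board.every((cell) => cell !== null) ? 'Draw' : null;
}

// Server-side move validation: rejects out-of-turn moves and occupied
// cells, and returns the new board rather than mutating the old one.
function validateMove(board, cellIndex, player, expectedPlayer) {
  if (player !== expectedPlayer) throw new Error('Not your turn');
  if (board[cellIndex] !== null) throw new Error('Cell already taken');
  const next = board.slice();
  next[cellIndex] = player;
  return next;
}
```

In the actual solution this logic runs in Apex so that no client can bypass it — the LWC only renders state and submits moves.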




Artifacts

[Embedded images from the original post are not reproduced here.]

Why This Matters More Than Speed

The most interesting part wasn’t how fast this happened. It was how independently it happened.


Cursor didn’t behave like a smart autocomplete or code assist engine. It behaved more like a junior-to-mid-level engineer who:


  • Understands platform conventions

  • Anticipates application complexity

  • Makes reasonable tradeoffs (polling vs real-time events)

  • Owns the full lifecycle of a feature


That distinction matters.
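The polling-vs-real-time-events tradeoff is worth a sketch. A poll loop only needs to re-render when the server copy of the game has actually changed; the version-based change check below is a hypothetical plain-JavaScript illustration of that pattern (a `version` field on the game record is my assumption, not something the post confirms):

```javascript
// Poll-based state sync: re-render only when the polled game state
// has advanced past what the client last saw.
function makePoller(onChange) {
  let lastVersion = -1;

  // Feed each polled server state here; fires onChange only on change.
  function receive(state) {
    if (state.version !== lastVersion) {
      lastVersion = state.version;
      onChange(state);
      return true;
    }
    return false;
  }

  // fetchGameState stands in for an imperative Apex call from the LWC.
  // The returned interval id should be cleared in disconnectedCallback.
  function start(fetchGameState, intervalMs = 2000) {
    return setInterval(async () => receive(await fetchGameState()), intervalMs);
  }

  return { receive, start };
}
```

Polling is the simpler choice here — no Platform Event or empApi subscription to configure — at the cost of a small sync delay, which is a reasonable tradeoff for a turn-based game.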


Sub-Agents Change the Unit of Work


What this experiment clarified for me is that sub-agents change the unit of delegation.

Instead of:

“Help me write this function”

You can now say:

“Build this feature using a team of agents”

That shift has real implications:

  • Faster iteration on ideas

  • Less cognitive load switching between layers

  • More time spent defining what to build instead of how


The human role moves up the abstraction stack — toward intent, constraints, and judgment.


A Glimpse of What’s Coming

This was a deliberately small experiment, but it points toward a much larger shift in how software gets built.


As agents become more reliable at owning bounded problems end to end, the leverage comes from:


  • Framing problems well

  • Defining clear outcomes

  • Knowing when to trust and when to intervene


I’m still early in exploring Cursor’s full capabilities, but this was one of those moments that forces you to rethink your workflow — and your role in it.

More experiments to come.

 
 
 
