• Sumit Sute
Mar 12, 2026
11 min read
My AI is Smarter Than Me
Observations from my early experiments with AI agents: why I'm moving away from 'fix-it' commands and toward 'explain-this' conversations.
#ai
#reflections
#typescript
#frontend
#architecture

The Vending Machine Problem

"The fastest way to stop learning is to always get the right answer."

I have a confession. Somewhere in the last year, I became a very efficient vending machine operator. Put in a PR review comment, press the agent button, receive a neatly wrapped fix. Repeat until all comments are resolved. Mark as done. Ship.

The code was correct. The tests passed. The reviewer approved. And I, the supposed engineer behind all of it, had absorbed roughly the same amount of knowledge as the chair I was sitting on.

Here's the weird part: the agent was right there, ready to teach me everything I was skipping. I just never asked it to. I kept saying "fix this" when I could have been saying "explain this to me like I'm going to have to defend it in a meeting." Same tool. Completely different outcome.


The Three Stages of Agent-Assisted Denial

I've noticed a pattern in how I process code review comments when an agent is within arm's reach. It goes something like this:

Stage 1: Scanning. I read the comment. I feel a small pang. ("Oh, right. I should have caught that.") This stage lasts approximately 0.4 seconds.

Stage 2: Delegation. I paste the comment into the agent. ("Fix this.") The pang dissolves. Someone competent is handling it.

Stage 3: Approval. The agent produces a diff. I skim it. It looks right. It probably is right. I push. The pang is now a distant memory, like that gym membership I keep meaning to use.

The thing is, there's a Stage 2.5 that I kept skipping. The part where, after reading the comment, you ask the agent: "Wait, before you fix this, can you explain why the reviewer thinks this is wrong? What principle is this comment pointing at?"

That question changes everything. Instead of a diff, you get a reasoning chain. Instead of a fix, you get a lesson. And the agent is absurdly good at this. It can trace a reviewer's concern back to first principles, show you three real-world scenarios where ignoring that principle causes pain, and then fix the code. All in one go.

I just never asked.


What Reviewers Are Actually Selling

"A review comment is a gift wrapped in mild disapproval."

Not all review comments are created equal. Some are bugs. Some are formatting nitpicks. But the best ones, the ones that sting a little, are compressed experience. A senior engineer packing years of scar tissue into two sentences.

Here's one I keep running into. I'll add something helpful to a server response. A list of which items failed to process, a status field showing what the backend is doing, some internal detail that the frontend could theoretically display. Feels thoughtful, right? Proactive. The kind of thing you'd put on a "going above and beyond" self-review.

The reviewer says: "Remove this. Log it internally. Don't put it in the response."

The first time, I was confused. Why wouldn't we want more information? That's like a restaurant refusing to tell you what's happening in the kitchen. Transparency is good.

But here's the thing about a server response: every field in it is a promise. Not a promise in the JavaScript sense (though also that). A promise in the "other people's code will now depend on this existing forever" sense. The moment you add failureDetails to a response, some frontend component is going to read it, some test is going to assert on it, some new hire is going to build a dashboard around it. And now you can't remove it without breaking things.

That list of failed items? It's an internal hiccup. Today it happens because of a timing issue. Tomorrow you fix the timing issue and the list is always empty. But the field is still there, in your response shape, being dutifully tested and parsed and rendered by code that no longer needs it. You've turned a temporary problem into a permanent feature.
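To make the promise concrete, here's a minimal TypeScript sketch. The `ProcessResponse` shape and its `failureDetails` field are hypothetical stand-ins for that "helpful extra field":

```typescript
// A hypothetical response shape. The "helpful" extra field is optional
// on paper, but consumers rarely treat it that way.
interface ProcessResponse {
  processedCount: number;
  // Added "to be helpful" during a transient timing issue...
  failureDetails?: string[];
}

// ...and six months later, code like this exists. Now the field can't be
// removed without breaking a consumer, even after the timing issue is
// long fixed.
function renderSummary(res: ProcessResponse): string {
  const failures = res.failureDetails ?? [];
  return failures.length > 0
    ? `Processed ${res.processedCount}, failed: ${failures.join(", ")}`
    : `Processed ${res.processedCount}`;
}
```

The moment `renderSummary` ships, `failureDetails` stops being an implementation detail and becomes part of the contract.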

The reviewer knew this. Not because they're smarter, but because they've lived through the "helpful extra field" becoming the "field we can't remove without a migration." They've been burned. The comment was the burn, repackaged as two polite sentences.

Now here's what I've started doing: when I get a comment like this, I don't just fix it. I ask the agent to explain the consequences. "If I keep this field in the response, what happens in six months? What depends on it? What breaks when I try to remove it?" The agent lays out the dependency chain. The frontend types that reference it. The tests that assert on it. The documentation that mentions it. Suddenly the reviewer's two-sentence comment becomes a whole worldview I can see.

I've started keeping a list of these lessons, one sentence each.

It's not fancy. It's barely a system. But it means the lesson lands in my head, not just in the diff.


The Missing Question

"The agent will give you exactly what you ask for. The tragedy is in what you don't ask."

Let me describe something embarrassing. I had a form component that fetched items by ID. One ID, one network request. Five IDs? Five requests, fired off in parallel. It worked. It was even kind of satisfying to watch in the Network tab, all those little waterfall bars.

The reviewer said: "Use a single list call with an ID filter."

The agent refactored it. Replaced five parallel fetches with one call that said "give me items where ID is in this list." Clean, fast, done.

A week later, a different component. Same pattern. Five IDs, five requests. I didn't even notice until the reviewer pointed it out. Again.

This is the part where past-me would have beaten himself up. But current-me did something different. I pasted both instances into the agent and asked: "Why is a single list call better than multiple parallel requests? Walk me through it."

And the agent did something beautiful. It didn't just say "fewer network calls." It explained that "give me these five items" is one thought. One intention. You're asking the server for a set, not for five individuals. When you model it as five separate requests, you're lying about what you actually want. And lies in code have consequences: you need error handling for each individual request, you need to stitch the results back together, you need to handle partial failures where three come back and two don't. But if it's one request for a set, the server handles the set logic. Your code just says what it means.
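Here's a sketch of the two shapes in TypeScript. The `getOne` and `list` functions are hypothetical stand-ins for whatever client you're using; the point is the shape of the request, not the API:

```typescript
// Hypothetical item type; substitute whatever your API returns.
interface Item { id: string; name: string }

// Before: five IDs, five requests, five failure modes, and manual
// stitching of the results.
async function fetchEach(
  ids: string[],
  getOne: (id: string) => Promise<Item>
): Promise<Item[]> {
  return Promise.all(ids.map((id) => getOne(id)));
}

// After: one request that says what we actually mean — "give me this set".
// The server handles the set logic, and missing items fall out naturally
// by comparing what we asked for against what came back.
async function fetchSet(
  ids: string[],
  list: (filter: { idIn: string[] }) => Promise<Item[]>
): Promise<{ items: Item[]; missing: string[] }> {
  const items = await list({ idIn: ids });
  const found = new Set(items.map((i) => i.id));
  const missing = ids.filter((id) => !found.has(id));
  return { items, missing };
}
```

Note that `fetchSet` gets partial-failure handling almost for free, while `fetchEach` would need a try/catch per request just to know which IDs failed.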

That explanation changed how I think about API calls in general. Not "how do I make this faster" but "what am I actually asking for?" Am I asking for a set of things, or am I asking for individual things one at a time? The answer changes the code shape.

I would never have gotten there by just accepting the fix. I got there by asking the agent to be a teacher instead of a typist.


Scope Discipline, or: Why My PRs Look Like Moving Day

I have a bad habit. When I'm deep in a file fixing one thing, I notice three other things that could be better. A variable name that bugs me. Some formatting that's inconsistent. A piece of logic that could be slightly more elegant. And because I'm already here, why not just clean it up?

This is how a "support multiple IDs instead of one" change turns into a PR that also refactors error handling, restructures some unrelated data shapes, and renames variables in a function you weren't even supposed to be touching. It's like going to the store for milk and coming home with a new lamp, a yoga mat, and a strong opinion about kombucha.

Reviewers notice. Not because any individual change is bad, but because they can't tell what's the point and what's the detour. They're reading your diff like a mystery novel, trying to figure out the plot, and you've thrown in three subplots and a flashback.

I started asking the agent to help me with this before I push. "Look at my diff. Which changes are related to the main feature and which are drive-by cleanups?" The agent highlights the unrelated bits. Every time, I'm a little surprised by how many there are. It turns out "while I'm here" is the most dangerous phrase in software engineering, right after "it works on my machine."

Now I have a test: if I need the word "also" to describe my PR, it's two PRs.

The agent is genuinely helpful here. Not as a fixer, but as a mirror. It shows you what your diff actually looks like from the outside. Turns out, it looks like moving day.


Duplication Is a Postcard from Your Future Self

"Copy-paste is how code asks for help."

Here's a pattern that keeps teaching me things. You have two components that do similar work. Over time, they each grow their own version of the same logic: clean up an ID string, remove duplicates from a list, fetch some data, handle the case where some items don't exist anymore. The implementations are almost identical but not quite. One removes bad IDs and warns the user. The other just quietly pretends everything is fine.

The agent is excellent at spotting this. Ask it to compare two files and it'll lay out a side-by-side of every duplicated pattern, every subtle divergence, every place where one component handles an edge case the other doesn't. It's like having a very patient detective who reads code instead of crime scenes.

But the real lesson isn't in the extraction. It's in the question: why did this duplication exist in the first place?

It existed because I built the second component without thinking about what it had in common with the first. I just started coding, and naturally reinvented the same patterns, slightly worse. If I'd paused and asked, "does this component need the same data flow as the other one?", I would have designed the shared behavior first, as a deliberate thing with a name and a shape.

Now when I'm about to build something that smells similar to something else, I ask the agent: "Here's component A. I'm about to build component B, which does something similar. What behavior should be shared? What should be different? Help me design the common contract before I start."

The agent comes back with three bullet points:

  • How both components fetch data (one list call, filter by IDs)
  • How both detect missing items (compare what you asked for vs. what came back)
  • How both surface failures to the user (remove the bad IDs, show a warning)

Three bullet points. Fifteen seconds to read. And now the implementation is just filling in the blanks instead of reinventing the wheel. Slightly worse. Again.
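Those three bullets translate almost directly into a small shared contract. A sketch, with hypothetical names:

```typescript
// The shared contract, designed before either component is built.
// Each method mirrors one of the three bullets above.
interface SharedItemLoading<T extends { id: string }> {
  // 1. One list call, filtered by IDs — never N parallel fetches.
  fetchByIds(ids: string[]): Promise<T[]>;
  // 2. Missing detection: what was asked for minus what came back.
  findMissing(requested: string[], received: T[]): string[];
  // 3. Failure surfacing: drop the bad IDs, warn the user.
  onMissing(missing: string[]): void;
}

// Both components run against this one flow instead of reinventing
// each step slightly differently.
async function loadItems<T extends { id: string }>(
  loader: SharedItemLoading<T>,
  ids: string[]
): Promise<T[]> {
  const items = await loader.fetchByIds(ids);
  const missing = loader.findMissing(ids, items);
  if (missing.length > 0) loader.onMissing(missing);
  return items;
}
```

Component A and component B now differ only in the three implementations they plug in, which is exactly the "what should be different" half of the question.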


Draw the State Machine, You Coward

"If you can't draw it, you don't understand it."

I have a theory: every bug in a complex form component exists because someone didn't draw the state machine. I'm someone. I didn't draw it. There were bugs.

Here's what I mean. You have a form that loads data based on some IDs the user selected. Sounds simple. But then: what if the user changes their selection while you're still loading the previous one? What if one of the items they selected has been deleted? What if you need to refresh one item without refetching everything?

Each of these is a state your component can be in. And each has transitions: things that move you from one state to another. If you don't map them out, you'll handle the happy path and miss everything else. The "I'm still loading when new IDs come in" case. The "three out of five items came back, what about the other two" case.

This took me five minutes to draw after the fact. After the agent had already fixed all the bugs. After the reviewer had already pointed out the missing abort handling and the inconsistent error paths.

Here's the move I'm making now: before I write any component with async data and multiple possible states, I ask the agent to help me sketch the state machine. "Here's what this component needs to do. What are the states? What are the transitions? What happens when two things change at the same time?"

The agent draws the diagram. I read it. I argue with it. ("No, I think 'partial failure' should auto-recover, not just warn.") And by the time I start coding, the state machine is in my head, not in the agent's context window. The code I write is an implementation of something I understand, not a series of patches I accepted.
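Written down, a state machine like this can be as small as a discriminated union plus a transition function. A sketch with illustrative names (none of these types come from a real component):

```typescript
// The states, written out so none of them can hide.
type FormState<T> =
  | { kind: "idle" }
  | { kind: "loading"; requestedIds: string[] }
  | { kind: "loaded"; items: T[] }
  | { kind: "partial"; items: T[]; missingIds: string[] }
  | { kind: "error"; message: string };

// One transition: results arriving. Writing it forces the awkward
// cases — stale responses, partial results — into the open.
function onResults<T extends { id: string }>(
  state: FormState<T>,
  forIds: string[],
  items: T[]
): FormState<T> {
  // A response for IDs we're no longer waiting on is stale: ignore it.
  // This is the "selection changed mid-load" case.
  if (state.kind !== "loading" || state.requestedIds.join() !== forIds.join()) {
    return state;
  }
  // The "three out of five came back" case, handled explicitly.
  const got = new Set(items.map((i) => i.id));
  const missing = forIds.filter((id) => !got.has(id));
  return missing.length > 0
    ? { kind: "partial", items, missingIds: missing }
    : { kind: "loaded", items };
}
```

The payoff of the union type is that the compiler now nags you about every state a renderer forgets to handle, which is the diagram doing its job at build time.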

The difference matters. Next time I build a similar component, I'll know to draw the diagram first. Not because I memorized a rule, but because I remember how it felt to see all the states laid out, and how obvious the missing transitions were. That's the kind of knowledge that sticks.


The Nourishment Problem

"You can eat without tasting. You can code without learning. Both leave you hungry."

There's a word I keep coming back to: nourishment. Not productivity. Not velocity. Nourishment.

An agent makes you productive. It resolves comments, writes tests, extracts utilities. Your throughput goes up. Your PR turnaround time drops. On every metric that matters to a sprint board, you look amazing. You are, by all visible measures, a machine.

Except you're not a machine. You're an engineer who wants to get better. And "better" doesn't mean "faster at accepting diffs." It means developing the instinct that prevents the review comment in the first place. The sense that says "this response shape is going to cause problems in six months" before anyone has to tell you. The reflex that designs the shared utility before building the second component that needs it.

That instinct, that sense, that reflex, none of it comes from watching an agent work. It comes from struggling with a problem yourself, making the wrong choice, understanding why it was wrong, and carrying that understanding into the next decision. The struggle is the nutrition. The agent's perfectly formatted diff is the equivalent of watching a cooking show and calling it dinner.

But, and this is the part I got wrong for months, the agent can also be the one who helps you struggle productively. It's not just a diff machine. It's a reasoning engine. You can ask it to explain why your approach was wrong. You can ask it to show you what would break in three months. You can ask it to walk you through the trade-offs between two designs and let you pick. You can even ask it to quiz you: "Here's a similar scenario. What would you do?"

The difference between an agent that makes you productive and an agent that makes you better is exactly one word: why.

"Fix this" makes you productive.

"Why is this wrong, and what principle should I remember?" makes you better.


A Slower, Better, Still-Very-Much-Agent-Assisted Loop

I'm not quitting agents. That would be like quitting spell-check because you want to "really learn" spelling. Romantic, but stupid. The agent stays. I'm just changing what I ask it to do.

Here's my new loop when I get a review comment:

First, I ask the agent to explain the comment. Not fix it. Explain it. "What is this reviewer actually concerned about? What principle is behind this comment? Give me a real-world scenario where ignoring this advice causes pain." This takes thirty seconds to read and is worth more than the fix.

Then, I try to predict the fix. Before the agent writes any code, I describe in plain English what I think the solution should be. Sometimes I'm right. Sometimes I'm hilariously wrong. Either way, the act of predicting forces me to engage with the problem instead of outsourcing it.

Then, I ask the agent to implement it and explain the choices. "Fix this, and walk me through why you structured the solution this way." Now I'm reading the diff with context. I understand why the utility was extracted, why the error handling is symmetric, why the fetch pattern changed. The diff is a lesson, not a mystery.

Finally, I write one sentence. The thing I learned, stated simply enough that I'll remember it next time. "Server responses are promises. Don't put temporary problems in a permanent promise." That sentence, ridiculous as it sounds, is the whole point. The code fix solves today's PR. The sentence solves every future PR where the same principle applies.

The loop is slightly slower. Maybe five extra minutes per comment. But five minutes of understanding beats five seconds of delegation every time. And the agent is doing most of the heavy lifting, it's just lifting knowledge into my head instead of code into the repo.


What I'm Practicing

I won't call these takeaways. They're more like stretches. Things I'm doing more of, not things I've mastered.

  • Ask the agent "why" before "fix." The explanation is worth more than the diff. Every review comment points at a principle. Ask the agent to name it, explain it, and show you what happens when you ignore it. Then fix it.

  • Scope is a kindness. One idea per PR. If you need the word "also" to describe it, split it. Ask the agent to flag unrelated changes in your diff before you push. Your reviewer's attention is a gift. Don't bury it under your side quests.

  • Responses are promises. Every field you put in a server response is a field someone else's code will depend on. It becomes a permanent feature of the system, not just a nice-to-have. If it's a temporary operational detail, keep it internal. Log it. Alert on it. Don't make it part of the shape other code relies on.

  • Duplication is a missing design. When two components independently invent the same behavior, the problem isn't the code, it's the missing shared contract you didn't design first. Ask the agent to compare the two files. Design the shared behavior in three bullet points. Then build.

  • Draw the state machine. If your component has async data, multiple failure modes, and things that can interrupt other things, it's a state machine whether you drew it or not. Ask the agent to help you sketch it. Argue with the diagram. The transitions you catch on paper are the bugs you don't ship.

  • Use the agent as a tutor, not just a typist. Ask it to explain trade-offs. Ask it to predict consequences. Ask it to quiz you. The gap between "the agent knows why" and "I know why" is the gap between being productive and being capable. Close it by asking better questions, not fewer.

I'm not there yet. Last week I caught myself pasting a review comment into the agent before I'd even finished reading it. Old habits die hard. But then I stopped, backed up, and typed "explain this to me first." And the agent, patient as ever, did. And I learned something I would have missed.

That's the whole trick, really. Same tool. Better questions. Slightly less comfortable. Significantly more nourishing.
