I’ve always wanted to understand systems down to their core, every abstraction peeled back, every detail understood from first principles. And then build something elegant from that understanding.

The scholar within me has always fought the builder. One wants to consider every tradeoff; the other wants to ship and see users get real value. Move fast, or understand deeply. It always felt like a choice. Today, it no longer is.

Last month, I learned an entire payments system (Stripe payments + webhooks + entitlements) from the ground up and shipped a working MVP, built to my vision, in a single day. Not by skipping the details, but by actually learning them.
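Learning the details meant things like understanding how Stripe authenticates its webhook events before an entitlement is ever granted. Here's a minimal sketch of that step, assuming Stripe's documented `Stripe-Signature` header scheme (`t=timestamp,v1=HMAC-SHA256("timestamp.payload", secret)`); the secret and payload below are made up:

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Verify a Stripe-style webhook signature header ('t=...,v1=...')."""
    parts = dict(kv.split("=", 1) for kv in sig_header.split(","))
    timestamp = int(parts["t"])
    # The signed message is the timestamp, a dot, then the raw request body.
    expected = hmac.new(
        secret.encode(),
        f"{timestamp}.".encode() + payload,
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(expected, parts["v1"]):
        return False
    # Reject stale timestamps to limit replay attacks.
    return abs(time.time() - timestamp) <= tolerance
```

Only after a payload passes this check should the handler touch the entitlements table; in production you'd use Stripe's own SDK helper for this, but writing it by hand is how the details stick.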

The Catalyst

You wake up in a jail cell abroad. You’re only allowed to call one person you know to get you out of there. Who do you call?

Whoever you think of is the person with high agency.

I didn’t start developing agency until college. Before that, I was just a passenger. The problem is that knowing you can act and actually following through are different skills. I’ve left a trail of half-finished projects behind me to prove it.

AI agents on my computer have shortened the path from thought to action. They don’t give you agency, but they do remove friction. Want to build a rocket? The LLMs will start you on the Tsiolkovsky rocket equation and build from there.
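To make that concrete, the Tsiolkovsky equation itself fits in a few lines; the numbers below are purely illustrative:

```python
import math

def delta_v(exhaust_velocity: float, mass_initial: float, mass_final: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / mf)."""
    return exhaust_velocity * math.log(mass_initial / mass_final)

# A rocket that is half propellant, with 3 km/s exhaust velocity:
dv = delta_v(3000.0, 100.0, 50.0)  # ~2079 m/s
```

Five minutes from "want to build a rocket" to a number you can sanity-check.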

Never in my life have I felt more powerful, more in control of my future. The only barriers to the future of my choosing are time and making the right decisions.

The Compass

At times, software engineering is long hours of debugging a seemingly hopeless issue, alone. Touching many domains at once can feel like being lost at sea.

A good AI is a tireless compass. It can read hundreds of pages of docs, scan thousands of lines of code, and help you form a plan that’s actually executable.

But you must use it actively! Explain your understanding to the AI, attempt to teach it, and let it correct you. You’ll find the gaps in your knowledge faster than you ever would alone.

The Mirror

AI mirrors your standards.

If you’re unsatisfied with only surface-level understanding, AI is a great teacher. Accept slop, and slop is what you’ll build.

If you sit with your own thoughts for long enough, you will understand yourself better. But even knowing your own flaws doesn’t make you immune to repeating them; it simply makes the bad habits easier to fix. LLMs are a great thinking partner for catching unwanted patterns in your thinking, such as over-engineering, feature creep, and lack of critical thinking.

State your communication style to the AI clearly. For example, this is from my AGENTS.md:

- When explaining complex topics, it's vital that you start with first principles thinking.
- Challenge my thinking critically; I care about my critical thinking, and I'd like you to challenge me and teach me.
- Be blunt in your responses, be realistic, don't sugarcoat things, don't be overly agreeable. If something is dumb, just say it's dumb.

So, how do you actually use this?

  • Throw an entire codebase at the agent, have it quiz you, break everything down from first principles, and rebuild it.
  • If you dislike an action done by your agent or yourself, update the global or project level AGENTS.md. Your AI will grow alongside you.
  • Constantly work with your AI to upgrade its own setup. When things go wrong, ask it to review your chat logs. Have it write documentation on things you’ve learned or decisions you’ve made.
  • Argue with the AI, have the AI push back.

My journey to get the scholar and the builder to work together continues, but I feel real progress.

If this has helped you or you’d like to get in touch: [email protected]

My coding agent config (Opencode).