Intent
Start with the actual outcome and the quality bar. The agent needs to know what kind of result would satisfy you.
Deep lesson
Turn AI-assisted work into a disciplined engineering practice.
Agentic engineering is the discipline of turning fuzzy human intent into scoped work packets an agent can execute, verify, and improve. It combines product judgment, software engineering, management, review, and taste.

Build your own agentic engineering playbook with standards, review loops, and reusable skills.
How it works
The best workflow is not “ask once and hope.” It is a repeatable loop: define the work, let the agent operate in a bounded area, demand proof, review the result, and turn lessons into reusable standards.
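The loop described above (define, operate in a bounded area, demand proof, review, standardize) can be sketched as a small control structure. This is an illustrative sketch only; `WorkPacket`, `execute`, and `review` are made-up names standing in for the agent run and the human pass, not any real agent API.

```python
from dataclasses import dataclass, field

@dataclass
class WorkPacket:
    """One bounded unit of agent work (all names here are illustrative)."""
    goal: str
    acceptance_criteria: list[str]
    evidence: dict[str, bool] = field(default_factory=dict)

def run_loop(packet: WorkPacket, execute, review) -> bool:
    """Define -> execute -> demand proof -> review -> continue or stop."""
    execute(packet)  # agent works only inside the bounded scope
    # Demand proof: every acceptance criterion needs recorded evidence.
    proven = all(packet.evidence.get(c, False)
                 for c in packet.acceptance_criteria)
    if not proven:
        return False  # refuse vague completion claims
    return review(packet)  # human taste still gates the result
```

The point of the sketch is the gate: "done" is not a claim the agent makes, it is evidence checked against criteria you wrote before the run started.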
Mental model
Read these four ideas as the vocabulary for agentic engineering. They are the labels you should use when a video explains a tool, habit, or workflow.
Before pressing play, try to predict where each idea appears in the system. That makes watching active instead of passive.
After each video, rewrite one card in your own words. If you cannot simplify it, the concept is not yours yet.
A concrete description of desired behavior, constraints, files, and acceptance criteria.
The smallest unit of work an agent can complete and verify without drifting.
The non-obvious judgment layer: what good feels like for code, design, writing, and product.
A structured pass for bugs, regressions, missing tests, weak assumptions, and product fit.
Learning move: pause when each of these shows up, name it, then write the practical rule it implies.
Two-video prototype
Principled Agentic Engineer
This Coding Tool Kills AI Code Slop
Put it into practice
Use this when you want to practice managing an agent like a serious collaborator.
Act as my agentic engineering coach and implementation partner. I want to practice converting vague intent into a high-quality agent work packet. Use this vague request: "Make my learning app way better and more interactive."

Your job:
1. Ask only the minimum clarifying questions needed. If you can make a reasonable assumption, make it.
2. Rewrite the request as a production-quality task packet.
3. Include:
   - target outcome
   - audience
   - files or surfaces likely involved
   - non-goals
   - design quality bar
   - acceptance criteria
   - verification steps
   - risks
   - what should be shown to the user when done
4. Then implement the smallest useful prototype that proves the task packet works.
5. Verify it with typecheck or browser checks.
6. Finish with a concise report: changed files, what to inspect, what still needs human taste review.

The output should feel like disciplined engineering management, not a generic AI brainstorm.
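The fields the prompt demands in step 3 can be captured as a simple structure you reuse across tasks. This is a hedged sketch: `TaskPacket` and its field names are illustrative, mirroring the prompt above rather than any format a tool mandates.

```python
from dataclasses import dataclass

@dataclass
class TaskPacket:
    # Mirrors the fields the coaching prompt asks for; all names illustrative.
    target_outcome: str
    audience: str
    surfaces: list[str]          # files or UI surfaces likely involved
    non_goals: list[str]
    quality_bar: str
    acceptance_criteria: list[str]
    verification_steps: list[str]
    risks: list[str]

    def is_complete(self) -> bool:
        """A packet is only actionable once the load-bearing fields are filled."""
        return all([self.target_outcome, self.audience, self.surfaces,
                    self.quality_bar, self.acceptance_criteria,
                    self.verification_steps])
```

Writing the packet as data rather than prose makes the quality bar checkable: an empty acceptance-criteria list is a visible defect, not a silent omission.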
Guided watch sequence
Frame the discipline.
Study how low-quality output happens.
Encode review judgment.
Deep read
You are managing a fast junior collaborator. The leverage comes from clear task packets, fast feedback, strong defaults, and refusing vague completion claims.
If you want excellent output, your standards must become prompts, examples, tests, screenshots, rubrics, and reusable skills. Taste that stays in your head cannot guide an agent.
Long autonomous runs feel magical but often hide drift. Strong workflows use small loops: inspect, plan, edit, verify, review, then continue.
Misconceptions
Agentic engineering does not mean handing whole projects to an agent; it means designing work so agents can do bounded pieces well.
Tests catch behavior. Review catches architecture, readability, maintainability, and product judgment.
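The split between tests and review can be made explicit as a second checklist layer that runs after the tests pass. The rubric items below are assumptions chosen to match the dimensions named above, not a standard list.

```python
# Tests answer "does it behave?"; review answers "is it good?".
# Rubric items are illustrative, matching the review dimensions above.
REVIEW_RUBRIC = {
    "architecture": "Does the change fit existing module boundaries?",
    "readability": "Could a new teammate follow this without the PR description?",
    "maintainability": "Are there tests guarding the behavior just added?",
    "product_judgment": "Does the change serve the stated user outcome?",
}

def review_pass(answers: dict[str, bool]) -> list[str]:
    """Return rubric items that still need human attention."""
    return [item for item in REVIEW_RUBRIC if not answers.get(item, False)]
```

Encoding the rubric this way is one path to "taste that stays in your head cannot guide an agent": the same dictionary can be pasted into a review prompt or kept as a SKILL.md-style checklist.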
Practice studio
Convert a vague feature idea into a scoped implementation brief.
Deliverable: a brief with files, constraints, acceptance tests, and risks.
Write your own review checklist for AI-generated code or UI.
Deliverable: a reusable SKILL.md-style rubric.
Take one agent output and identify where the prompt, context, or verification failed.
Deliverable: three process improvements.
Recall check
Source shelf
Use this to sharpen instructions, examples, constraints, and tool-use prompts.
platform.openai.com/docs/guides/prompt-engineering

Docs: Claude Code overview. Read this to compare Codex-style workspace operation with Claude Code's agentic coding model.
docs.anthropic.com/en/docs/claude-code/overview

Reading: Google Engineering Practices, Code Review. A strong baseline for turning human review taste into reusable agent review criteria.
google.github.io/eng-practices/review/

Podcast: Lenny's Podcast, Head of Claude Code. A practical discussion of what changes when coding agents become central to engineering work.
www.lennysnewsletter.com/p/head-of-claude-code-what-happens

Podcast: No Priors. Good strategy and builder-level context, including recent conversations around agentic engineering and AI-native products.
podcasts.apple.com/us/podcast/no-priors-artificial-intelligence-technology-startups/id1668002688

Podcast: Latent Space, The AI Engineer Podcast. The best recurring feed for AI engineering, agents, evals, codegen, and infrastructure.
www.latent.space/podcast

Watch next
Use these after the first two videos. They broaden the idea without losing the thread: architecture, workflow, tooling, review, and operating discipline.
Expands the “taste must be operationalized” idea: your standards need to become instructions, rubrics, and review loops.
Gives the practical Codex control loop: inspect, plan, edit, verify, and report.
Shows how the surrounding toolchain changes what the agent can actually accomplish.
Useful for comparing what belongs in Codex, Claude, browser tools, project context, and external automation.