"I have only made this letter longer because I have not had the time to make it shorter." Blaise Pascal (background on the quote)
One thing I increasingly notice in day-to-day engineering work: the cost of creating code has collapsed faster than the cost of reviewing it.
The result is that PRs are getting enormous.
Not necessarily because engineers suddenly became careless, but because generating 1,500 lines of code across 20 files is now often easier than spending an extra hour reducing the problem to a clean 150-line change.

Recently I saw a relatively small UI component copied between projects. The original implementation was roughly 150 lines. The replacement PR ballooned into something closer to 1,200-1,600 lines: it recreated everything from the custom UI library instead of referencing it. It technically worked. It also dramatically increased the surface area of the system.
I do not think this is exclusively a junior engineer problem either. I increasingly see senior engineers submit PRs that span multiple responsibilities at once: architectural decisions, refactors, styling changes, and generated boilerplate. The limiting factor is not engineering speed but reviewer attention.
Reviewers adapt accordingly: they rubber-stamp more.
In my opinion, this is an engineering culture and fundamentals challenge, and it can be addressed as such: ensure PRs are actually tested before submission, ensure the initial cognitive load lies with the submitter rather than the reviewer, and be explicit in ways-of-working about suggested PR scope, size, and code ownership. Even though Copilot might automatically insert itself as a co-author, that does not mean it should share responsibility.
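One way to make a PR-size norm concrete is a small CI guard that measures the diff against the base branch and fails loudly when it exceeds a team-agreed budget. This is a minimal sketch, not a prescription: the 400-line budget, the `main` base branch, and the script itself are assumptions to be tuned per team.

```python
# Hypothetical CI guard: flag PRs whose diff exceeds a review-friendly size.
# The threshold and base branch name are illustrative assumptions, not a rule.
import subprocess
import sys

MAX_CHANGED_LINES = 400  # example budget; agree on this in your ways-of-working


def total_changed(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        if not line.strip():
            continue
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total


def pr_diff_size(base: str = "main") -> int:
    """Changed lines between the base branch and HEAD in the current repo."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return total_changed(out)


if __name__ == "__main__":
    size = pr_diff_size()
    if size > MAX_CHANGED_LINES:
        print(f"PR changes {size} lines (budget: {MAX_CHANGED_LINES}). "
              "Consider splitting it.")
        sys.exit(1)
    print(f"PR changes {size} lines: within budget.")
```

A hard failure is deliberately crude; some teams prefer a warning label on the PR instead, which keeps the norm visible without blocking genuinely unavoidable large changes.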
A counterargument
There is an interesting split emerging in industry conversations around this. The recent "vibe engineering" / "agentic engineering" discussions frame AI-assisted development as something disciplined and production-grade, and I agree, provided you approach it well. Simon Willison's recent writing on Vibe Engineering captures this particular distinction.
At the same time, I have recently spoken with multiple folks at VP of Engineering level and above, across various industries, whose counterargument is blunt: if the output works and the business moves faster, then maybe this is simply the new optimal point on the quality curve. I agree: code is cheap. If we control what goes in and what comes out, we'll be fine, or we can cheaply replace it. Hopefully.
Uncomfortably, large, mediocre, AI-generated codebases may actually be economically optimal for many companies and projects.
As an engineer, that somewhat makes me uneasy. At the same time, as someone who understands (and deals with) delivery pressure, cost, and incentives, it is the way forward, so long as the impact of failure is clear and contained.