Engineering Backpressure: Keeping AI-Generated Code Honest Across 10 SvelteKit Repos

Source: DEV Community
I manage about ten SvelteKit repositories deployed on Cloudflare Workers, and I use Anthropic's Claude Code to do it. AI coding assistants can be fast and capable, especially if you already know how to code, but precisely because they are so fast, they can be, if you're not careful, consistently wrong in ways that are hard to spot. Not wrong as in "the code doesn't work." Wrong as in: it uses .parse() instead of .safeParse(), it interpolates variables into D1 SQL strings instead of using .bind(), it fires off database mutations without checking the result, it nests four levels of async logic inside a load function that should have been split into helpers. The code works. It passes TypeScript.

The problem is that guidance in your CLAUDE.md file (or another coding agent's guide file) such as "always use safeParse()" and "never interpolate SQL" amounts to suggestions, not constraints. The AI reads them; sometimes it follows them, and sometimes it doesn't.
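To make the first two failure modes concrete, here is a minimal sketch. It does not use the real Zod or D1 APIs: `parseUser`/`safeParseUser` are hand-rolled stand-ins for a Zod schema's `.parse()`/`.safeParse()`, and `boundQuery` only mimics the shape of D1's `.prepare(sql).bind(...params)`, so the block stays self-contained. All the function names here are hypothetical.

```typescript
type Result<T> = { success: true; data: T } | { success: false; error: string };

// The .parse() pattern: throws on bad input. Inside a SvelteKit load
// function, an unhandled throw becomes a 500 instead of a handled error.
function parseUser(input: unknown): { id: number } {
  if (
    typeof input === "object" &&
    input !== null &&
    typeof (input as { id?: unknown }).id === "number"
  ) {
    return { id: (input as { id: number }).id };
  }
  throw new Error("invalid user");
}

// The .safeParse() pattern: returns a result object, so the caller is
// forced to branch on success instead of letting the exception escape.
function safeParseUser(input: unknown): Result<{ id: number }> {
  try {
    return { success: true, data: parseUser(input) };
  } catch (e) {
    return { success: false, error: String(e) };
  }
}

// Interpolation vs. binding. The first builds SQL text out of user input
// (injectable); the second keeps the input in a separate params array,
// which is the shape D1's .bind() expects.
function unsafeQuery(name: string): string {
  return `SELECT * FROM users WHERE name = '${name}'`; // don't do this
}
function boundQuery(name: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE name = ?", params: [name] };
}

console.log(safeParseUser({ id: "42" }).success); // false: handled, not thrown
console.log(boundQuery("O'Brien").sql); // input never enters the SQL text
```

Both versions "work" in the happy path, which is exactly why the difference is hard to spot in review: only the malformed input or the hostile string exposes it.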