I shipped a new version of PrisML this week. There’s nothing flashy about it. But that’s exactly the point.
0.3.1 was a trust release: earlier validation in training, better alignment between prisml check and the runtime, and documentation that actually matches what the code does. Things that don’t show up in a demo, but that matter once someone starts depending on the package for real.
The process was this: before touching any code, I went back to the repository and checked three things — what was actually implemented, what the roadmap said was coming next, and what the documentation was publicly promising. That alignment check sounds obvious until you notice how easily a gap grows between what the code does and what the docs promise.
In PrisML’s case, the next logical step was not to add new features. It was to close contract gaps that already existed. So the scope stayed small on purpose: one branch, one PR, one set of tests for the new behavior, and only then the version bump.
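That flow can be sketched with plain git. The branch name, commit messages, and tag below are hypothetical, and it runs in a throwaway repo so the sketch is self-contained; the PR step is shown as a comment because it needs a remote (here assumed to be GitHub’s `gh` CLI).

```shell
set -euo pipefail
# Throwaway repo standing in for the project.
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "baseline"

git switch -q -c fix/contract-gaps        # one branch, scoped to the gaps
git commit -q --allow-empty -m "close contract gaps"  # the change + its tests
# gh pr create --base main --title "Close contract gaps"  # one PR, reviewed
git switch -q -                           # back to the default branch
git merge -q fix/contract-gaps
git tag v0.3.1                            # only then, the version bump
```

The point of the ordering is that the tag moves last: nothing gets a version number until the scoped change has landed through review.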
And yes, I used AI agents in the process: to read the repo, compare the implementation against the roadmap, identify the next step, and then implement that one scoped step. That worked well. What doesn’t work is letting an agent invent product direction mid-flight, or pile workarounds on top of what is really a configuration problem.
I got a concrete reminder of that during the Vercel part: the problem was project configuration, not code. The right move was to go back to the stable commit and manually redeploy from a known-good state — not code around the problem.
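The rollback itself is mundane, which is the point. Here is a minimal sketch in a throwaway repo; the tag and commit messages are hypothetical, and the actual redeploy is left as a comment since it depends on the hosting CLI (e.g. Vercel’s).

```shell
set -euo pipefail
# Throwaway repo standing in for the project history.
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "known-good deploy"
git tag v0.3.0                         # last state that deployed cleanly
git commit -q --allow-empty -m "broken config experiment"

# Roll back to the known-good state instead of patching forward:
git checkout -q v0.3.0
# Then redeploy exactly that state, e.g. with Vercel's CLI:
#   vercel redeploy <deployment-url>
git log -1 --format=%s
```

Checking out the known-good tag and redeploying it keeps the fix in configuration space, where the problem actually was, rather than adding code to mask it.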
AI accelerates. But it only works well when you know what you’re asking it to accelerate.
0.3.1 wasn’t hard to ship. It was disciplined. And that difference is what will make 0.4.0 easier.