I wanted a way to bring type-safe, tightly coupled model definitions to simple, deterministic runtime predictions.
That’s why I built this. You define predictive models directly in TypeScript, compile them to immutable artifacts at build time, and run them 100% in-process.
The logic: Synchronization
Think about how you define a database with an ORM. You define a schema, run a generator, and you get a type-safe client that stays in sync with your data structures. If something changes, the build fails; you fix it, and everything is back in sync.
I wanted the same workflow: a single command that produces an artifact, and a runtime that’s just a function call. No sidecars, no Python in production, no HTTP latency. Just simple, deterministic (and type-safe) predictions.
What PrisML does
You define a model in TypeScript, directly referencing your existing types — not a Python dict, not a YAML config:
import { defineModel } from '@vncsleal/prisml';
// Prisma-generated type; adjust the import path to match your generated client.
import type { User } from '@prisma/client';

export const churnModel = defineModel<User>({
  name: 'UserChurn',
  modelName: 'User',
  output: {
    field: 'churned',
    taskType: 'binary_classification',
    resolver: (user) => user.churned,
  },
  features: {
    accountAgeDays: (u) => Math.floor((Date.now() - u.createdAt.getTime()) / 86400000),
    isPremium: (u) => u.plan === 'premium',
    signupSource: (u) => u.signupSource,
  },
  algorithm: { name: 'forest', version: '1.0.0' },
  qualityGates: [
    { metric: 'f1', threshold: 0.75, comparison: 'gte' },
  ],
});
defineModel<User> binds to your Prisma-generated User type. Every feature extractor is typed. Rename a field and TypeScript tells you in the definition — before you ever run training.
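To make the typing guarantee concrete, here is a small self-contained sketch of the feature-extraction idea, with no PrisML imports. The `User` type here is a local stand-in for the Prisma-generated one, and `extractRow` is my own illustrative helper, not part of the library:

```typescript
// Stand-in for the Prisma-generated User type (illustrative only).
type User = {
  createdAt: Date;
  plan: 'free' | 'premium';
  signupSource: string;
  churned: boolean;
};

// Each extractor maps a typed User to a primitive feature value.
type FeatureExtractors<T> = Record<string, (row: T) => number | boolean | string>;

const features: FeatureExtractors<User> = {
  accountAgeDays: (u) => Math.floor((Date.now() - u.createdAt.getTime()) / 86400000),
  isPremium: (u) => u.plan === 'premium',
  signupSource: (u) => u.signupSource,
};

// Applying the extractors yields one flat feature row per record.
const extractRow = (u: User) =>
  Object.fromEntries(Object.entries(features).map(([name, fn]) => [name, fn(u)]));

const row = extractRow({
  createdAt: new Date('2024-01-01'),
  plan: 'premium',
  signupSource: 'organic',
  churned: false,
});
// Rename `plan` in the User type and `isPremium` fails to compile: that is the
// whole point of binding extractors to the generated type.
```

The same compile-time check is what makes the "rename a field and TypeScript tells you" claim work: the extractors are ordinary typed functions, so drift is a type error, not a runtime surprise.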
You train at build time:
npx prisml train --config ./prisml.config.ts --schema ./prisma/schema.prisma
That produces two files — UserChurn.onnx and UserChurn.metadata.json.
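For context, here is a hypothetical sketch of what the metadata artifact might carry. The field names are my own guesses based on the concepts above (schema hash, feature set, task type, quality gates), not PrisML's actual file format:

```typescript
// Illustrative shape only: the real UserChurn.metadata.json may differ.
type ModelMetadata = {
  name: string;                       // model name, e.g. 'UserChurn'
  schemaHash: string;                 // SHA256 of prisma/schema.prisma at training time
  featureOrder: string[];             // column order the ONNX graph expects
  taskType: 'binary_classification';  // from the model definition
  metrics: Record<string, number>;    // values the qualityGates were checked against
};

const example: ModelMetadata = {
  name: 'UserChurn',
  schemaHash: 'a3f9c2b1',             // truncated for illustration
  featureOrder: ['accountAgeDays', 'isPremium', 'signupSource'],
  taskType: 'binary_classification',
  metrics: { f1: 0.81 },
};
```

Whatever the exact shape, the point is that the ONNX graph alone is not enough: the runtime also needs the feature order and the schema fingerprint to validate inputs before inference.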
At runtime, prediction is an in-process function call:
import { PredictionSession } from '@vncsleal/prisml';

const session = new PredictionSession();
await session.initializeModel(
  './models/UserChurn.metadata.json',
  './models/UserChurn.onnx',
  currentSchemaHash // SHA256 of the current Prisma schema, checked against the artifact
);

const result = await session.predict('UserChurn', user, resolvers);
// result.prediction → 'churned' | 'retained'
// result.probability → 0.84
Python is only for training. The runtime is pure Node.
Schema drift protection
The artifact stores a SHA256 hash of your Prisma schema at training time. When the app loads the model, that hash is checked before any inference runs. If the schema changed without retraining:
SchemaDriftError: Schema hash mismatch for model 'UserChurn'.
  compiled: a3f9c2b1...
  runtime:  d74e81a0...
Run 'prisml train' to recompile.
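The mechanism itself is simple, and you can sketch it with nothing but Node's built-in crypto module. The function names here are mine, for illustration; PrisML's internal API will differ:

```typescript
import { createHash } from 'node:crypto';

// Fingerprint the schema source exactly as it existed at training time.
const schemaHash = (schemaSource: string): string =>
  createHash('sha256').update(schemaSource).digest('hex');

// Refuse to run inference if the live schema no longer matches the artifact.
function assertNoDrift(compiledHash: string, runtimeSchema: string, model: string): void {
  const runtimeHash = schemaHash(runtimeSchema);
  if (runtimeHash !== compiledHash) {
    throw new Error(
      `Schema hash mismatch for model '${model}'.\n` +
        `  compiled: ${compiledHash.slice(0, 8)}...\n` +
        `  runtime:  ${runtimeHash.slice(0, 8)}...\n` +
        `Run 'prisml train' to recompile.`
    );
  }
}
```

Because the check runs at model load, a renamed or retyped field fails loudly at startup instead of quietly feeding garbage features into inference.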
This is my answer to the failure mode where a model silently degrades because a feature was renamed days ago and nobody noticed.
Design choices and tradeoffs
Large-scale ML usually needs complex pipelines. But for most of the product needs I see (churn risk, LTV estimates, recommendation ranking), that overhead ends up stalling implementation entirely.
PrisML was designed for this. It’s for those who want predictive power running inside their Node app without the maintenance tax of a separate ML stack. If you want a reliable, schema-safe model that’s easy to build, it’s worth a try.
Website: getprisml.vercel.app
GitHub: github.com/vncsleal/prisml