Some problems don't announce themselves. They accumulate.
In 2024, AI agents started doing real work — analyzing documents, making recommendations, triggering actions. And they were remarkably good at it. Until someone asked: why did it do that?
The answer, almost always, was a shrug dressed up as a log file.
Semantiv exists because that answer isn't good enough. Not for financial regulators. Not for intelligence analysts. Not for anyone building systems where trust matters.
CEO & Founder
MS Scientific Computing (Florida State), NLP Research (Carnegie Mellon). Twelve years building production systems at scale — recommendation engines at Outbrain, VR infrastructure at Nokia, analytical tools at McKinsey via ThoughtWorks, music intelligence at Spotify, and risk analytics as Engineering Director at Moody's Analytics.
The thread that connects all of it: turning complex, noisy, real-world information into something structured enough to act on and transparent enough to trust.
The dominant approach to AI agents today is imperative: prompt, respond, parse, pass along. When it works, it's magic. When it fails, nobody knows why.
This isn't a model problem. It's an architecture problem.
Semantiv's thesis: when an agent returns a typed program instead of a string — a structured computation graph with explicit dependencies and provenance — everything changes. You can inspect it before it runs. You can compose through typed contracts. You can cache, diff, and replay. You can explain to a regulator exactly what happened and why.
The structure doesn't hallucinate. The types don't lie.
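To make the idea concrete, here is a minimal sketch of what "a typed program instead of a string" could look like. Everything here is illustrative — the names (`Step`, `Plan`, `validate`, `explain`) and the graph shape are assumptions for exposition, not Semantiv's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    name: str
    op: str                  # what this step computes
    inputs: tuple[str, ...]  # explicit dependencies on earlier steps
    source: str              # provenance: where the information came from

@dataclass(frozen=True)
class Plan:
    steps: tuple[Step, ...]

    def validate(self) -> None:
        """Inspect the graph before it runs: every input must refer
        to a step defined earlier (no dangling references)."""
        seen: set[str] = set()
        for step in self.steps:
            missing = set(step.inputs) - seen
            if missing:
                raise ValueError(f"{step.name}: undefined inputs {missing}")
            seen.add(step.name)

    def explain(self, name: str) -> list[str]:
        """Trace a step's dependency chain with provenance --
        a structural answer to 'why did it do that?'."""
        by_name = {s.name: s for s in self.steps}
        trail, stack = [], [name]
        while stack:
            s = by_name[stack.pop()]
            trail.append(f"{s.name}: {s.op} (source: {s.source})")
            stack.extend(s.inputs)
        return trail

# A hypothetical plan an agent might return instead of prose:
plan = Plan(steps=(
    Step("revenue", "extract revenue figure", (), "10-K filing, p. 42"),
    Step("trend", "compare vs prior year", ("revenue",), "derived"),
    Step("flag", "raise risk flag on decline > 10%", ("trend",), "derived"),
))
plan.validate()  # a malformed graph fails loudly, before execution
for line in plan.explain("flag"):
    print(line)
```

Because the output is data rather than text, the properties above fall out for free: validation happens before execution, every conclusion carries a provenance trail, and two plans can be diffed or replayed like any other structured value.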
The word semantic comes from the Greek sēmantikos — "significant, having meaning." In computer science, semantics is what programs mean, as distinct from what they look like or how they run.
Semantiv takes that idea literally. An AI agent's output should carry its meaning with it — not as a side note, but as structure.