Why We Use Behavioral Data Science, Not LLMs

Everyone else is prompting their way to automated pipelines. We're doing something harder, and more honest.

When people hear we're building a behavioral prediction platform, the first question is usually some version of: "So you use AI?" Yes. But not the kind you're probably thinking of.

We use behavioral data science to cluster and predict outcomes for founders and candidates — not large language models. The follow-up is always: why? Why not just spin up an AI agent that answers phone calls and interviews people?

Because we care about doing a good job. And doing a good job means accurate predictions.

The right tool for the job

If you've ever tried an LLM mimicking someone you know, you've probably noticed it's sometimes impressive and sometimes flat-out wrong. When it comes to someone's career or a founder's shot at capital, "good enough" doesn't cut it.

There's also a deeper problem. Candidates are incentivized to perform, to say what interviewers want to hear. That's why so many YC founders pivot the minute they get in: they said whatever it took to get through the door. LLMs are trained to process language. They're extraordinary at it. But that strength is also their weakness. A well-coached answer or a lie looks great in text. As incentives warp over time, we believe VCs and the tech industry will see more and more fraud.

The problem is that most communication isn't verbal. It's behavioral. A slight condescension in tone can flip the meaning of a sentence. So if you're relying on LLMs, how much signal are you missing?

I have a background in anti-fraud. Before LLMs became the default answer to everything, behavioral AI was quietly solving hard problems. Anti-fraud models predicted which users on a platform were fraudsters with over 90% accuracy, even when those users supplied deliberately false information. The question was never purely what did they say? A fraudster will say anything to get their access reinstated. It was how do they behave? Those are two very different data sources.

Why graph theory

In the era of AI, an estimated 50% of candidates are using LLMs to fabricate resumes or cheat on interviews. So how do you know who you're actually getting?

You wouldn't use an LLM to optimize a car engine — you'd use domain-specific engineering. Behavioral prediction is the same category of problem. We're building a recommendation engine for people: surfacing which individuals carry the behavioral signals closest to the profile each client has historically rewarded.

A person is not a data point. They're a dynamic system of behaviors over time. Graph theory lets us encode not just individual attributes but relational patterns — how clusters of behavior form, how those clusters map to outcomes, who someone actually is versus who they say they are on a resume.
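To make the relational framing concrete, here is a deliberately tiny sketch. Nothing in it is our actual model: the behaviors, the Jaccard similarity measure, and the edge threshold are all illustrative assumptions. The idea it demonstrates is the graph-theoretic one: link people whose observed behaviors overlap, then read clusters off as connected components rather than scoring each person in isolation.

```python
from itertools import combinations

# Hypothetical behavioral profiles: each person is a set of observed
# behaviors over time, not self-reported claims.
profiles = {
    "a": {"ships_weekly", "replies_fast", "revises_after_feedback"},
    "b": {"ships_weekly", "replies_fast"},
    "c": set(),
}

def similarity(p, q):
    """Jaccard overlap between two behavior sets (1.0 if both empty)."""
    if not p and not q:
        return 1.0
    return len(p & q) / len(p | q)

# Build an undirected graph: an edge joins two people whose behavioral
# overlap clears an (arbitrary, illustrative) threshold.
THRESHOLD = 0.5
adjacency = {name: set() for name in profiles}
for u, v in combinations(profiles, 2):
    if similarity(profiles[u], profiles[v]) >= THRESHOLD:
        adjacency[u].add(v)
        adjacency[v].add(u)

def components(adj):
    """Connected components via depth-first search; each is a behavioral cluster."""
    seen, clusters = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(adj[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

clusters = components(adjacency)  # "a" and "b" cluster together; "c" stands alone
```

A real system would of course use far richer relational structure than pairwise overlap, but the shape of the computation is the point: the unit of analysis is the pattern of connections, not any single attribute.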

Domain knowledge as a competitive moat

LLMs are generalists. Human judgment — in talent, in investment — is domain-specific. A good engineer might not make a great salesperson. We encode that knowledge deliberately rather than hoping the model figures it out. Every variable earns its place, every prediction is traceable, and the model sharpens with targeted feedback.

And because we work from behavioral signals, not resumes or self-reported demographics, we don't use race, gender, or age. Excellence surfaces on its own terms.

Crunchbase tells you what founders have done. We tell you who they are.