CARLOS O. HUNTER

Founder @ Ontologic Labs

Building Ontologic Intelligence — decision-grade
Artificial Superintelligence

About Me

I'm Carlos O. Hunter, a physicist by training, research engineer, and founder @ Ontologic Labs. Side ventures appearing soon. My work is driven by a fascination with how observers, time, and histories constrain what is possible.

I think about Artificial Superintelligence, Time & Causality, and Observers & Memory. I care about first principles, and I'll start putting work into the world this year. Subscribe to the Substack if you want to follow along.

Ontologic Intelligence is the core of what I'm building: a fundamental shift from pattern matching to causal modeling, providing the reasoning layer for decision-grade AI in high-stakes environments.

I'm looking for collaborators: investors, researchers, and builders who value clarity and ownership.

If you're interested, email me.

2026: THE YEAR WE DID IT ANYWAY*

*Me and the people trapped in my computer

Active Research & Development

Causal Intelligence

Causal models that move beyond pattern matching—memory architectures and cycle-aware reasoning for inference under structural ambiguity. Building the reasoning layer for decision-grade AI where decisions are auditable, uncertainty is first-class, and "what if?" questions have real answers.

Engineering Value

Engineering that uses LLMs as a productivity multiplier: turning capability into leverage, and leverage into shipped commercial solutions that scale. From voice interfaces to scientific writing tools (coming soon).

Observers & Memory

Studying how observers, memory, and temporal ordering constrain fundamental theories across quantum foundations, neuroscience, and cognitive science. Investigating how the consistency of records defines what can be computed, predicted, and ultimately known.

Ontologic Labs

For Capital

I'm building the reasoning infrastructure for high-stakes decision making. The market for auditable, uncertainty-aware AI in finance, defense, healthcare, and autonomous systems is massive and largely unaddressed by current foundation models.

For Researchers and Builders

Epistemic deferral and interpretation-conditioned probability are first-class objects here. I invite collaboration from researchers and engineers working on the causal backbone of AGI and the foundations of computation and cognition, as well as from domain experts in fields where causal reasoning can drive breakthroughs. Hiring soon — register your interest (general applications welcome).

For The People

Building reasoning systems that are transparent and auditable, and that ensure the robots don't take over. When AI makes high-stakes decisions, people deserve transparency. Crowdfunding opportunities will be available. Subscribe to stay in the loop.

"The laws of thought, in all its processes of conception and of reasoning, in all those operations of which language is the expression or the instrument, are of the same kind as are the laws of the acknowledged processes of Mathematics."

— George Boole