Research
Science for AI
Performance- and hardware-efficient AI, building tools that make models faster, leaner, and more reliable on real-world compute.
Focuses on the interface between algorithms and hardware: profiling, evaluation, and optimization workflows that surface bottlenecks, guide model and runtime choices, and make advanced AI systems practical on constrained devices.
Research notes
I am building physics-style evaluation harnesses for AI reliability: invariance and conservation probes, causal diagnostics, and stress traces that surface failure modes before deployment. The work is currently pre-publication while I harden the notebooks and benchmarks.
The aim is to bring lab-grade measurement rigor into AI tooling.
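To make the idea concrete, here is a minimal sketch of what an invariance probe can look like: it checks whether a model's predictions respect a symmetry the task implies, and reports the fraction of inputs where they do not. The function name, tolerance, and toy mean-pooling model are illustrative assumptions, not the actual harness.

```python
import numpy as np

def invariance_probe(model, inputs, transform, tol=1e-5):
    """Fraction of inputs where model(transform(x)) deviates from model(x).

    model: callable mapping a batch of inputs to predictions.
    transform: a symmetry the model is expected to respect.
    tol: maximum per-input prediction shift treated as invariant.
    """
    base = model(inputs)
    shifted = model(transform(inputs))
    # An input "violates" invariance if any output coordinate moves by more than tol.
    violations = np.abs(shifted - base).max(axis=-1) > tol
    return violations.mean()

# Toy example: a mean-pooling "model" should be invariant to feature permutation.
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 16))
model = lambda batch: batch.mean(axis=-1, keepdims=True)
perm = rng.permutation(16)
print(invariance_probe(model, x, lambda b: b[:, perm]))  # expected: 0.0
```

The violation rate, rather than a single pass/fail bit, is what makes this usable as a stress trace: sweeping the transform strength or the tolerance yields a curve that localizes where invariance starts to break.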
- Built internal prototypes for invariance- and conservation-based evaluations (pre-publication).
- Defined a release plan for sharable stress-harness notebooks and datasets.
- Aligned metrics with physical priors to interpret model behavior and uncertainty (a minimal sketch follows this list).
- Publishing the first science-for-AI research notes and benchmark slices in 2025.
- Expanding causal probes and interpretability tooling with collaborators.
- Hardening the evaluation stack against new frontier-model failure modes.
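For the conservation side, here is a minimal sketch of one such physically-motivated metric, again with illustrative names rather than the real metric stack: it tracks the relative drift of a quantity the modeled system should conserve along a predicted trajectory.

```python
import numpy as np

def conservation_residual(states, quantity):
    """Maximum relative drift of a conserved quantity along a trajectory.

    states: array of shape (T, ...) of model-predicted states over time.
    quantity: callable mapping one state to a scalar (e.g. total mass).
    """
    q = np.array([quantity(s) for s in states])
    # Relative deviation from the initial value; epsilon guards division by zero.
    return np.max(np.abs(q - q[0]) / (np.abs(q[0]) + 1e-12))

# Toy example: an explicit diffusion step on a periodic grid conserves total mass.
rng = np.random.default_rng(1)
traj = [rng.random(64)]
for _ in range(100):
    s = traj[-1]
    traj.append(s + 0.1 * (np.roll(s, 1) - 2 * s + np.roll(s, -1)))
print(conservation_residual(np.stack(traj), np.sum))  # ~0 up to float error
```

A learned surrogate for the same dynamics would typically show a residual that grows with rollout length; that growth curve is the kind of interpretable, physics-grounded signal these metrics are meant to expose.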