It started with a curiosity about how machines learn, and turned into a deep obsession with building AI systems that actually work in production. During my four years at Dhirubhai Ambani University, I went from studying machine learning fundamentals to leading the AI Club and teaching the ML course as an Undergraduate Teaching Assistant. Along the way, I realized that the most interesting problems were not in the classroom but at the intersection of research and real-world deployment.
That curiosity took me to Substance AI, where I built a LangGraph multi-agent underwriting system from the ground up, cutting processing time from 5 days to under 5 minutes. Then at Binocs Labs, I went deeper into the internals of LLM systems, designing production RAG pipelines and agentic workflows. When I hit a wall debugging LLM traces, I did not just fix it for myself: I contributed the fix upstream to Arize Phoenix (9.5k+ stars) and OpenInference (950+ stars), shipping support for the OpenAI Responses API, Anthropic streaming instrumentation, and end-to-end OpenTelemetry tracing.
On the research side, I co-authored a paper under review at ACL 2026 on evaluating whether LLMs can truly generate epistemically meaningful hypotheses, not just fluent ones.
I am now looking for full-time AI Engineer roles where I can keep building systems that are not just intelligent but observable, reliable, and impactful at scale. If that resonates with you, let us connect.
