AI Systems Researcher • Vision–Language Models • Explainable AI • Domain-Specific NLP
Website • GitHub • Hugging Face • ORCID • LinkedIn
I work on multi-modal AI systems, with an emphasis on vision–language models, explainability, and deployable AI architectures.
In particular, I explore how small language models and structured pipelines can be combined to build interpretable, domain-specific AI systems for real-world use.
Current directions include:
- Multi-modal reasoning systems (vision + language)
- Explainable AI for high-stakes domains (healthcare, governance)
- Retrieval-augmented and hybrid AI architectures
- Offline and resource-constrained LLM systems
Selected Projects

Radiology Report Generation (ViT + BioGPT)
- Designed a ViT + BioGPT pipeline for radiology report generation
- Built structured generation of clinical findings and impressions
- Integrated attention- and gradient-based explainability for traceable outputs
- Developed a modular inference system and deployed it on Hugging Face
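The modular design described above can be sketched as a set of small, swappable stages — an image encoder, a report generator, and an explainer — wired into a single pipeline. This is an illustrative sketch, not the deployed code: the names (`RadiologyPipeline`, `stub_encode`, `stub_generate`, `stub_explain`) are hypothetical, and the stubs stand in for the actual ViT and BioGPT components.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Report:
    findings: str
    impression: str
    attribution: List[float]  # per-feature relevance scores for traceability

class RadiologyPipeline:
    """Modular inference pipeline: each stage is an injected callable,
    so the encoder, decoder, or explainer can be swapped independently."""

    def __init__(self,
                 encode: Callable[[bytes], List[float]],
                 generate: Callable[[List[float]], Tuple[str, str]],
                 explain: Callable[[List[float]], List[float]]):
        self.encode = encode      # image bytes -> feature vector
        self.generate = generate  # features -> (findings, impression)
        self.explain = explain    # features -> attribution scores

    def run(self, image: bytes) -> Report:
        features = self.encode(image)
        findings, impression = self.generate(features)
        return Report(findings, impression, self.explain(features))

# Stub components so the sketch runs end to end without model weights.
def stub_encode(image: bytes) -> List[float]:
    return [len(image) % 7, 1.0, 0.5]

def stub_generate(features: List[float]) -> Tuple[str, str]:
    return ("No acute abnormality.", "Normal study.")

def stub_explain(features: List[float]) -> List[float]:
    total = sum(features) or 1.0
    return [f / total for f in features]  # normalised relevance per feature

pipeline = RadiologyPipeline(stub_encode, stub_generate, stub_explain)
report = pipeline.run(b"fake-image-bytes")
print(report.impression)  # Normal study.
```

In a real deployment the stubs would be replaced by a ViT feature extractor, a BioGPT decoder producing the findings/impression pair, and an attention- or gradient-based attribution method over the same features.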
Constitutional Document Retrieval
- Developed a retrieval-based NLP system for querying constitutional documents
- Implemented semantic search pipelines for structured access to legal text
- Achieved 24K+ downloads, indicating real-world adoption
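The retrieval step can be sketched with a TF-IDF / cosine-similarity ranker over short legal passages. This is a simplified stand-in, not the shipped system — a deployed pipeline would use dense sentence embeddings — but the shape is the same: index the documents once, then rank them against a query vector. The function names and the sample articles are illustrative.

```python
import math
from collections import Counter

def vectorise(tokens, df, n_docs):
    """Build a sparse TF-IDF vector (term -> weight) for one document."""
    tf = Counter(tokens)
    return {t: tf[t] * (math.log((1 + n_docs) / (1 + df[t])) + 1) for t in tf}

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Return the index of the document most similar to the query."""
    tokenised = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenised for t in set(toks))
    n = len(docs)
    index = [vectorise(toks, df, n) for toks in tokenised]
    q = vectorise(query.lower().split(), df, n)
    return max(range(n), key=lambda i: cosine(q, index[i]))

articles = [
    "Article 14: equality before the law and equal protection of the laws",
    "Article 19: freedom of speech and expression",
    "Article 21: protection of life and personal liberty",
]
best = search("right to freedom of speech", articles)
print(articles[best])  # Article 19: freedom of speech and expression
```

Swapping the TF-IDF vectors for embedding-model outputs leaves the index/rank structure unchanged, which is what makes this kind of pipeline easy to upgrade.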
Publications & Presentations

- Explainable SLM-Guided Vision–Language Model for Skin Lesion Recognition. ICICC 2026, Springer LNNS
- NariRaksha: Gender-Responsive AI for Women's Safety. India AI Impact Summit (MeitY + UN Women)
Core Areas
Multi-Modal AI • LLM Systems • Explainable AI • Retrieval-Augmented Generation
Frameworks & Tools
PyTorch • TensorFlow • Hugging Face • Transformers • LangChain • LlamaIndex
Systems & Deployment
Model inference pipelines • Hugging Face Spaces • Modular AI system design
Data & Processing
NumPy • Pandas • OpenCV
Contact

- Email: vikhrams@saveetha.ac.in
- Open to research collaborations and applied AI system development
