Computer Science graduate building production-grade ML systems, RAG pipelines, and agentic AI applications. I turn complex data and AI capabilities into reliable, measurable solutions.
I'm a Computer Science graduate from VTU specializing in Machine Learning and Generative AI, building end-to-end AI systems — from raw data ingestion to deployed, real-time applications that solve real business problems.
My experience spans predictive machine learning models, deep learning pipelines with PyTorch, RAG-based knowledge systems using LangChain and ChromaDB, and agentic AI systems that orchestrate multi-step workflows through natural language and tool execution.
I focus on measurable impact: every project I build ships with concrete performance numbers, such as lifting model accuracy from 73% to 98%, cutting data preparation time by 60%, and reducing response latency by up to 40%. The goal is always the same: reliable, production-ready AI systems.
Currently deepening my skills in Agentic AI architectures, Generative AI systems, and scalable end-to-end AI applications, while actively seeking full-time Machine Learning Engineer or Generative AI Engineer roles.
Building a production-ready Agentic AI HR Assistant that automates end-to-end HR operations through natural language. Users converse with an AI agent that autonomously executes tasks such as employee onboarding, leave management, meeting scheduling, and ticket resolution, reducing manual HR workload and improving operational efficiency.

The assistant uses an Agentic AI architecture in which the LLM intelligently selects and invokes MCP tools to perform multi-step workflows. These tools connect to modular service managers responsible for core HR functionality: employee records, leave processing, meeting coordination, and ticket tracking. This design lets the agent orchestrate complex business workflows dynamically instead of relying on static rule-based automation.

For scalability and maintainability, the system uses schema-driven APIs and a service-layer architecture, enabling structured data validation, clean separation of business logic, and reusable components. Each workflow executes through well-defined schemas, ensuring consistency, reliability, and production-grade performance.
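The tool-selection pattern described above can be sketched in plain Python. This is a minimal illustration, not the project's actual MCP interface: the tool names (`apply_leave`, `schedule_meeting`) and registry shape are hypothetical, and a plain dispatch call stands in for the LLM's tool-selection step.

```python
from typing import Callable

# Illustrative tool registry. In the real system the LLM emits a tool name
# plus arguments; here invoke() stands in for that selection step.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a callable as an invokable tool."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("apply_leave")
def apply_leave(employee_id: str, days: int) -> str:
    # Would delegate to the leave-processing service manager in production.
    return f"Leave request for {employee_id} ({days} days) submitted"

@tool("schedule_meeting")
def schedule_meeting(topic: str, when: str) -> str:
    # Would delegate to the meeting-coordination service manager.
    return f"Meeting '{topic}' scheduled for {when}"

def invoke(tool_name: str, **kwargs) -> str:
    """Dispatch the tool call the agent selected, rejecting unknown names."""
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)
```

Each registered tool is a thin adapter over a service manager, which is what keeps business logic out of the agent loop.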
Multi-pipeline conversational assistant using semantic routing to direct queries to a RAG FAQ engine, a Text-to-SQL product search layer, or a fallback clarification agent. Supports multi-turn memory through session summarization.
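Semantic routing boils down to comparing a query embedding against exemplar embeddings for each route and falling back when nothing matches well. A toy sketch, with bag-of-words vectors standing in for a real sentence-embedding model; the route names, exemplars, and threshold are all illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real router would use a dense model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each route is described by a few exemplar utterances.
ROUTES = {
    "faq_rag": ["what is your return policy", "how do i reset my password"],
    "product_sql": ["show me red shoes under 50", "list laptops in stock"],
}

def route(query: str, threshold: float = 0.2) -> str:
    """Pick the route with the most similar exemplar; below the threshold,
    fall back to the clarification agent."""
    q = embed(query)
    best_route, best_score = "clarify", 0.0
    for name, exemplars in ROUTES.items():
        score = max(cosine(q, embed(e)) for e in exemplars)
        if score > best_score:
            best_route, best_score = name, score
    return best_route if best_score >= threshold else "clarify"
```

Swapping the toy `embed` for a real embedding model leaves the routing logic unchanged, which is the point of the pattern.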
Domain-restricted RAG system for verified real estate journalism. Automates article ingestion, semantic chunking, and dense embedding generation. Responses grounded strictly in retrieved content — no hallucinations, sub-second retrieval.
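The ingestion step above depends on splitting articles into retrievable chunks. A minimal sketch of overlapping word-window chunking, a simplified stand-in for the project's semantic chunking (the sizes are illustrative defaults):

```python
def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into word windows of `size` with `overlap` words of
    context carried across each boundary, so retrieval doesn't lose
    sentences that straddle a chunk edge."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks
```

Each chunk is then embedded and indexed; at query time only the retrieved chunks are passed to the model, which is what enforces the grounding constraint.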
End-to-end computer vision pipeline for automated vehicle damage classification using ResNet50 with transfer learning. Layer freezing, dropout tuning, and class-wise evaluation for robust real-world inference.
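The class-wise evaluation mentioned above is the part that keeps a strong majority class from masking weak minority-class performance. A minimal sketch, with hypothetical damage-class labels:

```python
from collections import defaultdict

def per_class_accuracy(y_true: list[str], y_pred: list[str]) -> dict[str, float]:
    """Accuracy broken out per class, computed from paired label lists."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return {cls: correct[cls] / total[cls] for cls in total}
```

Reporting this table per epoch makes it obvious when layer freezing or dropout changes help one damage class at the expense of another.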
Multi-source credit pipeline predicting loan default probability. SMOTE-Tomek for severe class imbalance, Optuna for automated hyperparameter search. Deployed as a real-time Streamlit scoring tool for instant risk assessment.
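The core idea behind SMOTE is synthesizing minority-class points by interpolating between a minority sample and a near neighbor. A simplified 2-D sketch of that interpolation step; the production pipeline used imbalanced-learn's SMOTETomek, which additionally removes Tomek links after oversampling:

```python
import random

def smote_like(minority: list[tuple[float, float]], n_new: int,
               seed: int = 0) -> list[tuple[float, float]]:
    """Generate n_new synthetic minority points, each a random linear
    interpolation between a minority sample and its nearest minority
    neighbor (simplified single-neighbor SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest neighbor among the other minority points
        b = min((p for p in minority if p is not a),
                key=lambda p: (p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append((a[0] + lam * (b[0] - a[0]),
                          a[1] + lam * (b[1] - a[1])))
    return synthetic
```

Because every synthetic point lies on a segment between two real minority points, the oversampled class stays inside its original region of feature space.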
End-to-end ML system estimating insurance premiums using demographic, lifestyle, and medical data. Cohort-based regression models with XGBoost achieved 98% accuracy. Deployed via Streamlit for real-time premium estimation.
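Cohort-based regression means training a separate model per customer segment and routing each record to its cohort's model at inference time. A toy sketch of that routing, with hypothetical cohort definitions and a cohort mean standing in for the per-cohort XGBoost regressor (all numbers illustrative):

```python
from statistics import mean

# Toy training rows: (age, smoker, premium). All values illustrative.
ROWS = [
    (22, False, 1200.0), (27, False, 1300.0),
    (24, True, 2100.0), (55, False, 3400.0), (60, True, 5200.0),
]

def cohort_of(age: int, smoker: bool) -> str:
    """Hypothetical cohort key: age band crossed with smoker status."""
    band = "young" if age < 30 else "older"
    return f"{band}_{'smoker' if smoker else 'nonsmoker'}"

# "Train" one model per cohort (a mean stands in for XGBoost here).
_buckets: dict[str, list[float]] = {}
for age, smoker, premium in ROWS:
    _buckets.setdefault(cohort_of(age, smoker), []).append(premium)
MODELS = {cohort: mean(values) for cohort, values in _buckets.items()}

def predict_premium(age: int, smoker: bool) -> float:
    """Route the incoming record to its cohort's model at inference time."""
    return MODELS[cohort_of(age, smoker)]
```

Splitting by cohort lets each model specialize on a population with more homogeneous premium drivers than the full dataset.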
A/B testing framework on 50K+ customer records to evaluate a new credit card variant. Z-test and proportion tests with SciPy to measure statistically significant uplift in spending behavior and support data-backed launch decisions.
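The proportion test above can be written down directly. A stdlib-only sketch of a two-sided, pooled-variance two-proportion z-test (the project used SciPy; the numbers in the usage below are made up for illustration):

```python
import math

def two_proportion_ztest(success_a: int, n_a: int,
                         success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in proportions with a pooled
    standard error, e.g. conversion/uplift rate in control vs variant."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

With, say, 120/1000 uplifted customers in control versus 150/1000 on the new variant, the test returns a p-value just under 0.05, i.e. a borderline-significant difference.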
Full-stack expense manager with a modular architecture separating a Streamlit UI from a FastAPI backend. CRUD operations, business logic enforcement, Pandas-powered analytics, and comprehensive test coverage for both layers.
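The UI/backend split above rests on keeping business rules in a service layer and leaving the route layer as a thin translator. A framework-agnostic sketch of that separation (function names, fields, and the in-memory store are illustrative, not the app's actual API):

```python
# In-memory store standing in for the app's persistence layer.
EXPENSES: list[dict] = []

def add_expense(amount: float, category: str) -> dict:
    """Service layer: enforce business rules, then persist."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if not category.strip():
        raise ValueError("category is required")
    record = {"id": len(EXPENSES) + 1, "amount": amount, "category": category}
    EXPENSES.append(record)
    return record

def post_expense(payload: dict) -> tuple[int, dict]:
    """Route layer (a FastAPI endpoint in the real app): parse the payload,
    delegate, and translate errors to HTTP-style status codes. No business
    logic lives here, which is what makes both layers testable in isolation."""
    try:
        return 201, add_expense(payload["amount"], payload["category"])
    except (KeyError, ValueError) as exc:
        return 422, {"error": str(exc)}
```

Because validation lives in `add_expense`, the same rules apply no matter which frontend (Streamlit UI, tests, or a future client) calls the backend.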
I'm actively looking for full-time ML Engineer or GenAI Engineer roles. If you're building something that needs reliable AI systems, let's talk.