Research

My research lies at the intersection of Software Engineering and Artificial Intelligence (AI), where I address the challenges of establishing standards, measurements, and safeguards for AI-enabled software systems. I focus on making machine learning (ML) based systems more reliable, transparent, and trustworthy by developing methodologies that improve their interpretability, robustness, and ethical deployment.

Specifically, I am passionate about advancing Explainable AI (XAI), fine-tuning and deploying large-scale ML models with a focus on reliability and transparency, and developing new techniques for testing and debugging ML/AI systems. My work also spans privacy testing, security testing, and fairness auditing. Ultimately, my research seeks to overcome the engineering challenges of building AI technologies that are not only technically advanced but also trustworthy, responsible, and aligned with societal and ethical standards.