Research

My research sits at the intersection of Software Engineering and Artificial Intelligence (AI), where I address the challenges of establishing standards, measurements, and safeguards for AI-enabled software systems. I develop methodologies that make machine learning (ML)-based systems more transparent, interpretable, and robust, with the goal of ensuring the reliability and trustworthiness of AI technologies.

I am particularly passionate about advancing Explainable AI (XAI) methodologies and about testing and debugging ML-based AI systems, including critical areas such as privacy testing, security testing, and fairness testing. Ultimately, my research seeks to overcome the engineering challenges of ensuring that AI technologies are not only advanced but also trustworthy, responsible, and aligned with ethical standards.