Security and Governance for AI Models

Overview
AI and ML models have become integral to organizational operations, but they also introduce significant risks. Our Model Risk Management system serves as a frontline defense against adversarial AI threats, focusing on robustness, fairness, and explainability evaluations.
Solution
We developed a comprehensive solution with a battery of attacks that tests ML models for robustness, explainability, and fairness. Through FastAPI endpoints, data scientists can run adversarial attack simulations, fairness evaluations, and model explainability analyses, enabling seamless integration with existing systems. Models and datasets are stored securely in an AWS S3 bucket in .joblib format for streamlined storage and retrieval. We used PyCharm to develop the FastAPI endpoints and Docker to containerize the code, improving portability and ease of management.
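To illustrate the kind of check the robustness battery performs, the sketch below estimates a model's accuracy under bounded random input perturbations. It is a minimal, hypothetical stand-in for one attack in such a battery, not the system's actual code: the `predict` function, the `robustness_under_noise` helper, and the toy data are all assumptions introduced for illustration.

```python
import numpy as np

def robustness_under_noise(predict, X, y, eps=0.1, n_trials=20, seed=0):
    """Estimate accuracy of `predict` under bounded random perturbations.

    Each input is perturbed with uniform noise of magnitude `eps`, and the
    worst accuracy observed across trials is reported alongside the clean
    accuracy. (A hypothetical example of one attack in a robustness battery.)
    """
    rng = np.random.default_rng(seed)
    clean_acc = float(np.mean(predict(X) == y))
    worst = clean_acc
    for _ in range(n_trials):
        noise = rng.uniform(-eps, eps, size=X.shape)
        worst = min(worst, float(np.mean(predict(X + noise) == y)))
    return clean_acc, worst

# Hypothetical model: classify points by whether their feature sum exceeds 0.
predict = lambda X: (X.sum(axis=1) > 0).astype(int)
X = np.array([[0.3, 0.2], [-0.4, -0.1], [0.05, 0.02], [-0.03, -0.01]])
y = predict(X)

clean, worst = robustness_under_noise(predict, X, y, eps=0.2)
```

A large gap between `clean` and `worst` flags inputs that sit close to the decision boundary, which is exactly the signal such an endpoint would surface to a data scientist.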

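As a sketch of the fairness evaluations the endpoints expose, the example below computes a demographic parity gap: the difference in positive-prediction rates between groups. The function name and the toy predictions are illustrative assumptions, not the service's actual API.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rate across groups.

    A value near 0 means the model predicts the positive class at
    similar rates for every group; larger values indicate disparity.
    (Hypothetical helper for illustration.)
    """
    rates = [float(np.mean(y_pred[group == g])) for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy predictions for two groups of four individuals each.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(y_pred, group)
# Group 0 positive rate is 0.75, group 1 is 0.25, so the gap is 0.5.
```

A metric like this is cheap to compute per request, which makes it a natural fit for an evaluation endpoint that scores a stored model against a stored dataset.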