Introduction

The database component of AVID houses full-fidelity information (model metadata, harm metrics, measurements, benchmarks, and mitigation techniques if any) on evaluation examples of the harm (sub)categories defined by the taxonomy. The aim is transparent and reproducible evaluations. The database:

  • Is expandable to account for novel and hitherto unknown vulnerabilities

  • Enables AI developers to freely share evaluation use cases for the benefit of the community

  • Is composed of evaluations submitted in a schematized manner (sketched below), then vetted and curated.
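To make the schematized submission concrete, here is a minimal sketch of what a structured evaluation report could look like. The field names and the taxonomy identifier are illustrative assumptions for this example, not the actual AVID report schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these fields are assumptions about what a
# schematized evaluation report could contain, not the actual AVID schema.
@dataclass
class EvaluationReport:
    model: str                    # model under evaluation (metadata)
    dataset: str                  # dataset used in the evaluation
    harm_category: str            # (sub)category from the taxonomy
    metric: str                   # harm metric being measured
    measurement: float            # observed value of the metric
    benchmark: float              # reference value for comparison
    mitigations: list[str] = field(default_factory=list)  # mitigation techniques, if any

# Example submission, prior to vetting and curation
report = EvaluationReport(
    model="example-org/example-model",
    dataset="example-benchmark-dataset",
    harm_category="XX-0000: example subcategory",  # hypothetical taxonomy ID
    metric="attack success rate",
    measurement=0.31,
    benchmark=0.05,
    mitigations=["adversarial training"],
)
```

Capturing measurements alongside the benchmark and mitigation fields is what makes an evaluation reproducible and comparable across submissions.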

We are building the database to be both an extension of, and a bridge between, the classic security-related vulnerabilities of the National Vulnerability Database (NVD), case studies of adversarial attacks housed in MITRE ATLAS, and incidents recorded in the AI Incident Database (AIID), providing a comprehensive view of the AI risk landscape. By bringing these disparate sources together, and adding the unintentional failure states present throughout the AI ecosystem, we provide the information people need to build better, safer systems.

Developers can see the risks in the particular models and datasets they want to build on, which helps them make better-informed choices with less risk of harm. Communities will have a way to contest systems, models, and datasets that can harm them, giving them a voice in a conversation from which they are too often excluded. Regulators, policy makers, and adjudicating bodies will benefit from a clear picture of the landscape and of which entities represent the greatest sources of harm.
