Glossary of AI Terminology

What Is False Positive Parity?

False Positive Parity

Commonly used as a model fairness metric, false positive parity measures whether a model's false positive rate (the rate at which it incorrectly predicts the positive class for actual negatives) is the same for a sensitive group as for the base group.

Example:

A financial services company might want to quantify whether its fraud model falsely predicts fraud at a higher rate for customers in lower-income zip codes (the sensitive group) than for customers in median-income zip codes (the base group).
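A check like the one above can be sketched in a few lines of Python. This is a minimal illustration, not a production fairness library: the function names, the binary label convention (1 = fraud, 0 = not fraud), and the group labels are all assumptions for this example. The parity value is expressed as the ratio of the two groups' false positive rates, where a value near 1.0 indicates parity.

```python
def false_positive_rate(y_true, y_pred):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0


def false_positive_parity(y_true, y_pred, groups, sensitive, base):
    """Ratio of the sensitive group's FPR to the base group's FPR.

    A ratio near 1.0 means the model produces false positives at
    similar rates for both groups; a ratio well above 1.0 means the
    sensitive group is falsely flagged more often.
    """
    sens_rows = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == sensitive]
    base_rows = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == base]
    fpr_sens = false_positive_rate([t for t, _ in sens_rows], [p for _, p in sens_rows])
    fpr_base = false_positive_rate([t for t, _ in base_rows], [p for _, p in base_rows])
    return fpr_sens / fpr_base if fpr_base else float("inf")


# Hypothetical data: no customer actually committed fraud (all labels 0),
# but the model flags low-income zip codes twice as often.
y_true = [0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 0, 0]
groups = ["median_income"] * 4 + ["low_income"] * 4

ratio = false_positive_parity(y_true, y_pred, groups, "low_income", "median_income")
print(ratio)  # 2.0: the sensitive group's FPR is double the base group's
```

In practice, teams often compare the ratio (or the absolute difference in rates) against a tolerance threshold rather than requiring exact equality.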
