IBM AI Fairness 360
This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
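For orientation, here is a minimal sketch of how the toolkit's dataset metrics and mitigation algorithms are typically combined. The toy DataFrame, column names, and group definitions are illustrative assumptions, not part of the toolkit's description.

```python
# Minimal AI Fairness 360 sketch: measure bias in a labeled dataset, then
# apply a pre-processing mitigation. Data and group definitions are toy values.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = privileged, 0 = unprivileged
    "score": [0.9, 0.4, 0.7, 0.3, 0.8, 0.2, 0.6, 0.5],
    "label": [1, 0, 1, 0, 1, 0, 1, 0],   # 1 = favorable outcome
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Report bias in the data before any model is trained.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# Mitigate by reweighing examples so favorable outcomes are balanced across groups.
reweighed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
```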
This extensible open source toolkit can help you comprehend how machine learning models predict labels by various means throughout the AI application lifecycle.
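The explainability toolkit described here bundles many explanation algorithms. As a rough stand-in for the post-hoc, model-agnostic style of explanation it supports, the sketch below uses scikit-learn permutation importance; the synthetic data and model are illustrative assumptions, and this is not the toolkit's own API.

```python
# Stand-in sketch: model-agnostic, post-hoc feature importance of the kind
# such explainability toolkits provide. Uses scikit-learn, not the toolkit's API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature to predict labels.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```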
The AI Privacy 360 Toolbox includes several tools to support the assessment of privacy risks of AI-based solutions, and to help them adhere to any relevant privacy requirements.
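One privacy-preserving technique in this space is training under differential privacy, for example with IBM's diffprivlib library. The sketch below is a minimal example under stated assumptions; the dataset, epsilon value, and norm bound are illustrative, not prescribed by the toolbox.

```python
# Sketch: train a classifier with differential privacy using diffprivlib.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from diffprivlib.models import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)          # bound each feature to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# epsilon is the privacy budget: smaller values give stronger privacy at the
# cost of accuracy. data_norm bounds each sample's L2 norm (sqrt(30) here,
# since there are 30 features scaled to [0, 1]).
clf = LogisticRegression(epsilon=1.0, data_norm=30 ** 0.5)
clf.fit(X_train, y_train)
print("Accuracy under differential privacy:", clf.score(X_test, y_test))
```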
This site provides an overview of the FactSheet project, a research effort to foster trust in AI by increasing transparency and enabling governance.
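To make the idea concrete, the snippet below sketches the kind of information a factsheet gathers about a model for consumers and governance processes. The fields and values are illustrative assumptions, not the project's actual schema or API.

```python
# Illustrative only: example fields a model factsheet might record.
import json

factsheet = {
    "model_name": "loan_default_classifier",  # hypothetical model
    "intended_use": "Rank applications for manual review, not automated decisions.",
    "training_data": "Internal loan applications, 2018-2022 (hypothetical).",
    "evaluation": {"accuracy": 0.87, "auc": 0.91},       # example numbers
    "fairness": {"disparate_impact_sex": 0.93},          # example number
    "known_limitations": "Not validated for applicants outside the EU.",
    "owner": "risk-ml-team@example.com",
}

print(json.dumps(factsheet, indent=2))
```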
Learn how to integrate Responsible AI practices into your ML workflow using TensorFlow. TensorFlow is committed to helping make progress in the responsible development of AI.
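A core idea behind TensorFlow's fairness tooling (such as Fairness Indicators) is sliced evaluation: comparing metrics across subgroups rather than only in aggregate. Below is a hand-rolled sketch of that idea with plain Keras and NumPy; the synthetic data and "group" feature are illustrative assumptions, not a TensorFlow Responsible AI API.

```python
# Sketch of sliced evaluation: train a model, then report accuracy per group.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4)).astype("float32")
group = rng.integers(0, 2, size=1000)            # protected attribute (toy)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

preds = (model.predict(X, verbose=0).ravel() > 0.5).astype("float32")
for g in (0, 1):
    mask = group == g
    print(f"group {g}: accuracy {np.mean(preds[mask] == y[mask]):.3f}")
```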
The Responsible AI Toolbox enables AI practitioners to design flexible and custom model assessment and decision-making workflows.
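The sketch below outlines the toolbox's typical workflow as documented for the responsibleai and raiwidgets packages: wrap a model and data in an RAIInsights object, add the assessments you want, compute them, and open the dashboard. The toy data and model are illustrative assumptions, and the exact keyword arguments should be checked against the current documentation.

```python
# Rough Responsible AI Toolbox workflow sketch (classification task).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

data = load_breast_cancer(as_frame=True).frame
train, test = train_test_split(data, random_state=0)
model = RandomForestClassifier(random_state=0).fit(
    train.drop(columns="target"), train["target"]
)

rai_insights = RAIInsights(model, train, test, target_column="target",
                           task_type="classification")
rai_insights.explainer.add()        # model explanations
rai_insights.error_analysis.add()   # where the model errs most
rai_insights.compute()

ResponsibleAIDashboard(rai_insights)  # interactive assessment dashboard
```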
The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks, and improves alignment of AI projects to RAI best practices.
There are many tools and metrics that help AI actors build and deploy trustworthy AI systems. However, these tools and metrics are often hard to find.
The Atlas of Automation aims to shed light on the black box of automated decision-making. It offers a first directory of examples of algorithmic systems used in Switzerland.