The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process for identifying, tracking, and improving the alignment of AI projects with RAI best practices and the DoD AI Ethical Principles, while capitalizing on opportunities for innovation. The Toolkit offers an intuitive flow that guides users through tailorable, modular assessments, tools, and artifacts across the AI product lifecycle. This process enables traceability and assurance of responsible AI practice, development, and use.


Source: Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO)


Ethical Principles: Trustworthy AI