Use case cards: a use case reporting framework inspired by the European AI Act
Despite recent efforts by the Artificial Intelligence (AI) community to move towards standardised procedures for documenting models, methods, systems or data...
The first 100 Brilliant Women in AI Ethics list was published in 2018 and since then, we’ve published a list every year to recognize the diverse voices toili...
The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its v...
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In ...
This article discusses transparency obligations introduced in the Artificial Intelligence Act, the recently proposed European regulatory framework for artifi...
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed...
We have developed capAI, a conformity assessment procedure for AI systems, to provide an independent, comparable, quantifiable, and accountable assessment of...
In April 2021, the European Commission presented the AI Act, its proposed legislative framework for Artificial Intelligence, which sets the necessary regulato...
We introduce the Foundation Model Transparency Index to assess the transparency of foundation model developers. We design the Index around 100 transparency i...
UNESCO’s first-ever global standard on AI ethics - the ‘Recommendation on the Ethics of Artificial Intelligence’ - was adopted by all 193 Member States in No...
The Recommendation, in paragraph 49, states: ‘UNESCO recognizes that Member States will be at different stages of readiness to implement this Recommendation,...
The Recommendation on the Ethics of AI provides a framework to ensure that AI developments align with the promotion and protection of human rights and human ...
IEEE CertifAIEd is a certification program for assessing ethics of Autonomous Intelligent Systems (AIS) to help protect, differentiate, and grow product adop...
Data protection and security breaches were reported to the FDPIC by well-meaning hackers commonly referred to as ‘white hat hackers’. For example, after rece...
This page provides details about the CDEI portfolio of AI assurance techniques and how to use it. (…)
This website contains the Assessment List for Trustworthy AI (ALTAI). ALTAI was developed by the High-Level Expert Group on Artificial Intelligence set up by...
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial in...
The AI assessment catalog of Fraunhofer IAIS offers a structured guideline that can be used to concretize abstract quality standards into application-specifi...
This report was commissioned to (1) inform policy makers and regulatory stakeholders about noteworthy approaches to AI risk assessment, including leading pra...
Switzerland’s energy and health sectors will see the rapid adoption of AI systems in the coming years. Anticipated benefits include better, personalised heal...
In a proof-of-concept collaboration between the IEEE Standards Association (IEEE SA) and the City of Vienna, the newly developed IEEE CertifAIEd framework wa...
Cybersecurity has evolved significantly as technology has advanced. Increased interconnectivity and digital transformation have both significantly altered ho...
This study provides the first systematic and quantitative evaluation of the Swiss cybersecurity research landscape. It aims to guide policymakers, universiti...
The four-quadrant diagram is the core of Technology Outlook: the chart shows four fields with different categories. The technologies are positioned ac...
In-depth analysis of the latest tech trends, with highlights of the top research from the world’s standard-bearer for computing professionals. (…)
One of the reasons for the rapid acceleration of cybercrime is the lower barrier to entry for malicious actors. Cybercriminals have evolved their business mo...
As the use of AI evolves, boards and the C-suite should consider these key AI-related issues attracting US policymaker attention. (…)
Deloitte’s 14th annual Tech Trends report explores the impact of emerging technologies on groundbreaking innovations and foundational business areas. Stories...
As Artificial Intelligence (AI) advances rapidly, so does its potential to be used in cybercrime. (…)
CyberPeace Institute principles on the use of AI: data collection and processing, knowledge production, talent management, stakeholder engagement, and cri...
As organizations struggle with security breaches, cyber AI can be a force multiplier, enabling security teams not only to respond faster than cyberattackers ...
This white paper details our plans for implementing a pro-innovation approach to AI regulation. We’re seeking views through a supporting consultation. (…)
Experts believe that Artificial Intelligence (AI) and Machine Learning (ML) have both negative and positive effects on cybersecurity. AI algorithms use train...
This is a compilation of books, papers, and resources that AI Ethicists recommend to help you manage your AI initiatives responsibly or, more generally, to get to ...
The IEEE Global Initiative’s mission is, ‘To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educate...
The project has then gathered a wealth of online resources - the RRI Toolkit - to help stakeholders across Europe put Responsible Research and Innovation int...
The Global AI Standards Repository is the world’s first centralized, transparent notification system that captures AI and Autonomous and Intelligent Systems ...
The new national cyberstrategy (NCS) was approved by the Federal Council during its meeting on 5 April 2023 and by the cantons during today’s plenary session...
The AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the des...
The Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF) Core (Tables 1 - 4 in AI RMF 1.0). S...
The impact of artificial intelligence or autonomous and intelligent systems (A/IS) on humans is measured by this standard. The positive outcome of A/IS on hu...
A set of ontologies with different abstraction levels that contain concepts, definitions, axioms, and use cases that assist in the development of ethically d...
Specific methodologies to help employers in accessing, collecting, storing, utilizing, sharing, and destroying employee data are described in this standard. ...
The requirements for a systems/software engineering process for privacy-oriented considerations regarding products, services, and systems utilizing employee,...
Measurable, testable levels of transparency, so that autonomous systems can be objectively assessed, and levels of compliance determined, are described in th...
A set of processes by which organizations can include consideration of ethical values throughout the stages of concept exploration and development is establi...
The EASA Artificial Intelligence Roadmap 2.0 outlines the Agency’s vision for the safety and ethical considerations of AI in aviation. (…)
The new home of the AI standards community. Dedicated to knowledge sharing, capacity building, and world-leading research, the Hub aims to build a vibrant an...
This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI applicati...
This extensible open source toolkit can help you comprehend how machine learning models predict labels by various means throughout the AI application lifecyc...
The AI Privacy 360 Toolbox includes several tools to support the assessment of privacy risks of AI-based solutions, and to help them adhere to any relevant p...
This site provides an overview of the FactSheet project, a research effort to foster trust in AI by increasing transparency and enabling governance. (…)
Learn how to integrate Responsible AI practices into your ML workflow using TensorFlow. TensorFlow is committed to helping make progress in the responsible de...
The Responsible AI Toolbox enables AI practitioners to design flexible and custom model assessment and decision-making wo...
The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks, and improves alignment of AI projects to RAI be...
There are tools and metrics out there that help AI actors to build and deploy AI systems that are trustworthy. However, these tools and metrics are often har...
The Atlas of Automation aims to shed light into this black box. It offers a first directory of examples of algorithmic systems that are used in Switzerland, ...
The Trust Valley is a public-private partnership aimed at promoting the excellence of the Lake Geneva region in the field of digital trust and cybersecurity....
The Swiss Academy of Engineering Sciences SATW is the most important network of experts for engineering sciences in Switzerland and is in contact with the hi...
Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances posi...
The Cyber-Defence Campus (CYD Campus) was founded in 2019 in order to strengthen Switzerland’s cyber defence and to anticipate cyber developments more quickl...