US Secretary of Commerce Gina Raimondo has announced the creation of the US AI Safety Institute Consortium (AISIC). The consortium brings together AI developers, practitioners, academics, government and industry researchers, and civil society organisations to advance the development and deployment of safe and trustworthy AI systems.
Housed under the US AI Safety Institute (USAISI), the consortium will contribute to priority actions set out in President Biden's landmark Executive Order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.
“The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the US AI Safety Institute Consortium is set up to help us do,” said Secretary Raimondo.
“To keep pace with AI, we have to move fast and make sure everyone – from the government to the private sector to academia – is rowing in the same direction,” said Bruce Reed, White House Deputy Chief of Staff.
The consortium includes more than 200 member companies and organisations, among them leading AI firms, startups, academic and civil society groups, and other practitioners working with AI. It represents the largest collection of test and evaluation teams assembled to date and will focus on laying the foundations for a new measurement science in AI safety. It also brings in state and local governments and non-profits, and will work with international partners to develop interoperable safety tools worldwide.