The US, UK, and other nations have struck an agreement on guidelines for making AI “secure by design.”
On Sunday, the US, UK, and over a dozen other nations unveiled what a senior US official called the first comprehensive international agreement on safeguarding AI against rogue actors, encouraging businesses to develop AI systems that are “secure by design.”
The 18 nations said in a 20-page paper released on Sunday that businesses creating and using AI must develop and deploy the technology in a way that protects customers and the wider public from misuse.
The mostly general recommendations in the non-binding pact include safeguarding data against tampering, monitoring AI systems for abuse, and vetting software suppliers.
However, Jen Easterly, the head of the U.S. Cybersecurity and Infrastructure Security Agency, noted that it was significant that so many nations were endorsing the notion that AI systems should prioritize safety.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters. She added that the standards represent “an agreement that the most important thing that needs to be done at the design phase is security.”
The accord is the latest in a series of global government initiatives, most of them lacking teeth, to shape the development of AI, a technology whose impact on business and society at large is becoming increasingly apparent.
The 18 nations that signed the new guidelines include the US, the UK, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
The agreement addresses concerns about preventing hackers from hijacking AI technology and offers recommendations such as releasing models only after sufficient security testing.
It does not tackle thornier questions, such as what counts as an appropriate use of AI or how the data that feeds these models is collected.
The rise of AI has prompted a range of concerns, including fears that the technology could be used to undermine democracy, turbocharge fraud, or drive steep job losses.
Europe is ahead of the US in developing AI regulations. France, Germany, and Italy recently reached a consensus on how AI should be governed, supporting “mandatory self-regulation through codes of conduct” for so-called foundation models of AI, which are designed to produce a broad range of outputs.
Although the Biden administration has pressed Congress to pass AI legislation, the polarized body has made little headway.
With a new executive order issued in October, the White House aimed to strengthen national security while lowering the risks that AI poses to workers, consumers, and minority groups.