Given the potential threat posed by artificial intelligence’s (AI) rapid development, safety precautions should be built into systems from the start rather than bolted on later, a top US official said on Monday.
“We’ve normalized a world in which consumers are expected to patch technology products that come off the assembly line with vulnerabilities,” said Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency. “We can’t live in that world with AI.”
“It is moving too fast and it is too powerful,” she said by phone during a meeting in Ottawa with Sami Khoury, director of Canada’s Centre for Cyber Security.
On the same day Easterly spoke, agencies from eighteen countries, including the US, endorsed new British-developed AI cyber security guidelines that prioritize secure design, development, deployment, and maintenance.
“We have to look at security throughout the lifecycle of that AI capability,” Khoury stated.
Leading AI developers agreed earlier this month to work with governments to test new frontier models before release, in an effort to mitigate the risks of this quickly evolving technology.
“I think we have done as much as we possibly could do at this point in time, to help come together with nations around the world, with technology companies, to set out from a technical perspective how to build these capabilities as securely and safely as possible,” Easterly said.