
    U.S., U.K., and 16 Partners Announce Guidelines for Secure AI System Development

    The United States, the United Kingdom, and sixteen other international partners have announced new recommendations for the development of secure artificial intelligence (AI) systems.

    According to the U.S. Cybersecurity and Infrastructure Security Agency (CISA), “the approach prioritises ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organisational structures where secure design is a top priority.”

    The National Cyber Security Centre (NCSC) stated that the objective is to raise the cyber security level of AI and help ensure the technology is created, developed, and used securely.

    The guidelines also build on the U.S. government’s ongoing efforts to manage the risks posed by AI: ensuring that new tools are thoroughly tested before public release, putting safeguards in place to address societal harms such as bias, discrimination, and privacy concerns, and establishing reliable ways for consumers to identify AI-generated content.

    The agreements also require companies to support third-party identification and reporting of vulnerabilities in their AI systems via a bug bounty programme, so that flaws can be found and remedied quickly.

    The most recent recommendations “help developers ensure that cyber security is both an essential precondition of AI system safety and integral to the development process from the outset and throughout, known as a ‘secure by design’ approach,” the NCSC stated.

    This covers all important stages of the AI system development life cycle, including secure design, secure development, secure deployment, and secure operation and maintenance. It requires businesses to model the threats to their systems and to protect their infrastructure and supply chains.

    The authorities stated that the goal is to counter adversarial attacks that target AI and machine learning (ML) systems and attempt to induce undesired behaviour in a number of ways, such as altering a model’s classification, allowing users to take unauthorised actions, and retrieving private data.

    “There are many ways to achieve these effects, such as prompt injection attacks in the large language model (LLM) domain or deliberately corrupting the training data or user feedback (known as ‘data poisoning’),” observed the NCSC.
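    To make the ‘data poisoning’ example concrete, the short Python sketch below is an illustrative aside rather than anything taken from the published guidance; the synthetic dataset, the scikit-learn logistic regression model, and the 30% label-flip rate are all assumptions chosen purely for demonstration. It shows how an attacker who can tamper with training labels can quietly degrade the model trained on them.

        # Illustrative sketch of label-flipping 'data poisoning'.
        # Dataset, model, and the 30% poisoning rate are hypothetical
        # choices for demonstration; they are not from the guidelines.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Clean synthetic binary-classification data.
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Baseline model trained on clean labels.
        clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # An attacker with access to the training pipeline flips a fraction
        # of the training labels (the 'data poisoning' described above).
        poisoned = y_train.copy()
        idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
        poisoned[idx] = 1 - poisoned[idx]

        poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

        print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
        print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))

    Running the sketch typically shows a clear drop in test accuracy for the poisoned model, the kind of quietly induced undesired behaviour the guidelines ask developers to model and defend against across their supply chains.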
