
    OpenAI’s Approach to Catastrophic AI Risks

OpenAI releases its 'Preparedness Framework' to bridge gaps in the study of catastrophic risks from AI. Focused on evaluating advanced models, the framework categorizes risks across cybersecurity; chemical, biological and nuclear threats; persuasion; and model autonomy. Dive into OpenAI's commitment to responsible and safe AI development.

OpenAI releases guidelines for assessing AI hazards.

NEW YORK: On Monday, ChatGPT creator OpenAI released its latest guidelines for assessing the catastrophic risks posed by artificial intelligence in models currently under development.

The announcement comes a month after CEO Sam Altman was sacked by the company's board, only to be reinstated a few days later following pushback from staff and investors.

US media reported that board members had criticised Altman for favouring faster development of OpenAI's technology, even if it meant sidestepping certain questions about its potential risks.

In the "Preparedness Framework" published on Monday, the company states: "We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be." The framework, it says, should help address this gap. A monitoring and evaluation team announced in October will focus on "frontier models", AI programmes under development whose capabilities surpass those of the most advanced existing software.

The team will evaluate each new model and assign it a risk level, from "low" to "critical", in four main categories.

The framework states that only models with a risk score of "medium" or lower are eligible for deployment.

The first category covers cybersecurity and the model's capacity to carry out large-scale cyberattacks.

The second measures how likely the programme is to help create a chemical mixture, an organism (such as a virus) or a nuclear weapon, all of which could cause harm to humans.

The third category assesses the model's power of persuasion, including the extent to which it can influence human behaviour. The final category concerns the model's potential autonomy, in particular whether it could escape the control of the programmers who created it.
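To make the gating rule described above concrete, here is a minimal illustrative sketch in Python. The category and level names are hypothetical labels drawn from this article's description; this is not OpenAI's actual scoring code, only a sketch of the "deploy only if every category scores medium or lower" rule.

```python
# Illustrative sketch only: hypothetical names, not OpenAI's implementation.
# A model is scored in four tracked categories, each from "low" to "critical";
# per the framework as described above, it may be deployed only if no category
# scores above "medium".

RISK_LEVELS = ["low", "medium", "high", "critical"]
CATEGORIES = ["cybersecurity", "cbrn", "persuasion", "autonomy"]


def can_deploy(scores):
    """Return True if every category score is 'medium' or lower."""
    worst = max(RISK_LEVELS.index(scores[c]) for c in CATEGORIES)
    return worst <= RISK_LEVELS.index("medium")


# Example with made-up scores: no category exceeds "medium", so it passes.
example_scores = {
    "cybersecurity": "medium",
    "cbrn": "low",
    "persuasion": "medium",
    "autonomy": "low",
}
print(can_deploy(example_scores))  # True
```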

