    Meta and Partners Develop Standards for AI Image Tagging

    Meta is teaming up with other tech giants to establish industry-wide standards for detecting and labeling AI-generated images, with the goal of improving transparency and minimizing the spread of false content across its platforms.

    Meta is advocating for standardised, industry-wide labels for photographs created by AI.

    San Francisco: Meta announced on Tuesday that it is collaborating with other technology companies to develop standards that will enhance its ability to identify and categorise artificial intelligence-generated photos shared by its vast user base.

    The Silicon Valley social media giant plans to implement a system within a few months to recognise and label AI-generated photographs shared across its Facebook, Instagram, and Threads platforms.

    “It is not flawless; it will not address every aspect; the technology is not completely developed,” Meta’s head of global affairs Nick Clegg told AFP.

    Meta has been using both visible and invisible tags on photos generated by its AI algorithms since December. Additionally, Meta aims to collaborate with other companies to enhance user transparency, according to Clegg.

    “That is the reason why we have been collaborating with industry partners to establish shared technical standards that indicate when a piece of content has been generated using AI,” the company said in a blog post.
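
    To make the idea of an invisible, machine-readable marker more concrete, the sketch below uses Python's Pillow library to write and then read a simple provenance flag in a PNG image's metadata. The field names ("ai_generated", "generator") and the example model name are hypothetical and are not Meta's mechanism or any particular industry standard; real schemes pair signed metadata with embedded watermarks that are harder to strip than a plain text chunk, which disappears if the file is simply re-saved.

        # Hypothetical illustration of tagging an image as AI-generated via metadata.
        # Not Meta's actual mechanism or any specific industry standard; assumes PNG input.
        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        def tag_as_ai_generated(src_path: str, dst_path: str) -> None:
            """Copy an image, adding an invisible metadata marker flagging it as AI-generated."""
            image = Image.open(src_path)
            metadata = PngInfo()
            # "ai_generated" and "generator" are made-up field names for illustration only.
            metadata.add_text("ai_generated", "true")
            metadata.add_text("generator", "example-image-model")
            image.save(dst_path, pnginfo=metadata)

        def is_tagged_ai_generated(path: str) -> bool:
            """Check for the marker, roughly as a platform might before labelling an upload."""
            with Image.open(path) as image:
                # PNG text chunks are exposed on the .text dict of a PngImageFile.
                return getattr(image, "text", {}).get("ai_generated") == "true"

    A platform-side check would call something like is_tagged_ai_generated on each upload and attach a visible "AI-generated" label when the marker is found, which is the behaviour Meta describes for Facebook, Instagram, and Threads.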

    This will be accomplished with companies that Meta already collaborates with on AI standards, such as OpenAI, Google, Microsoft, Midjourney, and other organisations participating in the competitive race to dominate the emerging market, according to Clegg.

    However, according to Clegg, although corporations have begun incorporating “signals” in photos produced by their AI tools, the industry has been less prompt in adding such identifying markers to audio or video generated by AI.

    Clegg acknowledges that this extensive labelling, utilizing imperceptible markers, “will not eliminate” the possibility of misleading images being generated, but asserts that “it would certainly reduce” their spread “within the constraints of current technology.”

    Meanwhile, Meta suggested that individuals should approach internet content with a critical mindset, verifying the credibility of the accounts sharing it and being mindful of any features that appear unusual or suspicious.

    Politicians and women have been the main targets of AI-generated manipulated images, with digitally altered nude pictures of singer Taylor Swift recently circulating widely on X, formerly known as Twitter.

    The emergence of generative AI has also sparked concerns that individuals could use ChatGPT and other tools to sow political turmoil through disinformation or AI-generated clones.

    Last month, OpenAI said that it would “forbid any political organizations or individuals from using our platform.”

    Meta already requires advertisers to disclose the use of AI in creating or modifying images or audio in political advertisements.
