How AI Chatbots Collect and Share Your Information

AI chatbots such as OpenAI’s ChatGPT and Google’s Bard have become far more capable than their predecessors, raising privacy concerns for the average internet user. These chatbots are trained on massive amounts of data, including material from Common Crawl, which holds years and petabytes of data scraped from the open web. While these models are trained on a filtered portion of Common Crawl’s data, the sheer size of the dataset makes it impossible to sanitize completely.

Information from third-party sources can also end up in a chatbot’s training set, and the chatbot may later regurgitate that sensitive data. Bloomberg columnist Dave Lee tweeted that ChatGPT handed out his exact phone number when someone asked it how to reach him on the encrypted messaging platform Signal, a reminder that the information these models have absorbed is worth taking seriously.

    Unintentional Data Collection:

It’s unlikely that OpenAI or Google deliberately collects sensitive information such as healthcare data and attributes it to individuals in order to train their models. However, such information could still end up in their training sets inadvertently. OpenAI did not respond to inquiries about how it handles personally identifiable information in its training sets; Google says Bard has guardrails designed to prevent it from sharing personally identifiable information during conversations.

    Generative AI Privacy Risks:

Generative AI also poses a privacy risk through use of the software itself. OpenAI’s privacy policy lists several categories of standard information it collects on users, some of which could be identifying. ChatGPT warns users that conversations may be reviewed by its AI trainers to improve the system. Bard does not have a standalone privacy policy; instead, it falls under the blanket privacy document that covers other Google products.

    Sensitive Information Warning:

Users should treat these chatbots with suspicion and assume that anything they type is fair game for the companies to use for their own benefit. OpenAI discourages users from sharing sensitive information, but the only way to remove personally identifying information already provided to ChatGPT is to delete your account, which permanently removes all associated data.

ChatGPT Error Raises Concerns:

While experts are not especially worried about chatbots learning from individual conversations, how that conversation data is stored and secured is a reasonable concern. ChatGPT was briefly taken offline in March after a programming error exposed users’ chat histories, raising questions about the security of these chat logs.

    Data Protection Procedures:

To build and sustain user trust, these companies must be transparent up front about their privacy policies and data protection procedures, says Rishi Jaitly, professor and distinguished humanities fellow at Virginia Tech. Meanwhile, pressing the “clear conversations” button in ChatGPT does not actually delete the user’s data, according to the service’s FAQ page, and OpenAI is unable to delete specific prompts.

The privacy problems that generative AI poses for the average internet user come down to how these bots are trained and how much we interact with them. Users should be careful about what they share with these chatbots, as it may end up being used for purposes they never intended.
