If you ask ChatGPT what it does with the personal data someone contributes to a conversation, this is its answer: “As a language model developed by OpenAI, I do not have the ability to process, store, or use users' personal information unless it is provided to me in the context of an individual conversation. However, OpenAI, the company that owns ChatGPT, can use that information in certain cases, in accordance with the company's privacy policy.”
Only specific types of data are affected, and only in certain cases: OpenAI account data, such as the username or payment card information; personal information the user exchanges with ChatGPT or the company; user information gathered when interacting with OpenAI's accounts on social networks such as Instagram, Facebook, Medium, Twitter, YouTube, and LinkedIn; and data the user provides when taking part in the company's surveys or events. With this information, the company can improve its products and services, create new developments, conduct research, establish direct contact with users, comply with its legal obligations, and prevent fraud, misuse of the service, and criminal activity.
This sensitive issue does not affect only the new generative AI. Sending an email to a friend via Gmail, or sharing photos or documents in cloud spaces like OneDrive, are everyday actions that allow the providers of these services to exchange information with third parties. Companies such as OpenAI, Microsoft, and Google may disclose information to service providers to meet their business needs, as described in their privacy policies.
However, with some exceptions, companies cannot use personal data for other purposes. Ricard Martínez, professor of constitutional law at the University of Valencia, points out that this is strictly prohibited under the General Data Protection Regulation (GDPR): “They expose themselves to high regulatory risks. The company can be punished with a fine equivalent to 4% of its global annual turnover.” In those cases, the data may only be used for public-interest purposes allowed by the regulation, such as archiving, historical or statistical purposes, or scientific research, or where an adequacy decision applies.
Generative AI, like ChatGPT, is fed vast amounts of data, some of it personal, and from that information it creates original content. In Spain, these tools receive 377 million visits a year, according to one study. They analyze the information collected, respond to user inquiries, and improve their services, although the tool “does not understand the documents it is fed,” warns Borja Adsuara, a lawyer who specializes in digital law.
Recommendation: Be very conservative with chatbots
The Spanish Data Protection Agency (AEPD) advises users to refuse chatbots that request unnecessary registration data; that request consent without specifying the purpose for which the data will be processed, or without allowing it to be withdrawn at any time; or that transfer data to countries that do not provide adequate guarantees. It also recommends “limiting the personal data that is disclosed, not providing third parties' personal data if there are suspicions that the processing will go beyond the domestic sphere, and bearing in mind that there is no guarantee that the information provided by the chatbot is truthful.” The possible consequences are “emotional harm, misinformation, or disinformation.”
Experts agree on the same advice: do not share personal information with an AI tool. ChatGPT itself warns as much: “Please note that if you share personal, sensitive, or confidential information during a conversation, you should be careful. It is recommended not to provide sensitive information through online platforms, even in conversations with language models like me.”
Deletion of personal data
If personal data has already been shared with an AI despite these recommendations, you can try to delete it. OpenAI provides a form on its website for requesting removal. The bad news is that the company warns that “submitting a request does not guarantee that your information will be removed from ChatGPT results.” The form must be completed with the real data of the person concerned, who must swear in writing to the truth of what is stated. In addition, the information on the form may be checked against other sources to verify its accuracy. Microsoft also offers a privacy dashboard for accessing and deleting personal data.
As for legal remedies, Martínez explains that “the user can exercise the right to erasure if he believes that his personal data has been processed unlawfully, or is inaccurate or incomplete. He can unsubscribe and withdraw his consent, which is free and unconditional, and the company is obliged to delete all the information.” The specialist also emphasizes that there is a right to portability: “More and more applications allow users to download their entire history and take it with them in a compatible format.” The regulation also recommends the anonymization of personal data.
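In practice, the right to portability means exporting a user's history in a machine-readable format. The sketch below is a hypothetical Python illustration; the schema fields (`exported_at`, `conversations`) are assumptions for the example, not any service's real export format:

```python
import json
from datetime import datetime, timezone

def export_history(conversations: list) -> str:
    """Serialize a user's conversation history to portable JSON.

    A machine-readable export like this lets the user take their
    data to another service, as the right to portability intends.
    """
    payload = {
        # Timestamp of the export, in ISO 8601 with UTC offset.
        "exported_at": datetime.now(timezone.utc).isoformat(),
        # The user's full history, carried over verbatim.
        "conversations": conversations,
    }
    # ensure_ascii=False keeps accented characters readable.
    return json.dumps(payload, ensure_ascii=False, indent=2)
```

JSON is used here simply because it is a widely supported, self-describing interchange format; any structured, documented format would satisfy the same goal.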
Anonymization, according to the AEPD, consists of transforming personal data into data that cannot be used to identify any person. In its guide on data processing that incorporates artificial intelligence (AI), the agency explains that anonymization is one way to minimize data usage, ensuring that only the data necessary for the given purpose are used.
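A minimal sketch can make the idea concrete. The Python code below is an illustrative example only, not an implementation of the AEPD's guidance: the function names, regular expressions, placeholders, and salt are all assumptions. It redacts direct identifiers (email addresses, phone-like numbers) from a text before it leaves the user's machine, and maps user IDs to tokens that cannot be reversed without the local salt.

```python
import hashlib
import re

def redact_identifiers(text: str) -> str:
    """Replace direct identifiers with placeholders before sharing text.

    This is a deliberately simple illustration; real anonymization must
    also consider indirect identifiers and re-identification risk.
    """
    # Replace email addresses with a fixed placeholder.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Replace phone-like runs of 9 or more digits (spaces/dashes allowed).
    text = re.sub(r"\b(?:\d[\s-]?){9,}\d?\b", "[PHONE]", text)
    return text

def pseudonymize_id(user_id: str, salt: str = "local-secret") -> str:
    """Map a user ID to a stable token; irreversible without the salt."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]
```

Note that salted hashing is pseudonymization rather than full anonymization: whoever holds the salt can still link tokens back to users, which is why regulators treat the two concepts differently.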
New Artificial Intelligence Law
Once the new European law on artificial intelligence comes into force, companies that manage personal data will have to bear three key points in mind, as the consulting firm Entelgy explained to this newspaper: they will have to disclose how the algorithm works and what content it generates, in a European register; although it is not mandatory, establishing human oversight mechanisms is recommended; and finally, large language models (LLMs) will have to incorporate security systems, and their developers will be obliged to be transparent about the copyrighted material they use.
However, the new rule does not conflict with the General Data Protection Regulation. As Martínez explains: “An AI that processes personal data, or that will generate personal data in the future, will never be able to reach the market if it does not ensure compliance with the GDPR. This is especially evident in high-risk systems, which must implement a data governance model, as well as operational and usage records that ensure traceability.”
The next step for AI, says Adsuara, would be for the personal information collected to be used in a kind of personal repository: “A place where each person has their own repository of documents containing personal data, but the information does not leave it and, therefore, is not used to feed global generative artificial intelligence.”