OpenAI Confronts Privacy Complaints Regarding Chatbot Accuracy 

OpenAI is facing a privacy complaint from the advocacy group NOYB, which claims the company failed to correct erroneous information generated by its chatbot, potentially violating EU privacy regulations. 

ChatGPT, which sparked the generative AI boom when it launched in late 2022, can simulate human conversation and perform tasks such as summarizing texts, writing poems, and suggesting ideas for theme parties.

According to NOYB, the public figure who filed the complaint asked ChatGPT for his date of birth and repeatedly received inaccurate answers; instead, the chatbot should have told him that it does not hold such data. He then asked the company to delete or correct the information, but OpenAI refused, saying it was not possible to do so.

The complaint, filed with the Austrian data protection authority, asks the regulator to investigate the Microsoft-backed company's data processing practices and the measures it takes to ensure the accuracy of personal data handled by its large language models.

NOYB's data protection lawyer, Maartje de Graaf, stressed that technology must comply with legal requirements, arguing that a system that cannot produce accurate and transparent results should not be used to process data about individuals.

For its part, Sam Altman's OpenAI has acknowledged that ChatGPT sometimes produces answers that sound credible but are incorrect or nonsensical, a problem the company considers challenging to fix.

This complaint highlights the critical importance of ensuring the accuracy and transparency of AI-generated information, especially regarding personal data, to comply with privacy regulations. 

This is also not the first time OpenAI has faced such complaints. Users have previously raised concerns about ChatGPT retaining their personal information and conversations for extended periods, and in other cases ChatGPT was reported to have generated inaccurate content related to medical advice and historical events.

These ongoing issues underscore the challenges OpenAI faces in balancing innovation with user privacy and reliability. 

