One Google engineer is urging the broader tech industry – and Google’s workforce – to adopt a set of basic security habits to guard against the privacy risks opened up by the surging adoption of AI tools.
Corporate defenses are already struggling to contain these risks, so where does that leave the average user?
Harsh Varshney, a Google engineer in New York who works on Chrome security, sat down with Business Insider to share ways people can use AI while protecting personal and work data.
The rapid rise of such tools has raised new questions about AI and privacy, especially as more personal details flow through chatbots without much thought.
How Can Individuals Stay Safe While Using AI?
For Varshney, choosing the best AI for privacy starts with mindset: he treats every chatbot as if it were public, even when the tool feels friendly or personal.
“Sometimes, a false sense of intimacy with AI can lead people to share information online that they never would otherwise,” he said, warning users not to share financial, medical, or identity details.
Another important step is understanding a chatbot’s privacy policy on storing conversations and using them to improve future models; many public chatbots do exactly that, which can expose sensitive data.
As the link between AI and privacy becomes more pervasive, users have a right to know where their data is sent. Varshney compares it to speaking loudly in a cafe rather than behind closed office doors.
Enterprise AI systems are designed to limit how much of a user’s conversations are reused, making them safer for work tasks. Generative AI data privacy is still at risk, though: even supposedly secure tools will remember past chats if history is allowed to stretch back too far.
“Once, I was surprised that an enterprise Gemini chatbot was able to tell me my exact address,” Varshney said. He later realized the tool had stored it from an earlier email draft, an example of why privacy-preserving AI needs active user habits.
He recently started clearing his chat history from time to time and using temporary modes, habits he believes help anyone looking for the best AI for privacy. Another good idea is to check the AI privacy settings.
Varshney suggests switching off any options that allow conversations to be used for training. These practices speak to broader concerns about privacy-preserving AI as workers increasingly rely on these tools for everyday tasks.
The debate around AI and privacy is also ethical. Questions of AI privacy ethics now shape how companies design and release new models, and experts warn of generative AI privacy concerns such as hidden data retention and unclear consent.
For users, data privacy and security in AI come down to simple actions: not oversharing information, cleaning up past histories, and using only trusted tools.
Without these steps, AI privacy violations will continue, exposing data to hackers and brokers.
Governments are starting to respond with AI privacy regulations, but Varshney says personal caution still matters. He suggests reviewing documents such as the OpenAI privacy policy to see how data is stored and whether it can be reused.
“AI technology is incredibly powerful, but we must be cautious to ensure our data and identities are safe when we use it,” said Varshney. After all, privacy-preserving AI is not a technical feature; it is a collective responsibility.