Trading Your Face for a Ghibli Filter? Here’s What You’re Really Giving Up



In ChatGPT, We Trust

OpenAI’s GPT-4o image generation model took the internet by storm, with users eagerly generating ‘Ghiblified’ versions of their personal photos.


While the ethical implications of this trend continue to spark debate, there is another crucial aspect that users may have overlooked while indulging in the trend—privacy.

Eamonn Maguire, head of anti-abuse and account security at Proton, told AIM, “Sharing images with AI chatbots, just like sharing any sensitive information, poses several privacy and security risks that people may not be aware of.” He added, “The trend of creating a ‘Ghibli-style’ image has seen many more people feeding OpenAI photos of themselves and their families.”

What Happens to the Photos After Users Upload Them?

OpenAI’s policy on handling data states, “When you use our services for individuals such as ChatGPT, Sora, or Operator, we may use your content to train our models.”

In other words, OpenAI’s own policy confirms that users’ data, including files, images, and audio, can be used to train its models.

Commenting on this, Maguire stated, “Sharing your images directly with OpenAI opens a Pandora’s box of issues. Aside from the risks of data breaches, once you share personal information with AI, you lose control over how it is used.” He noted that those photos are then used to train LLMs, which means they could be used to generate content that is defamatory or even used to harass individuals.

“Not only that, but many AI models, particularly those used in image generation, rely on huge training datasets,” Maguire further explained.

“This means that in some cases, photos of you, or your likeness, may be used without consent. More nefariously, these images could be used to train facial surveillance AI without your permission. Lastly, your data could be used for personalised ads, or sold to third parties.”

In an exclusive chat with AIM, Joel Latto, a threat advisor at F-Secure, said, “When people upload their photos to ChatGPT for trendy, Ghibli-style transformations, they’re essentially trading their likeness for a fleeting moment of novelty—often without realising how little they’re getting in return.”

Latto explained that this is not a new phenomenon. He observed similar risks with apps like Google Arts and Culture back in 2018 and FaceApp in 2019—both of which prompted warnings from F-Secure about privacy erosion.

This is why F-Secure has long advised against enabling facial recognition features on social media. “What sets this apart with large language models (LLMs) like ChatGPT is the potential scale of exploitation: once your image is in the system, it could theoretically be used to generate highly accurate depictions of you by others. That’s a steep price to pay for a passing fad,” Latto highlighted.

Data Collection is Nearly Impossible to Avoid, But Knowing the Concern Helps

Security experts acknowledge that avoiding data collection entirely is almost impossible, but they recommend thoroughly researching the privacy policy of any AI tool before using it.

Sooraj Sathyanarayanan, a security researcher, told AIM that ChatGPT and comparable services typically operate under broad terms that give companies extensive rights over uploaded content. According to him, the data can potentially be used for model training, product improvement, or other purposes that are not immediately obvious to users.

“The real concern isn’t just the immediate use, but the downstream data lifecycle that remains opaque to most users. Your photos contain biometric data and potentially reveal sensitive contexts you might not want incorporated into future AI systems,” Sathyanarayanan stressed.
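Beyond the image pixels themselves, photos often carry hidden EXIF metadata such as GPS coordinates, device model, and capture timestamps, which is part of the sensitive context Sathyanarayanan alludes to. As an illustrative mitigation, and not something recommended in the article itself, a user can strip a JPEG’s EXIF block before uploading it anywhere. The sketch below walks the JPEG marker stream and drops APP1 (EXIF) segments using only the Python standard library; the function name is my own.

```python
def strip_jpeg_exif(data: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) segments from JPEG bytes by walking the marker stream."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")  # keep the Start-Of-Image marker
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            # No marker here: we are past the header segments; copy the rest verbatim.
            out += data[i:]
            break
        marker = data[i + 1]
        if marker == 0xDA:
            # SOS: entropy-coded scan data follows; copy everything through EOI.
            out += data[i:]
            break
        # Segment length is big-endian and includes its own two length bytes.
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1:
            # APP1 carries EXIF (GPS, device, timestamps): skip it entirely.
            i += 2 + seg_len
        else:
            out += data[i:i + 2 + seg_len]
            i += 2 + seg_len
    return bytes(out)
```

This only removes metadata; it does nothing about the biometric information in the image content itself, which is the harder problem the experts describe.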

The obvious answer is to stop using tools like ChatGPT altogether, or to share only photos that users are comfortable seeing repurposed in any form. Awareness of the privacy implications should help users make informed decisions about what they share on the internet, or with any service.



Ankush Das

I am a tech aficionado and a computer science graduate with a keen interest in AI, Open Source, and Cybersecurity.
