There are three core problems with consent in AI: the scope, the temporality, and the autonomy trap.

To some, privacy is a myth; to others, it is strictly non-negotiable. Either way, privacy policies and consent play a central role as users interact with AI systems in today’s world.
Giada Pistilli, principal ethicist at Hugging Face, shared her concern in a blog post titled ‘I clicked “I Agree”, but what am I really consenting to?’. The post is an interesting analysis of what users consent to, where the problem lies, and what can be done to address it.
Pistilli’s argument revolves around the difference between the traditional understanding of consent, as an informed agreement to the collection and use of data, and the reality of how the data is fed into AI systems.
AIM consulted experts to determine whether the traditional privacy consent system is sufficient in an AI-driven world.
How Complex is Too Complex?
“It’s a complex issue. Legally, users consent to how their data will be handled by agreeing to the Terms of Service and Privacy Policy before using any AI system—so, in theory, they’re informed,” Joel Latto, a threat advisor at F-Secure, told AIM.
Latto added, “In practice, though, both companies and users know that almost no one reads those dense documents.” The consent checkbox serves as a legal shield for the company rather than a safeguard for users, he warned.
Eamonn Maguire, head of anti-abuse and account security at Proton, told AIM, “Just like the concern over the amount of data that big tech collects from us while we browse online, the sheer number of functions that AI is being applied to means the more sensitive data it handles, the more difficult it is for people to avoid sharing their information with AI.”
Maguire expressed concern over the amount of power and data being accumulated in the hands of a few AI companies. He stated, “There needs to be a change – before it’s too late.”
In her blog post, Pistilli explained that there are three core problems with consent in AI: the scope, the temporality, and the autonomy trap.
The scope problem means that users cannot predict how their data will be used, even when companies ask for permission. Pistilli cites the example of a voice actor who agrees to record an audiobook: they can never know whether an AI trained on their voice might later be used to make political endorsements, offer financial advice, and more.
The second issue she highlights, temporality, is that AI creates an open-ended relationship between users and their data. Once the data is fed into a model, the user will find it challenging to extract its influence on the AI system.
The third concern, the autonomy trap, is that a user agrees to an AI’s privacy policy without being able to consider how the data might be used in the future.
Pistilli shared the example of Target, a retail company whose predictive analytics revealed a teenage girl’s pregnancy before her father knew — an instance of consented data being used by algorithms to make inferences users never anticipated.
Current Privacy Consent Models Fail
Sooraj Sathyanarayanan, a security researcher, told AIM that existing privacy consent models fail for AI systems because they present complex legal agreements that most users do not read, assume data uses are known at collection time, and offer only binary accept/reject choices.
Pistilli wrote that current consent frameworks, such as the European GDPR, often fail to adequately address these complex data flows and their privacy implications.
What Can Be Done About It?
Latto advocated an opt-in model, in which user data is not fed into training datasets unless explicitly permitted. He noted that such an approach might slow the development of LLMs, which is why companies avoid it.
“Take DeepSeek for example: when it surged in popularity overnight, it launched with virtually no privacy controls, likely by design, yet users flocked to it anyway. This highlights a critical gap in user education, which I’m personally committed to addressing in my own work,” he said.
Sathyanarayanan outlined to AIM his idea of an improved system, one that would require detailed disclosure of a system’s privacy impact, explain the risks and uses of data in simpler terms, and introduce granular permissions so users can control data sharing.
Additionally, he highlighted the need for mechanisms to revoke consent as systems evolve, and for independent oversight to ensure AI systems comply with their stated purposes.
Maguire told AIM, “Privacy policies and consent agreements need to be more specific, and the ways people’s data is used should be front-and-centre of any AI’s privacy agreement.”
Ankush Das
I am a tech aficionado and a computer science graduate with a keen interest in AI, Open Source, and Cybersecurity.