- Published on April 16, 2025
- In AI News

Illustration by Nalini Nirad
Tibor Blaho, lead engineer at AIPRM, spotted OpenAI’s new support page, which revealed a new system, where organisations will require verification to use the most advanced models and capabilities in the API, like GPT-4.1. The company calls it the ‘Verified Organisation status’.
“At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely. Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies,” the company stated. “We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”
The verification process requires a valid government-issued ID from one of the more than 200 countries on OpenAI’s supported list. Each ID can verify only one organisation every 90 days.
According to the support page, verifying an organisation will take a few minutes and does not require any spending. “Advancing through usage tiers will still unlock higher rate limits across models,” OpenAI further stated.
However, the page pointed out that the verification may not be available to every organisation.
In that case, an organisation can continue using the OpenAI platform with access to its existing models, and may still gain access to advanced models later without verification. OpenAI adds that the verification process may become available to such an organisation at a later date.
OpenAI recently published a blog post highlighting some of the malicious uses of its AI models and how it has been tackling them. Sharing one such instance in the report, OpenAI stated, “We recently banned a small cluster of accounts operated by threat actors potentially associated with the Democratic People’s Republic of Korea.”
The verification process could be an extra step to ensure that the company’s most capable models are not accessed for malicious use cases. However, could it make the platform restrictive for organisations that are ineligible for verification but still want to harness the full potential of AI?
Ankush Das
I am a tech aficionado and a computer science graduate with a keen interest in AI, Open Source, and Cybersecurity.