
Anthropic continues to capitalize on its growing popularity. The company recently announced Opus 4.7, an update to its AI model with even more powerful and autonomous coding capabilities. Now, users are reporting that Anthropic has introduced a selective identity verification system for its Claude AI assistant.
How Claude’s new ID verification process works
According to Anthropic’s Support blog, the new system isn’t a blanket requirement for everyone. Instead, it targets “a few use cases,” such as users accessing advanced capabilities or those flagged during routine integrity checks.
To power this system, Anthropic partnered with Persona Identities, a San Francisco-based verification platform. When prompted, users must present a physical, government-issued photo ID, such as a passport or driver's license, alongside a live selfie. The process typically takes less than five minutes, but the system strictly rejects digital copies of documents, student IDs, and employee badges.
Targeting policy violators and age restrictions
Anthropic’s public statements were initially somewhat vague. However, spokespeople have since clarified to outlets like Engadget and Business Insider that the checks primarily trigger when the system detects potentially fraudulent or abusive behavior. The “ID filter” focuses on four main categories: usage policy offenders (those who repeatedly bypass rules regarding cybersecurity violations or restricted content), unsupported locations (access attempts from restricted regions like mainland China, Russia, or North Korea), terms of service violators (general breaches of the platform’s legal agreements), and under-18 users.
Privacy concerns
The move has sparked a wave of criticism among users. Many balk at the idea of handing an AI company their biometric data and government documents. The friction is especially galling given that rivals like ChatGPT and Gemini impose no such hurdles for standard use.
Anthropic tried to calm the waters by emphasizing its role as the “data controller.” While Persona collects and processes the images, Anthropic says it does not store the actual ID photos on its own servers. Furthermore, the company explicitly stated that this identity data is never used to train its AI models.
Critics argue that adding this friction could hand a competitive advantage to other AI providers. Some users have even drawn parallels to “Know Your Customer” (KYC) requirements in the banking industry, questioning whether this marks the beginning of a broader trend toward tracked AI usage.
The post Anthropic’s Claude is Getting an Identity Verification System & Users Are Not Happy appeared first on Android Headlines.