The Beginning of Something Different
The news wasn’t officially announced. There were no launch trailers or corporate blog posts. Instead, between January 28 and 29, a flurry of coverage across outlets like Forbes, CoinDesk, and BeInCrypto painted a picture of what might be OpenAI’s most consequential pivot yet. Behind the scenes, the AI powerhouse appears to be working on a new social network. This one, reportedly, wouldn’t just aim to out-feature existing platforms. It would aim to out-human them.
At the centre of this plan is biometric identity. Think of facial recognition or iris scanning as prerequisites to join or participate. The idea is simple, if polarising—stop bots at the gate by verifying that each user is a living person.
What We Know So Far
Though OpenAI hasn’t confirmed the project, reporting has consistently pointed to Sam Altman, the company’s CEO, as a key figure. Altman co-founded Worldcoin, since rebranded as World, a digital identity platform that scans users’ irises to generate a unique ID, and that connection anchors much of the speculation. Several sources suggest that World’s Orb device may be considered for the identity layer of the new platform, with Apple’s Face ID also reportedly under evaluation as a potentially more accessible alternative.
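Reporting describes the Orb’s role only in general terms, but World’s own public materials outline the basic pattern: an iris image is reduced to a binary template (an “iris code”), from which a one-way identifier is derived so the raw biometric need not be retained. The sketch below illustrates only that one-identifier-per-person principle; the function names are hypothetical, and a plain hash stands in for the fuzzy matching and cryptographic machinery a real system would need to cope with noisy scans.

```python
import hashlib

def iris_code_from_scan(scan: bytes) -> bytes:
    """Stand-in for the feature-extraction step that turns a raw iris
    image into a stable binary template (an "iris code"). Real pipelines
    use specialised computer vision; here we assume the device has
    already produced a stable template."""
    return scan

def derive_person_id(scan: bytes) -> str:
    """Derive a one-way identifier from the template.

    Only the digest is retained, so uniqueness checks become a
    set-membership test on digests rather than on biometric images.
    """
    return hashlib.sha256(iris_code_from_scan(scan)).hexdigest()

# Registration gate: at most one identifier per living person.
_seen_ids: set[str] = set()

def enroll(scan: bytes) -> bool:
    """Return True only if this person has not enrolled before."""
    person_id = derive_person_id(scan)
    if person_id in _seen_ids:
        return False  # same person attempting a second identity
    _seen_ids.add(person_id)
    return True
```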
The team working on the project is reportedly very small—fewer than ten people—according to Forbes and The Verge. This reflects a lean, stealth-mode initiative still in its exploratory phase, not a company-wide rollout.
The market noticed. On January 28, Worldcoin’s token (WLD) surged nearly 40% intraday following the Forbes report, reaching a peak of around $0.63. By the next day, the token had corrected and settled in the $0.52 to $0.55 range—still representing a net gain of approximately 14–16% from pre-rumour levels.
If implemented, the new platform would represent an intersection of three powerful forces: AI, social networking, and verified identity.
Why Biometric Verification Matters
Bot activity has plagued nearly every social platform. A 2023 study from CHEQ, a cybersecurity firm, found that approximately 39.6% of all internet traffic that year came from bots. This includes everything from benign automation to malicious fraud. For advertisers, brands, and individual users, the consequences have been costly. Trust has eroded. Metrics have become suspect. Engagement has become harder to measure.
By introducing biometric verification, OpenAI could attempt to reset the foundation of social interactions online. A network of real people, devoid of automated accounts, would be a substantial differentiator. It would also create new expectations for transparency, accountability, and data ownership in digital communities.
Sam Altman has spoken publicly about the growing sense that much of the internet “feels fake”—a sentiment that has been widely echoed in debates around AI-generated content and engagement bots. The project, if realised, appears to be a direct response to that critique.
Implications for OpenAI
This move would serve two strategic purposes. First, OpenAI would extend beyond building AI tools to controlling the environment in which those tools operate. The implications are significant: by running its models inside a secure, identity-verified platform, OpenAI would gain direct control over who accesses them, how content is produced, and how communities are managed.
Second, the platform could become more than a social network: an ecosystem in which users establish a presence anchored to verified human identity and, from that foundation, create content, interact with others, and conduct transactions. The value of such an ecosystem would lie not just in user numbers but in the integrity of those users.
Competitive Shockwaves
Should OpenAI move forward with this project, it would send shockwaves through an already crowded industry. Platforms like X (formerly Twitter), Meta, TikTok, and Snapchat have all wrestled with identity integrity and bot removal, with varying degrees of success. None has adopted a biometric-first approach.
To better understand what OpenAI is aiming for, consider this comparison:
| Feature | OpenAI Rumoured Platform | Traditional Social Media (X, Meta) |
| --- | --- | --- |
| Verification | Biometric (Iris/Face ID) | Email/Phone/Government ID (optional) |
| Bot Policy | Zero-Tolerance (Hardware Gated) | Detection-based (Post-registration) |
| Identity Layer | Proof of Personhood | User-generated Profiles |
| Content Goal | Human-Authentic | Engagement-Maximising |
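The “hardware gated” row captures the structural difference: traditional platforms admit accounts first and hunt for bots afterwards, whereas a biometric-first design would refuse account creation without a verified proof-of-personhood credential. A minimal sketch of such a gate follows, with every name (PersonhoodCredential, TRUSTED_ISSUER, and so on) a hypothetical illustration rather than anything confirmed by OpenAI or World.

```python
from dataclasses import dataclass

# Hypothetical trusted issuer; in practice this would be the public key
# of the verification device's attestation service.
TRUSTED_ISSUER = "orb-issuer-demo"

@dataclass(frozen=True)
class PersonhoodCredential:
    person_id: str  # one-way identifier produced at verification time
    issuer: str     # who attests that person_id belongs to a live human

def verify_credential(cred: PersonhoodCredential) -> bool:
    """Stand-in for verifying the issuer's signature over person_id."""
    return cred.issuer == TRUSTED_ISSUER

accounts: dict[str, str] = {}  # person_id -> handle

def create_account(handle: str, cred: PersonhoodCredential | None) -> str:
    # Gate at registration: no credential, no account. Contrast with
    # detection-based platforms, which admit first and flag later.
    if cred is None or not verify_credential(cred):
        raise PermissionError("proof of personhood required")
    if cred.person_id in accounts:
        raise PermissionError("this person already holds an account")
    accounts[cred.person_id] = handle
    return handle
```

Everything downstream of such a gate, from feeds to moderation to payments, could then assume each account maps to exactly one verified person.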
By anchoring its platform to personhood, OpenAI would challenge the open-access model that has underpinned social networks for over a decade. Instead of prioritising growth and scale at all costs, the emphasis would shift to verified participation. The knock-on effects for advertising, moderation, and even civic discourse could be profound.
The Privacy Debate
For every promise that biometric verification makes, there is an equally strong concern about privacy. World, formerly Worldcoin, has conducted iris scans in over 30 countries and has faced scrutiny in several jurisdictions. Temporary suspensions occurred in Spain, Portugal, and Kenya, and in India, Orb-based verification was paused and later resumed under revised arrangements, largely over concerns about data collection and storage.
For a company like OpenAI—already under the microscope for its AI capabilities—managing user trust in this new space will be complex. The absence of an official statement on data protection, access control, or legal jurisdiction leaves these concerns open.
If the platform were to scale globally, it would need to navigate vastly different data protection laws—from Europe’s GDPR to newer frameworks in Asia and the Americas.
A Test Case for Proof-of-Personhood
Whether the project launches or not, it sets the stage for a broader shift in how the internet treats identity. The rise of generative AI has made it increasingly difficult to distinguish between human- and machine-generated content. Deepfakes, automated influencers, and comment bots are now common in many digital spaces.
OpenAI’s platform, if successful, could become the first large-scale test of a proof-of-personhood model for everyday social interaction. It raises important questions for both platform architects and policymakers:
- Should digital participation be gated by biometric identity?
- Who sets the standards for what counts as ‘human’ online?
- What kind of governance is needed when identity becomes a precondition for speech?
What Remains Unknown
Despite the wave of media attention, much about the project remains unconfirmed. OpenAI has not released any official comment. There is no known name for the platform. No timeline. No product demos.
While World’s Orb is under consideration, it has not been confirmed as the final or exclusive method of verification. Reporting consistently describes the technology as being evaluated—not selected.
Until OpenAI provides direct information, these gaps remain significant. Any assumptions about launch strategy, geographic availability, or monetisation models are premature.
The Broader Stakes
At a time when public trust in online platforms is fragile, the promise of a human-only network has both appeal and risk. The appeal lies in clarity, accountability, and the potential to reintroduce authenticity to digital communities. The risk lies in exclusion, surveillance, and the commodification of identity itself.
For OpenAI, the move could define its future beyond being a toolmaker. For the industry, it might signal a turning point in how platforms are built and what users expect.
Much depends on the execution—and the response from users, regulators, and competitors.
This is a developing story. We will continue to report as verified updates emerge.