Between Shifts and Search Bars
The first time I tried OpenEvidence, I was between patients. A junior colleague had asked about a rare complication — something I hadn’t seen since residency. I typed in the query and, within seconds, had a sourced summary that would’ve otherwise taken an hour of textbook flipping and database crawling. I didn’t bookmark it. I started using it.
That moment wasn’t remarkable in itself. What’s changed is how often it happens now. What started as a side-tab tool has moved into the rhythm of my day. By the company’s late-2025 figures, OpenEvidence is used by over 40% of U.S. physicians and in more than 10,000 hospitals and medical centers. Its daily consultation volume now matches what it handled monthly a year ago.
These numbers, though impressive, only partially explain why it’s become embedded in the daily clinical routine.
But Wait — What Exactly Is OpenEvidence, and How Does It Work?
At its core, OpenEvidence is a clinical AI platform designed to answer medical questions based strictly on peer-reviewed literature. No blogs, no patient forums, no speculative summaries. It reads and indexes over 35 million academic papers, pulling information directly from journals such as JAMA, NEJM, and BMJ.
You enter a query (say, the latest management approach for a rare autoimmune complication) and, within seconds, the platform returns a structured, evidence-linked response. Each answer includes cited sources and often links to full-text articles, thanks to formal partnerships with the JAMA Network and NEJM Group. That matters more than it might sound. You’re not seeing a chatbot guess. You’re seeing citations you can immediately verify.
OpenEvidence is not embedded in hospital EHRs (at least not yet), but it’s available via a mobile app for verified U.S.-based healthcare professionals. It’s designed to be standalone, frictionless, and immediate. The app is free to use. Revenue comes from advertising and content partnerships — something that will likely invite more scrutiny as adoption scales globally.
The platform supports over 160 specialties and claims coverage of more than 1,000 distinct disease areas.
It doesn’t try to replace clinical intuition or the back-and-forth of a multidisciplinary team. It doesn’t make decisions for you. It makes sure you’re working with the most current, most cited, and most respected information available.
Not a Disruption, a Shortcut
There’s a lot of AI talk in medicine. Usually it revolves around prediction tools, risk calculators, or radiology annotation. OpenEvidence doesn’t do any of that. It answers clinical questions from the same journals we’re trained to trust. The platform isn’t trying to reimagine medicine; it’s giving you back the 20 minutes you would’ve spent searching.
You still make the call. But you’re making it with more context in less time.
A New Type of Consultation Room
There’s something efficient — and oddly private — about typing your uncertainty into a search bar, especially when it’s one built specifically for this profession. No risk of sounding out of touch. No need to pretend you recall a paper from five years ago. The tool delivers answers anchored in peer-reviewed citations, sometimes even including figures or multimedia if the journal permits.
In July 2024, OpenEvidence recorded 358,000 consultations across all users in one month. A year later, it claims to handle that volume in a single day, roughly a thirtyfold increase. The curve isn’t linear. It reflects something more than just curiosity; it reflects need.
Peer-Reviewed, Not Open Web
The AI draws on peer-reviewed content only. That alone makes it different from general-purpose chat models that hallucinate citations or deliver overly confident approximations. OpenEvidence has licensing agreements with both the JAMA Network and NEJM Group. Full-text access. Structured inputs. Reproducible outputs. You’re not reading summaries of blogs. You’re reading evidence, pulled from the original literature.
This has tangible outcomes. I’ve seen younger physicians use it to cross-check rare drug interactions. I’ve seen senior consultants use it to reinforce a decision before escalation. It fills gaps that previously went unacknowledged.
Built for the Front Line, Not the Lab
Daniel Nadler, the company’s CEO, doesn’t come from a clinical background, but the tool he’s helped build has landed squarely in clinical practice. The free, ad-supported model brings questions, of course, about influence and neutrality. Yet adoption continues to rise, suggesting that trust, at least for now, outweighs doubt.
The tool is now being used during patient visits themselves. The recently launched “Visits” feature supports real-time decision-making, delivering context-aware answers within the clinical workflow.
The Practical Impact on Patient Care
From the patient’s side, nothing looks different. But decisions may be happening faster. In urgent care, I’ve used it to validate a dosage adjustment based on new findings. In internal medicine, I’ve seen it reinforce treatment pathways that otherwise required late-night journal searches.
The time saved isn’t theoretical. It’s minutes that turn into availability, decisions that turn into action. Faster isn’t always better, but faster with better sourcing usually is.
Not a Revolution, Just Routine
The platform’s valuation rose from $3.5 billion to $6 billion in three months, pushed there by a $200 million funding round in October 2025. The previous round, in July, was $210 million, led by GV (Google Ventures) and Kleiner Perkins. From a business angle, this is momentum. But from a practitioner’s perspective, it’s a reflection of real-world demand.
The valuation isn’t just a number. It’s a sign that what we use during rounds and consults is now considered infrastructure, not novelty.
Limits Still Exist
Some questions remain unanswered. We don’t know the long-term outcomes of relying on AI for clinical decision support. We don’t yet have peer-reviewed studies showing improved morbidity or mortality linked to platforms like this. And how ads are selected or targeted remains unclear.
What we do know is that the interface feels neutral. The answers are cited. The output is transparent. That counts for more in practice than theoretical debates about AI philosophy.
A Global Shift, Not a Local Trial
This isn’t a story about one health system. While OpenEvidence is currently U.S.-centric, the need it addresses isn’t. Colleagues in Singapore, Germany, and Brazil face the same challenge — navigating an overwhelming volume of medical research while making fast, high-stakes decisions.
Whether OpenEvidence expands globally is still an open question; no trials in the NHS or EU health systems have been publicly confirmed. But if a similar tool emerges locally with the same clinical grounding, uptake seems likely.
The point isn’t who builds it. It’s whether it works in the hands of those who need it.
What We’re Quietly Learning
This isn’t about AI changing medicine. It’s about AI quietly folding into the practice of medicine.
It starts as a time-saver, becomes a habit, and eventually reshapes how we absorb information. Not with bells and whistles. With speed, sourcing, and silence.
Not every tool earns that place. OpenEvidence, for now, has.