Wiley report shows researcher confidence plummeting even as AI adoption surges
By nature, scientists approach claims with healthy skepticism—it’s fundamental to their work. Yet their growing wariness toward artificial intelligence reveals a troubling pattern: familiarity may be breeding contempt.
Academic publisher Wiley’s preliminary 2025 research impact report reveals a striking contradiction: as AI systems have grown more sophisticated, scientists’ confidence in them has actually declined from 2024 levels.
The data tells a revealing story. Concern about AI “hallucinations”—instances where large language models confidently present false information as truth—jumped from 51 percent to 64 percent year-over-year. This happened even as researcher adoption of AI climbed from 45 to 62 percent, suggesting scientists are simultaneously embracing and doubting these tools.
Security and privacy worries increased by 11 percentage points, while ethical concerns and demands for transparency also rose. Perhaps most telling is the collapse in optimism: last year, scientists believed AI exceeded human performance in more than half of applications. This year, that figure plummeted to under one-third.
This trend aligns with earlier studies showing an inverse relationship between AI knowledge and trust. The more people understand how these systems function, the more skeptical they become—while enthusiastic supporters often possess the least technical understanding.
The reasoning behind professional skepticism isn’t mysterious. Hallucinations represent a fundamental problem that has already disrupted legal proceedings, medical decisions, and travel planning. Troublingly, testing from May revealed that these fabrications persisted—or even increased—as models grew more powerful.
There’s an uncomfortable economic dimension as well. Users consistently prefer AI systems that respond with unwavering confidence over those that acknowledge uncertainty or data gaps, even when that confidence is misplaced. This creates perverse incentives: eliminating hallucinations might actually drive away customers.
For anyone navigating the AI hype cycle, scientists offer a valuable reality check. Their professional experience with these tools has made them increasingly cautious—a perspective worth considering before accepting bold claims at face value.
A wake-up call worth heeding
The growing skepticism among scientists toward AI technology sends a clear message: those working closest with these tools understand their limitations best. This isn’t mere technophobia or resistance to innovation—it’s informed caution born from direct experience.
Yet this skepticism may be exactly what we need. Scientists occupy a unique position as society’s early warning system for AI’s most dangerous flaw: hallucinations that ordinary users cannot detect. When AI confidently presents fabricated information as fact, most people lack the expertise to challenge it. They trust the answer because it sounds authoritative, because it aligns with their existing beliefs, or simply because they have no way to verify it.
This problem grows more insidious when we consider AI’s tendency toward agreement. These systems are designed to be helpful and accommodating, often validating user assumptions even when those assumptions are wrong. Without experts raising red flags, we risk drifting toward a future where false information becomes increasingly difficult to distinguish from truth. In this world, AI not only reflects our biases but also actively reinforces them.
As AI continues to permeate every sector, from healthcare to education, from legal systems to daily consumer applications, the scientific community’s declining confidence should give us pause. These are professionals trained to evaluate evidence, identify flaws, and demand rigor. When they collectively step back from the hype, they’re not just protecting their own work—they’re sounding an alarm for everyone who will eventually depend on these systems.
The challenge ahead isn’t simply technical. It’s about building AI systems that prioritize accuracy over confidence, transparency over convenience, and truthfulness over user engagement. Until the industry addresses fundamental issues like hallucinations and the perverse incentives that perpetuate them, we need scientists’ skepticism as a counterweight to uncritical adoption.
For now, the data suggests a simple rule: before accepting AI’s promises at face value, consult someone who actually uses it professionally. Their measured skepticism isn’t pessimism—it’s wisdom earned through experience. And in an era where AI increasingly shapes what we believe to be true, that kind of grounded perspective isn’t just valuable—it may be essential to preserving our grasp on reality itself.
[Infographic: “Scientists’ Trust in AI: 2024 vs 2025—Wiley’s Research Impact Report Reveals Growing Skepticism,” summarizing the knowledge paradox, the hallucination problem, and researchers’ growing professional caution.]