q.e.d: An AI Tool for Smarter Manuscript Review

A team of researchers developed q.e.d, an AI-powered system designed to deliver rigorous and constructive feedback on scientific manuscripts in minutes.

Written by Laura Tran, PhD | 4 min read

For Oded Rechavi, a molecular biologist at Tel Aviv University, academia is an anomaly in the universe in that it can provide opportunities for people to pursue creative science and chase scientific curiosities. However, he remarked, “The problem is that the fun ends when you need to publish it and go through the [review] process.”

While Rechavi emphasized that peer review and feedback are crucial, he was frustrated with the current state of this process. For many researchers, the peer-review system and the relentless “publish or perish” mentality can be exhausting. Along with an influx of manuscripts, the pace can be excruciatingly slow, with months or even years separating a preprint from its final publication. The process is also inconsistent for reviewers. There are only so many experts available to properly evaluate a study and provide meaningful feedback, while others may have conflicts of interest or biases. This dynamic can pressure authors to cater to reviewers’ preferences, potentially compromising the integrity of their research. He added, “I think most academics, specifically biologists, would agree that the current state of affairs is far from ideal.”



Oded Rechavi, along with his colleagues, built q.e.d as an AI tool for critical review of scientific manuscripts.

Chen Gallili

Two years ago, Rechavi pondered whether there was a way to reimagine peer review and scientific publishing with artificial intelligence (AI). His idea? To build a powerful AI reviewer that provides users with constructive feedback. Together, he and a team of scientists from various backgrounds, including AI, engineering, and biology, created q.e.d. Its name is derived from quod erat demonstrandum, a Latin phrase meaning “which was to be demonstrated,” which is traditionally written at the end of mathematical proofs and philosophical arguments. Though not a peer, q.e.d is an online tool that aims to help scientists improve their research before submission.

With q.e.d, users simply upload their manuscript or even an early draft to q.e.d’s website. Within 30 minutes, they receive a report that breaks the research down into what Rechavi described as a “claim tree.” The AI identifies the claims made within the work, examines the logical connections between them, pinpoints strengths and weaknesses, and suggests both experimental and textual edits.

During its early stages, in August 2025, the q.e.d team invited researchers who were not involved in its development to participate in a test group. Michał Turek, a biologist at the Institute of Biochemistry and Biophysics of the Polish Academy of Sciences, was one of these beta testers.

Despite large language models (LLMs) having a reputation for sometimes generating hallucinations—inaccurate outputs such as fake citations—he was pleasantly surprised by q.e.d. Turek tested a manuscript he was working on at the time and noted that “it gave pretty accurate suggestions on what you should do to support your claim.” Another feature of q.e.d that Turek highlighted was its ability to show where one’s research is positioned within the current state of knowledge—whether it is novel or already known—which “other language models are not doing reliably.”

Since its launch in October 2025, researchers from more than 1,000 institutions worldwide have tried q.e.d, and it has garnered much buzz amongst the scientific community. Many of these users, like Maria Elena de Bellard, a neurodevelopmental biologist at California State University, Northridge, first came across q.e.d through Rechavi’s social media posts on X.


Curious about the tool, she uploaded two manuscripts she was working on. “I am enamored with q.e.d,” she said. “ChatGPT will think for me, but q.e.d makes me think.” For example, in q.e.d’s report on her manuscript, it flagged a portion of her experimental work as something that has been done before, which she cross-referenced with another AI tool called Consensus.

De Bellard called q.e.d “an incredible intellectual resource,” and remarked that she now uses it routinely to test whether her proposed experiments truly answer her research questions, to save time receiving and implementing feedback, and to refine her grant proposals.

As a non-native English speaker, de Bellard remarked that she has received comments focused on her English writing rather than the science she presented. “It can feel disheartening…[but] q.e.d doesn’t care.” Regardless of the user’s institution, she added, q.e.d “just focuses on the science, and that is how scientific review should be.”

However, Mark Hanson, an immunologist and evolutionary biologist at the University of Exeter, was not as enthusiastic. He wanted to see if q.e.d lived up to the hype and conducted a mini experiment by uploading his previously published paper into q.e.d and Refine, another AI peer-review agent.

While Hanson was not impressed with Refine, he said that “q.e.d is doing something quite interesting in the power of how it is able to digest information…With its training, it managed to identify a real suggested [genetic rescue] experiment for one of the papers that I uploaded.”

This aspect may be useful for new and seasoned researchers alike. Students and those new to their field of study may not yet have the expertise to put a manuscript together and may benefit from seeing what the tool suggests as experimental gold standards. “[q.e.d] is a really good way to dip your toes in,” said Hanson.

However, he also noted that the suggestions were not original and did not offer fresh insights to help strengthen his research. Instead, “[The] AI does a great job of being an average critical thinker.” While q.e.d can broadly identify gaps in research with a rapid turnaround, for Hanson, the AI is only as good as the data it is trained on.

Since q.e.d is in its infancy, Rechavi encourages users to continue providing feedback to help refine the platform, and he looks forward to adding new features, such as link uploads, grant review tools, and more. Eventually, Rechavi envisions that q.e.d users will “share the review that you get with the world to show that you are transparent, open to criticism, and willing to improve.” Toward that end, on November 6, 2025, Rechavi announced a collaboration between q.e.d and openRxiv. Now, researchers can run their work through q.e.d before submitting their preprints on the site. Rechavi then plans to have the system track q.e.d reports and show how researchers improved their work based on its feedback.

These researchers see q.e.d’s potential as a tool for new and seasoned scientists to review their work, and Turek hopes it can also be expanded to serve a more general audience in fact-checking claims that refer to scientific literature. “Each scientist should at least try it and see whether it is [a good tool] for him or her,” said Turek.

Meet the Author

  • Laura Tran, PhD

    Laura Tran is an Assistant Editor at The Scientist. She has a background in microbiology. Laura earned her PhD in integrated biomedical sciences from Rush University, studying how circadian rhythms and alcohol impact the gut. While completing her studies, she wrote for the Chicago Council on Science and Technology and participated in ComSciCon Chicago in 2022. In 2023, Laura became a science communication fellow with OMSI, continuing her passion for accessible science storytelling.
