AI Screening Flags More Than 1,000 Potentially Problematic Open-Access Journals

Andre Okoye

Researchers using an artificial-intelligence (AI) screening tool have identified over 1,000 open-access journals that show signs of questionable publishing practices after scanning roughly 15,000 titles. The study, published in Science Advances on 27 August, proposes that automated screening could help indexers, publishers and research institutions identify journals that charge fees but lack robust peer review and editorial oversight — practices often associated with so-called predatory or low-quality journals.

How the AI screening tool works

Data sources and training

The team trained their model on two labelled sets from the Directory of Open Access Journals (DOAJ): 12,869 journals considered legitimate and 2,536 titles DOAJ had flagged for violating quality standards. The researchers then ran the model against 15,191 open-access journals listed in the Unpaywall public database.
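The study's actual pipeline is not described in code, but the setup — a binary classifier trained on labelled journal records (DOAJ-accepted versus DOAJ-flagged) — can be sketched roughly as below. The features, synthetic data, and simple logistic-regression model are illustrative assumptions, not the authors' method:

```python
# Hypothetical sketch of the study's setup: a binary classifier trained on
# labelled journal records (DOAJ-accepted vs. DOAJ-flagged). The features,
# synthetic data, and model choice here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy features per journal: [median days to publication, self-citation rate,
# fee-transparency score, share of editors with verifiable affiliations]
legit = rng.normal([120, 0.05, 0.9, 0.8], [30, 0.02, 0.1, 0.1], (200, 4))
flagged = rng.normal([15, 0.30, 0.3, 0.2], [5, 0.05, 0.1, 0.1], (40, 4))
X = np.vstack([legit, flagged])
y = np.array([0] * 200 + [1] * 40)  # 1 = questionable

# Standardize features, then fit logistic regression by gradient descent
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)      # gradient step on weights
    b -= 0.1 * (p - y).mean()              # gradient step on bias

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A real system would use far richer features scraped from journal websites and citation databases, but the principle — learn a decision boundary from known-good and known-bad examples, then score unseen journals — is the same.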

Signals and red flags

The AI examines multiple signals drawn from journal websites and the articles they publish. These include publishing turnaround times (very rapid publication can indicate inadequate peer review), unusually high rates of self-citation, opacity around licensing and article-processing charges, and the institutional affiliations of editorial-board members. Several of the model’s criteria derive from DOAJ best-practice guidance for open-access publishing.
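One of those signals, the self-citation rate, is simple to compute once citation metadata is in hand. The function below is a hypothetical illustration (the data structure is invented; a real pipeline would parse records from a citation database such as Crossref):

```python
# Illustrative computation of one red-flag signal: a journal's
# self-citation rate. The input format is a hypothetical simplification.
def self_citation_rate(articles):
    """articles: list of dicts with a 'journal' name and a
    'cited_journals' list naming the journal each reference appeared in."""
    total = cited_self = 0
    for art in articles:
        for ref in art["cited_journals"]:
            total += 1
            if ref == art["journal"]:
                cited_self += 1
    return cited_self / total if total else 0.0

sample = [
    {"journal": "J. Example", "cited_journals": ["J. Example", "Nature", "J. Example"]},
    {"journal": "J. Example", "cited_journals": ["Science", "J. Example"]},
]
print(self_citation_rate(sample))  # 3 self-citations out of 5 references -> 0.6
```

Some self-citation is normal within a specialized field, which is why the model combines this signal with others rather than treating any single metric as decisive.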

The screening identified 1,437 journals as questionable. None of the flagged journals had previously appeared on established watchlists, and some are owned by large, reputable publishers. Collectively, these journals have published hundreds of thousands of papers that have already accrued millions of citations, raising concerns about the visibility and influence of low-quality literature in the scholarly record.

Findings, limitations and implications

The study’s results demonstrate both the promise and the limitations of automated screening. The team estimated about 345 false positives among the flagged journals — examples included discontinued titles, book-series entries and small society journals that do not meet the model’s expectations. Conversely, error-rate estimates suggest the tool missed an additional 1,782 questionable journals.
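The reported counts imply rough precision and recall figures for the tool. The arithmetic below is derived from the numbers above, not quoted from the paper:

```python
# Back-of-envelope precision/recall implied by the reported figures.
flagged = 1437          # journals the tool flagged as questionable
false_pos = 345         # estimated false positives among them
false_neg = 1782        # questionable journals the tool reportedly missed

true_pos = flagged - false_pos              # ~1,092 correctly flagged
precision = true_pos / flagged              # share of flags that are correct
recall = true_pos / (true_pos + false_neg)  # share of bad journals caught

print(f"precision ~ {precision:.2f}, recall ~ {recall:.2f}")
```

In other words, roughly three of every four flags appear to be warranted, but the tool may catch well under half of all questionable journals — a profile consistent with its intended role as a triage aid rather than a definitive filter.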

Daniel Acuña, a computer scientist at the University of Colorado Boulder and co-author of the study, stressed that the AI is not intended as a final arbiter. The tool is available as a closed beta for organizations that index journals or manage publishing portfolios, but "a human expert should be part of the vetting process" before any corrective action, he says. The model is best positioned as a triage mechanism that can prioritize journals for deeper human review.

Cenyu Shen, DOAJ’s deputy head of editorial quality, notes that problematic journals are increasing in number and sophistication: "We are observing more instances where questionable publishers acquire legitimate journals, or where paper mills purchase journals to publish low-quality work." Paper mills are commercial operations that produce fabricated manuscripts and fraudulent authorships, exacerbating threats to research integrity.

DOAJ’s internal checks remain largely manual and complaint-driven. In 2024 the directory investigated 473 journals — a 40% increase since 2021 — and spent nearly 837 hours on investigations, up about 30%. AI-assisted screening could reduce that workload by highlighting high-risk cases for focused human assessment.
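Working backwards from those figures (and assuming the percentage increases are measured relative to 2021 levels, which the article does not state explicitly) gives a sense of how quickly the workload has grown:

```python
# Quick arithmetic on the DOAJ workload figures, assuming the stated
# percentage increases are relative to 2021 levels.
journals_2024, journals_growth = 473, 0.40
hours_2024, hours_growth = 837, 0.30

journals_2021 = journals_2024 / (1 + journals_growth)  # ~338 investigations
hours_2021 = hours_2024 / (1 + hours_growth)           # ~644 hours

print(f"2021 baseline: ~{journals_2021:.0f} investigations, ~{hours_2021:.0f} hours")
```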

Expert Insight

Dr. Elena Morales, Research Integrity Officer at the Centre for Scholarly Communication (fictional), comments: "Automated screening is a valuable new tool for maintaining the integrity of the scholarly record, but it cannot replace expert judgment. AI can comb large datasets quickly and surface patterns humans might miss, such as systemic self-citation or sudden editorial changes. The next step is combining these signals with transparent metadata reporting from publishers so indexers can make responsible decisions rooted in both data and domain expertise."

Practical steps for stakeholders include improving transparency around editorial processes and fees, sharing metadata with indexing services, and adopting routine AI-assisted audits followed by independent human review. Researchers should also verify journal practices before submitting work and consult DOAJ or other trusted indices.

Conclusion

The AI screening approach offers a scalable method to flag potentially problematic open-access journals and prioritize them for human review. While the tool uncovered more than 1,000 questionable titles that had not been previously identified, its false positives and false negatives underscore the need for combined human–AI workflows. Strengthening metadata transparency, refining detection models and fostering collaboration between indexing services, publishers and research institutions will be essential to protect the integrity of academic publishing and the citations that shape scientific consensus.

"My name’s Andre. Whether it's black holes, Mars missions, or quantum weirdness — I’m here to turn complex science into stories worth reading."
