In January 2026, GPTZero analyzed over 4,800 papers accepted at NeurIPS—one of the world’s most prestigious AI research conferences—and found more than 100 confirmed hallucinated citations spread across at least 51 papers. These weren’t fringe submissions from unknown researchers. Teams from Google, Harvard, Meta, and Cambridge were implicated. Every one of these papers had survived peer review by at least three expert reviewers, beaten out more than 15,000 rejected submissions, and been published in the conference’s official proceedings. And they cited research that doesn’t exist.
If that doesn’t alarm you as someone building or investing in a postsecondary institution, it should. Because the same pressures and tools driving fabricated citations at elite research conferences are filtering down to every level of academic work—from doctoral dissertations to undergraduate research projects to faculty publications at teaching-focused institutions. And if your institution doesn’t have a clear, enforceable policy on how AI can and cannot be used in scholarly work, you’re exposed to reputational, accreditation, and regulatory risk that’s growing by the month.
I’ve spent the last two decades helping founders launch postsecondary institutions, and in that time, research integrity has always been part of the conversation—but it’s never been this complicated. The introduction of generative AI into every stage of the research process has created a landscape where the tools that make scholarship more efficient also make fabrication easier, harder to detect, and more consequential when it’s discovered. Let me walk you through what’s happening, what the major publishers and regulators expect, and what your institution needs to do about it.
A quick note on scope: this post isn’t just for R1 research universities. If you’re founding a teaching-focused college, a trade school, or a career program, you still need to care about research integrity. Your faculty may publish scholarship as part of their professional development or tenure requirements. Your students may conduct capstone research. Your institutional effectiveness team produces data reports that accreditors rely on. And increasingly, programmatic accreditors in fields like health sciences, education, and business are asking specifically about how institutions handle AI in scholarly work. This conversation applies to you—just at a scale proportionate to your institution’s research profile.
The Scale of the Problem: What We Know in 2026
Let’s start with the data, because the numbers tell a story that’s hard to ignore.
Hallucinated Citations Are Exploding
A hallucinated citation is a reference that an AI system generates that looks legitimate—complete with plausible-sounding journal titles, author names, and publication dates—but points to a paper that doesn’t actually exist. Research published in peer-reviewed journals has documented that GPT-based systems fabricate 18–28% of citations when used for reference generation, with older models reaching false rates as high as 55%.
A Nature analysis published in early 2026 estimated that tens of thousands of publications from 2025 alone likely contain invalid AI-generated references. Alison Johnston, co-lead editor of the Review of International Political Economy, reported rejecting 25% of roughly 100 submissions in a single month because of fabricated references. Frontiers, a major open-access publisher, found that approximately 5% of submitted manuscripts showed potential reference-related issues flagged by their in-house AI integrity screening tool.
An analysis of nearly 18,000 papers accepted at three computer science conferences found that 2.6% of 2025 papers had at least one potentially hallucinated citation—up from 0.3% in 2024. That’s nearly a ninefold increase in a single year. And these are papers that passed peer review at top-tier venues. The problem is almost certainly worse in journals and conferences with less rigorous screening.
An earlier GPTZero scan of ICLR 2026 submissions had already identified over 50 hallucinated citations, showing the problem isn't confined to a single conference. The hallucinations ranged from completely fictional papers to real papers with fabricated co-authors added, altered titles, or invented DOIs. One paper contained 15 hallucinated citations, including incomplete arXiv IDs formatted as placeholders—a mistake no human researcher would plausibly make.
The Broader Integrity Crisis
Hallucinated citations are the most visible symptom, but they’re not the whole disease. AI is introducing integrity risks across the entire research lifecycle. Publisher Hindawi retracted more than 9,600 papers in 2023 alone—roughly 8,200 of which had co-authors from China—many linked to paper mills that increasingly use AI to generate content. ICLR saw a 70% increase in submissions for its 2026 conference, from 11,000 to nearly 20,000—a surge widely attributed to AI-assisted paper generation making it easier to produce publishable-looking manuscripts at scale.
Springer Nature retracted a machine learning textbook after independent checks revealed that two-thirds of sampled citations either did not exist or were materially inaccurate. Several researchers listed in the references confirmed they had never authored the works attributed to them. The Library of Virginia estimates that 15% of emailed reference questions it receives are now ChatGPT-generated, some containing hallucinated citations for both published works and unique primary source documents.
What does all of this mean for your institution? It means that the scholarly infrastructure—the system of citations, peer review, and publication that underpins academic credibility—is under strain. And every institution that conducts, publishes, or evaluates research has a stake in maintaining that infrastructure’s integrity.
What Major Publishers Require: The Current Policy Landscape
If your institution’s faculty or students publish research—and even at teaching-focused institutions, many faculty members do—they need to understand the rapidly evolving landscape of publisher AI policies. Here’s where the major publishers stood as of late 2025 and early 2026.
The pattern is clear and consistent across publishers. Three principles are universal: AI cannot be listed as an author (authorship requires accountability that only humans can provide), AI use must be disclosed transparently, and human authors bear full responsibility for the accuracy and integrity of all content—including any AI-assisted portions. The Committee on Publication Ethics (COPE), which sets standards followed by thousands of journals worldwide, has reinforced all three of these principles in its guidance.
Here’s what’s important for institutional leaders to understand: these aren’t suggestions. They’re requirements. A faculty member who submits a paper to Nature with undisclosed AI-generated text risks retraction, investigation, and reputational damage that reflects on your institution. A graduate student who uses AI to generate a literature review without proper disclosure and citation verification risks their degree, their advisor’s reputation, and your program’s credibility.
Publisher policies are evolving rapidly, but the direction is unambiguous: transparency is mandatory, accountability is non-delegable, and the era of treating AI-generated content as equivalent to human-authored content is over.
How AI Is Transforming—and Threatening—Every Stage of Research
To build effective institutional policy, you need to understand where AI intersects with research work. It’s not just writing—it’s the entire lifecycle.
Literature Review and Discovery
This is where most researchers first encounter AI in their workflow. Tools like Semantic Scholar, Elicit, Consensus, and general-purpose models like ChatGPT and Claude can synthesize large volumes of published literature in minutes rather than weeks. For researchers navigating a field with thousands of published papers, this is transformative.
The risk: AI tools generate plausible-sounding summaries of papers they haven’t actually read. They conflate findings from different studies, attribute claims to the wrong authors, and—as we’ve documented—sometimes fabricate papers entirely. A researcher who relies on AI for literature discovery without verifying every source against an actual database is building their scholarly argument on a foundation that may contain fictional supports. This isn’t a theoretical risk. It’s happening at the most elite levels of academic publishing right now.
Data Analysis and Interpretation
AI tools can process datasets, identify patterns, and generate statistical analyses faster than manual methods. For fields like genomics, climate science, and social science research with large datasets, this capability is genuinely valuable. But AI doesn’t understand the data it’s analyzing. It can find correlations that are statistically significant but scientifically meaningless. It can produce outputs that look rigorous but are based on inappropriate assumptions about the data structure.
I worked with a program director at a health sciences institution who discovered that a graduate student had used an AI tool to run a regression analysis on clinical data without understanding the underlying statistical assumptions. The AI produced clean, professional-looking output—but the analysis was fundamentally flawed because the tool had applied a parametric test to non-parametric data. The student couldn’t explain the methodology when questioned by the thesis committee. That’s not an integrity violation in the traditional sense—but it’s a competency failure that AI made possible by lowering the barrier to producing convincing-looking results.
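To make that failure mode concrete, here is a minimal sketch (in Python, using scipy) of the assumption check the student skipped: test the data for normality first, and let the result determine whether a parametric or non-parametric test is defensible. The data here are synthetic illustrations, not the clinical dataset from this anecdote.

```python
# A minimal sketch of the assumption check that was skipped: test for
# normality before choosing between a parametric and non-parametric test.
# The groups and values are synthetic, purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.exponential(scale=2.0, size=40)  # skewed, non-normal data
control = rng.exponential(scale=2.5, size=40)

# Shapiro-Wilk: a low p-value means the normality assumption is violated.
_, p_treat = stats.shapiro(treatment)
_, p_ctrl = stats.shapiro(control)

if p_treat < 0.05 or p_ctrl < 0.05:
    # Normality fails: use a rank-based (non-parametric) test.
    stat, p = stats.mannwhitneyu(treatment, control)
    print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
else:
    # Normality holds: a t-test (parametric) is defensible.
    stat, p = stats.ttest_ind(treatment, control)
    print(f"t = {stat:.2f}, p = {p:.4f}")
```

A student who can walk a committee through a check like this understands their analysis; a student who pasted AI output cannot. That is exactly the distinction thesis committees should probe.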
Writing and Drafting
This is the most controversial area. AI can draft manuscripts, abstracts, methods sections, and discussion sections that read fluently and follow disciplinary conventions. Publishers have acknowledged that using AI for language editing and polishing is generally acceptable—particularly valuable for researchers whose first language isn’t English, a significant equity consideration. But using AI to generate substantive content without disclosure crosses the line from assistance to ghostwriting.
The AMEE Guide on AI disclosure, published in late 2025, proposed a practical framework for distinguishing between acceptable and unacceptable AI use in scholarly writing. The key principle: if AI shaped the substance of the argument—the ideas, the analysis, the conclusions—that use must be disclosed. If AI only polished the language, disclosure is generally not required (though some publishers disagree). The line is genuinely blurry, and your institutional policy needs to help faculty and students navigate it.
There’s an equity dimension here that deserves attention. For researchers whose first language isn’t English—and that includes a significant portion of the global research community—AI writing assistance can level a playing field that has historically disadvantaged non-native English speakers in the publication process. Several publishers have explicitly acknowledged this benefit. But even beneficial use requires disclosure, and the distinction between “polishing language” and “generation of substantive content” can be difficult to draw when a researcher uses AI to restructure an argument or rewrite a methods section for clarity. Your policy should acknowledge this nuance rather than treating all AI writing assistance as equivalent.
One pattern I’ve seen at multiple institutions: a faculty member asks an AI tool to “summarize the key findings” of their study, then uses that summary—with minimal modification—as their discussion section. The faculty member didn’t intend to commit ghostwriting. They saw AI as a drafting tool, similar to dictation software. But the result is a discussion section that wasn’t written by the author and doesn’t reflect the author’s analytical process. That’s a substantive integrity concern, even if the underlying research is sound. Training faculty to understand the difference between AI assistance and AI replacement in scholarly writing is one of the most important things your institution can do.
Peer Review
Here’s an area that doesn’t get enough institutional attention. A Frontiers survey of 1,645 researchers in 2025 found that peer reviewers are already using AI broadly in their review work. Early-career researchers showed 87% adoption rates. The potential benefits are real: AI can help reviewers check statistical methods, identify relevant prior work, and improve the clarity of their feedback.
But every major publisher has explicitly prohibited reviewers from uploading submitted manuscripts into AI tools. The reason is straightforward: peer review is confidential. A manuscript submitted for review is unpublished intellectual property. Uploading it to an AI system—which may store or train on the content—violates the confidentiality agreement that peer review is built on. Elsevier, Springer Nature, Wiley, and Taylor & Francis all have explicit prohibitions. Faculty at your institution who serve as peer reviewers need to understand these restrictions.
This creates a practical tension that your institutional policy should address. Reviewers are increasingly overwhelmed—NeurIPS received 21,575 submissions in 2025, up from under 10,000 in 2020, and ICLR's 70% jump for 2026 tells the same story. The volume of manuscripts flowing through the peer review system is growing faster than the pool of qualified reviewers, which creates exactly the conditions under which overworked reviewers reach for AI assistance. Your institution's research integrity training should explicitly address what reviewers can and cannot do with AI, and provide practical alternatives for managing review workload without violating confidentiality.
There’s also the problem of AI-generated peer reviews. Nature reported instances of ICLR reviewers producing feedback that appeared AI-generated—verbose, formulaic, and full of bullet points that didn’t engage meaningfully with the specific paper. This degrades the quality of peer review itself, which is the primary quality control mechanism for academic publishing. If your faculty are reviewing papers and using AI to generate their review text, they’re not fulfilling their scholarly responsibility—they’re delegating it to a machine that can’t exercise the expert judgment that peer review requires.
The Role of IRBs and Research Compliance in AI Governance
If your institution conducts research involving human subjects, your Institutional Review Board (IRB)—the committee that reviews and approves research protocols for ethical compliance—needs to address AI. This is new territory for most IRBs, and the guidance is still developing. But several issues are already clear.
Data Privacy and AI Tools
If a researcher uses an AI tool to analyze data that includes personally identifiable information (PII) or protected health information (PHI), that use needs to be disclosed in the IRB protocol. Many AI tools—particularly cloud-based services—process data on external servers, which raises questions about data security, consent, and compliance with FERPA (for education data), HIPAA (for health data), and applicable state privacy laws. Your IRB needs a framework for evaluating whether a proposed AI tool meets the data protection requirements specified in the research protocol.
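As a concrete illustration, here is a minimal sketch, assuming pandas, of one baseline safeguard: stripping direct identifiers from a dataset before it is sent to any external tool. The column names are hypothetical, and what counts as PII or PHI is defined by your IRB protocol and applicable law, not by this example.

```python
# A minimal sketch of stripping direct identifiers before a dataset goes
# anywhere near an external AI tool. Column names are hypothetical; the IRB
# protocol, not this script, defines what counts as PII/PHI.
import hashlib
import pandas as pd

df = pd.DataFrame({
    "student_name": ["A. Rivera", "B. Chen"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "email": ["a@example.edu", "b@example.edu"],
    "retention_score": [0.82, 0.64],
})

DIRECT_IDENTIFIERS = ["student_name", "ssn", "email"]

def pseudonymize(frame: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers, keeping a salted hash as a re-linkage key
    that stays inside the institution and is never sent to the vendor."""
    out = frame.copy()
    out["record_key"] = [
        # "local-salt" is a placeholder; a real deployment would use a
        # securely stored institutional secret.
        hashlib.sha256(f"local-salt:{name}".encode()).hexdigest()[:12]
        for name in out["student_name"]
    ]
    return out.drop(columns=DIRECT_IDENTIFIERS)

safe_df = pseudonymize(df)
print(safe_df)  # only retention_score and record_key remain
```

A script like this is not a substitute for IRB review; it is the kind of documented, auditable step an IRB can evaluate when it asks how a proposed AI tool will handle protected data.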
Algorithmic Transparency and Reproducibility
A fundamental principle of ethical research is reproducibility—the ability for other researchers to replicate your methods and verify your results. When AI is involved in data analysis, coding, or interpretation, reproducibility requires disclosing exactly which tools were used, which parameters were set, and which prompts were given. Without this information, the research cannot be independently verified. The CONSORT-AI framework, developed for clinical trial research, provides a model for how AI use should be reported in research publications. Your institution’s research integrity policies should reference this or similar frameworks.
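One practical way to capture that information is a structured AI-use log kept alongside the analysis files. The sketch below is illustrative: the fields are loosely inspired by CONSORT-AI-style reporting, not the official checklist, and the field names are my own.

```python
# A sketch of a structured AI-use log supporting reproducibility disclosure.
# Fields are illustrative, loosely inspired by CONSORT-AI-style reporting;
# they are not the official CONSORT-AI checklist.
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class AIUseRecord:
    tool: str                 # e.g., "ChatGPT", "Elicit"
    version: str              # model/version identifier, as exactly as known
    purpose: str              # literature search, analysis, language editing...
    prompt_summary: str       # what was asked; full prompts kept on file
    parameters: dict = field(default_factory=dict)
    outputs_verified_by: str = ""   # who checked the output, and how
    date_used: str = str(date.today())

log = [
    AIUseRecord(
        tool="ChatGPT",
        version="GPT-4-class model (exact build unknown)",
        purpose="language editing of the discussion section",
        prompt_summary="Asked to tighten prose without changing claims",
        outputs_verified_by="corresponding author, line-by-line read",
    ),
]

# Emit as JSON for a methods appendix or the IRB file.
print(json.dumps([asdict(r) for r in log], indent=2))
```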
Consent and AI-Generated Data
If research participants are told their data will be analyzed using “statistical methods” and a researcher instead uses AI—which may process, store, or train on the data in ways participants didn’t anticipate—there’s a consent problem. IRB protocols need to specifically address whether and how AI will be used in data collection, analysis, and storage, and participants need to be informed accordingly. This is especially critical for research involving vulnerable populations, minors, or sensitive health information.
Several federal funding agencies are beginning to incorporate AI-related requirements into their grant oversight. While NIH and NSF haven’t yet issued formal mandates specific to AI in research, both agencies’ responsible conduct of research requirements are being interpreted to include AI transparency and data governance. If your institution’s faculty apply for federal grants, your IRB needs to be prepared for questions about how AI tools interact with funded research protocols. Getting ahead of this now—rather than scrambling to update your protocols after a grant reviewer flags a deficiency—is straightforward and inexpensive.
For institutions with clinical research programs, the stakes are even higher. If an AI tool processes patient data in a clinical study, HIPAA requirements layer on top of IRB obligations. The research team needs to demonstrate that the AI vendor’s data handling practices meet both HIPAA and IRB standards, that participants consented to AI processing of their data, and that the research protocol documents exactly how AI was used in data analysis. I’ve seen two clinical research programs delay studies by months because their AI tool vendor couldn’t provide adequate documentation of data security practices. Vetting AI vendors for research use is as important as vetting them for instructional use.
Building an Institutional Research Integrity Policy for AI
If you’re founding a new institution—particularly one with graduate programs, faculty research expectations, or clinical training that generates scholarly output—you need a research integrity policy that addresses AI. Here’s what that policy should cover.
Scope and Applicability
Define who the policy applies to (faculty, graduate students, undergraduate researchers, staff conducting institutional research) and what activities it covers (published scholarship, dissertations and theses, conference presentations, grant applications, institutional reports). Be specific: a policy that says “all researchers must use AI responsibly” without defining what “responsibly” means is unenforceable.
Consider the scope carefully. Your institutional research office—the team that produces outcomes data, program effectiveness studies, and accreditation evidence—is also conducting research that needs integrity standards. If your institutional research director uses AI to analyze student retention data and presents findings to the accreditor, those findings need to meet the same verification and documentation standards as published scholarship. I’ve seen institutions with robust policies for faculty publications but no standards at all for institutional research. That’s a gap that accreditors will notice.
Disclosure Requirements
Establish a clear disclosure standard for AI use in scholarly work. At minimum, require researchers to disclose the name of any AI tool used, the specific purpose of its use (language editing, data analysis, literature search, content generation), the extent of the researcher’s oversight and verification of AI outputs, and any limitations of the AI tool that may affect the research. This disclosure should appear in the methods or acknowledgments section of published work, and in a dedicated appendix for dissertations and theses. Align your standard with the disclosure requirements of the major publishers your faculty submit to.
Citation Verification
This is the most immediately actionable piece. Require all researchers to verify every citation in their work against an actual database (Google Scholar, PubMed, Web of Science, Scopus, or the source journal itself) before submission. This sounds basic—and it is—but the NeurIPS scandal demonstrates that even experienced researchers at top institutions are skipping this step when AI generates their reference lists.
Make citation verification an explicit requirement in your thesis and dissertation guidelines. Build it into your faculty research handbook. Consider requiring a signed statement from the corresponding author confirming that all citations have been independently verified. Some institutions have started using tools like Grounded AI’s reference checker or iThenticate’s citation matching capabilities to automate part of this process—but manual verification should remain the standard.
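For teams that want a first-pass triage step before the manual check, here is a sketch using the public Crossref REST API to test whether each DOI in a reference list actually resolves. It catches invented DOIs but not a fabricated title attached to a real one, which is why a human read of the resolved metadata still matters. The DOIs below are placeholders, not real references.

```python
# First-pass triage via the public Crossref REST API: check whether each DOI
# in a reference list resolves. This catches invented DOIs, but NOT a
# fabricated title attached to a real DOI -- a human still has to read the
# resolved metadata. The DOIs below are placeholders.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows this DOI."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Identifying yourself (with a contact email) is Crossref etiquette.
        headers={"User-Agent": "citation-checker/0.1 (mailto:ir@example.edu)"},
        timeout=10,
    )
    return resp.status_code == 200

reference_dois = ["10.1000/placeholder-1", "10.1000/placeholder-2"]

for doi in reference_dois:
    status = "resolves" if doi_exists(doi) else "NOT FOUND -- verify by hand"
    print(f"{doi}: {status}")
```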
Prohibited Uses
Your policy should explicitly prohibit listing AI as an author or co-author on any scholarly work, using AI to fabricate data, results, or citations, uploading confidential peer review manuscripts into AI systems, using AI to generate substantive content without disclosure, and submitting AI-generated work as original scholarship. These prohibitions should be linked to your institution’s broader academic integrity code, with specified consequences for violations that are proportionate to the severity of the offense.
Graduate Student Training
Graduate students are particularly vulnerable to AI integrity risks. They’re under intense pressure to produce, they’re often less experienced in navigating the nuances of scholarly norms, and they’re the primary users of AI tools for research tasks. Your institution should require dedicated training on ethical AI use for all graduate students—not a one-time orientation module, but an ongoing component of their research methods coursework.
I’ve seen this work effectively when it’s embedded in the dissertation process itself. One institution I advise requires doctoral students to submit an “AI Use Statement” with their proposal, documenting which AI tools they plan to use and how, which AI tools they will avoid and why, and how they will verify all AI-assisted outputs. This statement is reviewed by the dissertation committee and updated at each milestone. It normalizes AI use while establishing clear boundaries and accountability.
The training itself doesn’t need to be a standalone course—though some institutions are building dedicated modules. More often, it works best when woven into existing research methods coursework. Add a unit on AI-assisted literature search that includes a hands-on exercise where students use an AI tool to generate a reference list, then verify every citation against Google Scholar or PubMed. When students discover for themselves that 20–30% of AI-generated citations don’t exist, the lesson sticks in a way that no policy document ever could.
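If you want to give students a programmatic companion to that manual exercise, here is a sketch using NCBI's free E-utilities API to search PubMed for each citation title. Zero hits does not prove fabrication (PubMed covers only biomedical literature, and AI tools reword titles); it simply flags which references the student must chase down by hand. The titles below are placeholders.

```python
# A sketch of the hands-on exercise using NCBI's free E-utilities API:
# search PubMed for each citation's title and count the matches. Zero hits
# flags a reference for manual follow-up; it does not prove fabrication.
import requests

def pubmed_hit_count(title: str) -> int:
    """Return the number of PubMed records matching a title phrase."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": f'"{title}"[Title]', "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

ai_generated_titles = [
    "Placeholder Title One: An Invented Study",
    "Placeholder Title Two: Another Invented Study",
]

for title in ai_generated_titles:
    hits = pubmed_hit_count(title)
    flag = "verify by hand" if hits == 0 else f"{hits} candidate match(es)"
    print(f"{title!r}: {flag}")
```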
Advisors and dissertation chairs need training too. A faculty advisor who doesn’t understand how AI tools generate references can’t effectively supervise a student who uses those tools. We’ve developed a 90-minute workshop for dissertation advisors that covers recognizing AI-generated content in student drafts, spot-checking citations efficiently, having productive conversations with students about appropriate AI use, and understanding publisher requirements that students will face when they submit for publication. Several institutions have made this workshop mandatory for all faculty who serve on dissertation committees. The feedback has been overwhelmingly positive—faculty consistently report that the training changed how they supervise student research.
What This Means for Different Types of Institutions
Not every institution has the same research profile, and your policy framework should be proportionate to your context.
Even if you’re launching a trade school with no research mandate, your faculty may publish, your accreditor may ask about scholarly integrity, and your institutional research function (outcomes data, program effectiveness studies) needs data integrity standards. Don’t assume this conversation doesn’t apply to you—it applies to every postsecondary institution, just at different scales.
There’s a cultural dimension here too. How your institution handles research integrity signals what kind of academic community you’re building. An institution that takes AI integrity seriously—that trains its people, verifies its data, and enforces its standards—attracts faculty and students who value rigor. An institution that treats integrity as a bureaucratic checkbox attracts people who treat it the same way. For a founder building an institution from scratch, you have the rare opportunity to establish a culture of integrity from day one rather than trying to retrofit it after problems emerge.
What We’re Seeing in Practice
The Graduate Program That Caught It Early
A small graduate program in education research that I advise implemented an AI Use Statement requirement for all dissertation proposals starting in fall 2025. In the first cohort, three of twelve students disclosed using AI for parts of their literature reviews. During committee review, one student’s citations were checked—and two of the 47 references didn’t exist. They were AI-generated fabrications that the student hadn’t caught.
The student wasn't trying to cheat. She had used an AI tool to help organize her literature search and hadn't realized that the tool had stitched together details from real papers into plausible-sounding but fictional references. Because the AI Use Statement process flagged her use of AI, the committee caught the problem before the proposal was approved. The student replaced the fabricated citations with verified sources, completed her proposal successfully, and later told me the experience fundamentally changed how she uses AI in her research—she now verifies every single reference against Google Scholar before including it.
The cost of implementing the AI Use Statement requirement? Essentially zero—it’s a form and a committee conversation. The cost of not catching those fabricated citations? Potentially a retracted dissertation, damaged program reputation, and an accreditation concern. This is the kind of low-cost, high-impact intervention that every institution with graduate programs should adopt immediately.
The Faculty Member Who Lost a Publication
At a mid-sized university, a tenure-track faculty member used AI to help draft portions of a journal article, including generating a preliminary reference list. The article was submitted to a respected journal, passed peer review, and was published. Three months later, a reader contacted the journal to report that two citations in the paper referenced articles that didn’t exist.
The journal initiated an investigation. The faculty member acknowledged using AI for reference generation but said he had “reviewed” the references without independently verifying each one. The journal issued a correction for the two fabricated citations but also flagged the article in its integrity database. The faculty member’s tenure case, which was pending, was delayed by six months while the institution’s research integrity officer reviewed the incident. Ultimately, the faculty member received tenure—but the incident was documented in his file, and his department chair told me privately that it would have been a different outcome if additional integrity issues had been found.
The takeaway for institutional leaders: one fabricated citation in a published paper can derail a career and create an institutional liability. Citation verification is not optional. It’s a professional obligation that your faculty handbook needs to make explicit.
The Trade School That Protected Its Outcomes Data
This example doesn’t involve traditional published research, but it’s equally important. A career college offering allied health programs used AI tools to analyze student outcomes data for its annual accreditation report. The institutional effectiveness director fed three years of graduate placement and salary data into an AI analytics platform and used the output to generate charts, narratives, and trend analyses for the report.
When we reviewed the report, we found that the AI tool had interpolated data points where records were incomplete, effectively generating estimated salaries for graduates whose actual employment status was unknown. The report presented these estimates as actual data. That’s not just sloppy—it’s a potential misrepresentation to the accreditor. If discovered during a site visit, it could have triggered an integrity investigation.
The fix was straightforward: we retrained the institutional effectiveness team on how to verify AI-generated analyses, required that all estimated or interpolated data be clearly labeled as such, and implemented a human review step before any AI-generated institutional research was submitted externally. Total cost: about $3,000 in staff training time. The cost of the alternative—an accreditation integrity finding—would have been immeasurably worse.
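The labeling requirement is easy to operationalize. Here is a minimal sketch, assuming pandas, of the core rule: an estimated value never travels without its label. The column names and figures are hypothetical, not the college's actual placement data.

```python
# A minimal sketch of the fix described above: never let an estimated value
# masquerade as an observed one. Column names and numbers are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "graduate_id": [101, 102, 103],
    "salary": [52000.0, None, 61000.0],  # None = employment status unknown
})

# If a value must be estimated, label it -- and carry the label through
# every chart and narrative built on this table.
records["salary_source"] = records["salary"].apply(
    lambda v: "observed" if pd.notna(v) else "estimated (cohort median)"
)
records["salary"] = records["salary"].fillna(records["salary"].median())

print(records)
# Any externally reported figure derived from rows marked "estimated"
# must say so explicitly in the accreditation report.
```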
What It Costs to Get Research Integrity Right
Since this audience thinks in terms of budgets and ROI, here's the cost picture. Policy development typically runs $5,000–$10,000 with external consulting support, and much of that can be folded into your broader AI governance work. Graduate student training can be embedded in existing research methods courses at little or no additional cost. Citation verification tools range from free (manual Google Scholar checks) to $2,000–$5,000 annually for institutional licenses. For most institutions, the total investment lands between $5,000 and $15,000.
Compare these costs to the expense of a single research integrity crisis. A retraction investigation at a mid-sized institution typically costs $15,000–$50,000 in staff time, legal consultation, and administrative overhead. If the retraction generates media attention or accreditation scrutiny, the indirect costs—reputational damage, enrollment impact, delayed accreditation—can be orders of magnitude higher. Proactive research integrity governance is one of the highest-ROI investments an institution can make.
Accreditation and Research Integrity: What Evaluators Will Look For
If your institution has any research component—even faculty scholarship at a teaching-focused school—accreditors will evaluate your research integrity frameworks as part of their review.
SACSCOC (Southern Association of Colleges and Schools Commission on Colleges) evaluates whether institutions maintain integrity in research and scholarly work as part of its Principles of Accreditation. For institutions with graduate programs, this includes dissertation oversight, faculty publication expectations, and institutional research governance.
HLC (Higher Learning Commission) assesses whether institutions demonstrate a commitment to intellectual inquiry and ethical conduct in research. For institutions seeking to add graduate programs or research activities, HLC will want to see policies that address current challenges—and AI is the current challenge.
Programmatic accreditors in fields like health sciences (CAAHEP, ABHES), counseling (CACREP), and education (CAEP) may have additional requirements around research ethics and scholarly integrity. If your program prepares students for professions where research literacy is a competency requirement, your AI research integrity policy is part of your accreditation evidence.
My advice: include your AI research integrity policy in your accreditation self-study or application. Reference specific publisher guidelines you’ve aligned with. Describe your citation verification procedures. Document your graduate student training. This positions you as an institution that takes integrity seriously—which is exactly what accreditors want to see.
Key Takeaways
1. AI-generated hallucinated citations are a rapidly growing crisis in academic publishing. An estimated tens of thousands of 2025 publications contain fabricated references, and the rate is accelerating.
2. Every major academic publisher—Elsevier, Springer Nature, Wiley, Taylor & Francis, SAGE—prohibits AI authorship, requires disclosure of AI use, and holds human authors fully accountable for all content.
3. Peer reviewers are prohibited by every major publisher from uploading manuscripts into AI tools. Faculty who review for journals need to understand and comply with these requirements.
4. AI integrity risks span the entire research lifecycle: literature review, data analysis, writing, peer review, and even citation management.
5. Institutional research integrity policies must address AI explicitly—including disclosure requirements, citation verification procedures, prohibited uses, and graduate student training.
6. Citation verification against actual databases (Google Scholar, PubMed, Scopus) must be a non-negotiable requirement for all scholarly work at your institution.
7. Graduate students are particularly vulnerable. Require AI Use Statements for all theses and dissertations, reviewed by the student’s committee.
8. IRBs need AI-specific protocols for research involving human subjects data processed by AI tools.
9. Even institutions without a research mandate need basic AI integrity standards for faculty scholarship, institutional research, and student capstone projects.
10. The cost of prevention—policy development, citation verification training, graduate student education—is negligible compared to the cost of a retraction, accreditation concern, or reputational crisis.
Frequently Asked Questions
Q: How widespread is the problem of hallucinated citations in academic publishing?
A: It's significant and growing rapidly. Research indicates that AI-generated references have a fabrication rate of 18–28% (with older models reaching 55%). An analysis of nearly 18,000 papers accepted at three computer science conferences found that 2.6% of 2025 papers had at least one potentially hallucinated citation—up from 0.3% the previous year. A Nature investigation estimated that tens of thousands of 2025 publications likely contain AI-generated invalid references. The problem is most acute in fields with high publication pressure and rapid AI adoption, but it's spreading across all disciplines.
Q: Can faculty use AI to help write journal articles?
A: Yes, with significant caveats. All major publishers permit AI use for language editing and polishing but require transparent disclosure of how AI was used. AI cannot be listed as an author, and human authors bear full responsibility for accuracy, originality, and integrity. If AI contributed to substantive content—analysis, argumentation, methodology—that use must be disclosed in the Methods or Acknowledgements section. Basic grammar and spell-checking generally doesn’t require disclosure. When in doubt, disclose.
Q: What should our institutional policy say about AI in graduate research?
A: At minimum, require all graduate students to complete training on ethical AI use in research, submit an AI Use Statement with their thesis or dissertation proposal, verify all citations against actual databases before submission, and disclose any AI tool use in their final document. Align your requirements with the disclosure standards of major publishers in your field. Make citation verification a graded component of research methods courses, not just a policy statement in a handbook.
Q: Are there tools that can detect hallucinated citations?
A: Several tools are emerging. GPTZero offers a hallucination checker that scans papers and verifies citations against academic databases. Grounded AI provides reference verification services to publishers. Frontiers has developed an in-house AI tool for flagging reference-related issues at the point of submission. iThenticate’s plagiarism detection platform can identify partial or suspicious citation matches. However, no automated tool catches everything—manual verification against Google Scholar, PubMed, or the source journal remains the gold standard.
Q: How does this affect our accreditation?
A: Accreditors evaluate research integrity as part of institutional effectiveness and academic quality. If your institution produces or supports scholarly work (faculty publications, student research, institutional research), having an AI-specific research integrity policy strengthens your accreditation position. Not having one creates a gap that evaluators may flag, particularly as AI integrity concerns become more prominent in higher education discourse. Include your policy and training documentation in your accreditation file.
Q: What about AI use in undergraduate research and capstone projects?
A: The same principles apply at an appropriate scale. Undergraduates conducting research or completing capstone projects should be required to disclose AI tool use, verify all citations, and demonstrate that they understand the limitations of AI-assisted research. This is both an integrity requirement and a learning objective—students who learn to use AI critically and transparently in their capstone work are better prepared for graduate school and professional environments where these skills are increasingly expected.
Q: Can a faculty member lose tenure over AI-related research integrity issues?
A: Potentially, yes. Research integrity violations—including fabricated citations, undisclosed AI use that constitutes ghostwriting, or AI-generated data—are grounds for investigation under most institutional research integrity policies. Depending on the severity and the institution’s policies, consequences can range from a formal letter of reprimand to denial of tenure to termination. Even when the integrity violation is unintentional (as with hallucinated citations the researcher didn’t catch), the incident is typically documented and can affect promotion decisions.
Q: Should we ban AI from research entirely?
A: No. That would be counterproductive and probably unenforceable. AI tools offer genuine benefits for research—literature discovery, language editing, data analysis assistance, and administrative efficiency. The goal is responsible, transparent use with appropriate verification and oversight. A blanket ban pushes AI use underground, where it’s harder to govern and more likely to produce integrity problems. Governance is better than prohibition.
Q: How do IRB protocols need to change for AI?
A: IRBs should require researchers to disclose any AI tools used in data collection, analysis, or interpretation as part of their protocol submissions. If AI tools process personally identifiable or protected health information, the protocol should address where data is processed, how it’s stored, and whether the AI vendor’s terms of service allow data retention or model training. IRBs should also evaluate whether participant consent forms adequately describe AI’s role in the research. These aren’t radical changes—they’re extensions of existing data governance principles to a new category of tools.
Q: What’s the cost of implementing an AI research integrity framework?
A: Very low, especially compared to the risks. Policy development costs $5,000–$10,000 if you use external consulting (much of which can be folded into your broader AI governance work). Graduate student training can be embedded in existing research methods courses at no additional cost. Citation verification tools range from free (Google Scholar manual checks) to $2,000–$5,000 annually for institutional licenses. The total investment for most institutions is $5,000–$15,000—a fraction of the cost of a single retraction or integrity investigation.
Q: How often should we update our AI research integrity policy?
A: At minimum, annually. Publisher policies are evolving rapidly—what was acceptable in 2025 may be prohibited in 2027. Federal guidance on AI in research (from agencies like NIH, NSF, and the Department of Education) is also developing. Designate a faculty member or committee to monitor publisher policy changes and federal guidance updates, and build a mechanism for interim policy revisions when significant changes occur.
Q: Are there grants that support research integrity infrastructure?
A: Yes. NSF and NIH both fund responsible conduct of research training, which can include AI integrity components. The Department of Education’s FIPSE program ($169 million for responsible AI integration) may fund research integrity policy development and training as part of a broader AI governance proposal. State-level workforce and education grants may also support institutional AI governance initiatives. Frame your research integrity work as part of your institution’s overall responsible AI strategy.
Q: What role should faculty governance play in AI research integrity policy?
A: A central one. Research integrity is fundamentally an academic matter, and your faculty—through a faculty senate, academic council, or research committee—should be involved in drafting, reviewing, and updating AI research integrity policies. This isn’t just good governance; it’s practical. Faculty who participate in developing the policy are far more likely to comply with it and to hold their students accountable. Document the governance process—accreditors want to see that research integrity policies are products of genuine institutional deliberation, not top-down mandates.
The Bottom Line
The intersection of AI and research integrity isn’t a future problem—it’s a current crisis that’s reshaping scholarly publishing in real time. Institutions that build clear policies, train their researchers, and enforce citation verification standards will protect their scholarly reputation and strengthen their accreditation position. Those that wait will inevitably face the consequences: retracted papers, embarrassed faculty, skeptical accreditors, and a credibility gap that’s much harder to close than it was to prevent.
The good news? The interventions are low-cost and high-impact. A policy document, a training module, a citation verification requirement, and a culture of transparency—that’s all it takes to get ahead of this. Start now, because the volume of AI-assisted scholarly work is only going up.
Current as of April 2026. Regulatory guidance, publisher policies, and technology platforms evolve rapidly. Consult current sources and expert advisors before making institutional decisions.
If you’re ready to explore how EEC can de-risk your AI-integrated launch, reach out at sandra@experteduconsult.com or +1 (925) 208-9037.