On Wednesday, 25 March 2026, publishers, researchers, and legal experts came together for a full-day workshop at GESIS – Leibniz Institute for the Social Sciences in Mannheim to discuss the emerging policy and practical challenges of Large Language Model (LLM) tools in academic publishing.

Morning – Impulse Talks
- Publishers: Representatives from Elsevier (Inez van Korlaar), Wiley (Chris Mavergames) and Springer Nature (Henning Schoenenberger) shared their current policies, how they were established, and the pressure to adapt them to the rapidly changing use of AI tools in the community.
- Academic institutions: Speakers from GESIS (Philipp Mayr), TU Berlin (Sonja Schimmler) and IDS Mannheim (Jennifer Ecker) presented LLM tools and examples of their use in analysing scholarly articles. Patrick Brunner (FIZ Karlsruhe) delved into perspectives on copyright, data protection, and regulatory compliance.
Afternoon – Interactive break‑out sessions
In the breakout session “Here and now: What works, what needs work?”, participants reviewed AI policy statements from publishers and funding agencies, identified helpful statements, and collected ideas for turning identified pain points into useful guidance. In the parallel session “Dystopia or Utopia: Imagining the future in a press release”, participants were invited to speculate about the future and create dystopian and utopian visions of science and publishing in the context of AI use, in the form of an imaginary press release issued in 2036.
Key Take‑aways
| Theme | Insight |
|---|---|
| Standardised guidelines | A strong demand for unified, cross‑publisher recommendations to avoid fragmented policies. |
| Tool guidance | Need for a “blacklist” of AI tools that are deemed unsafe or non‑compliant, as well as vetted alternatives. |
| AI‑assisted peer review | Practical how‑to advice is required for integrating LLMs into reviewer workflows without compromising quality or confidentiality. |
| Burden reduction | Clear, actionable guidance can ease the workload for authors and reviewers while maintaining rigor. |
| Opportunity‑focused approach | Emphasise the potential of AI to improve efficiency, discoverability and transparency when used responsibly. |
| Stakeholder‑specific recommendations | Tailored guidance for different groups (publishers, editors, researchers, libraries) is essential. |
Final Session and Next Steps
In a final plenary session, the attendees screened the landscape of AI-related guidelines for must-haves and missing pieces, as well as for desired modifications and the deletion of irrelevant policies. The group also identified the sheer number of existing guidelines as a major obstacle to complying with them, and expressed the wish for an interactive guide to the guidelines and better alignment among them.
The results of the workshop will be fed into the International Association of Scientific, Technical & Medical Publishers (STM) working group on AI policy and guidelines, enabling publishers to work towards a harmonisation of their guidelines.
Last but not least, the organizers collected suggestions for future topics in the LLMs in Scholarly Publishing workshop series. These included project proposals and the funders’ perspective as a so-far underrepresented aspect, as well as the topic of research data publications.
Interested persons are encouraged to sign up for the mailing list of the LLM-publishing series: https://lists.nfdi.de/postorius/lists/llm-publishing.lists.nfdi.de/
Thanks
A heartfelt thank you to all speakers—Inez van Korlaar, Chris Mavergames, Henning Schoenenberger, Jennifer Ecker, Sonja Schimmler, Philipp Mayr‑Schlegel and Patrick Brunner—and to the participants who made the discussions lively and productive.
Special appreciation goes to GESIS for hosting the event and to the NFDI association and its consortia (NFDI‑MatWerk, KonsortSWD – NFDI4Society, NFDI4DS, Text+, NFDI4Earth, NFDI4Chem) for the organisation and financial support.
This recap was prepared with the assistance of RWTHgpt (model: OpenAI GPT‑OSS 120B). The workshop organisers retain full responsibility for the content.