Experts from various disciplines discussed the use of AI in scientific work and publishing.

On 11 February 2025, more than 60 experts from science, publishing houses and start-ups met at the DIPF in Frankfurt to discuss the use of generative AI in science at the workshop “Large Language Models and the Future of Scientific Publishing”. The joint event of the NFDI consortia KonsortSWD, NFDI4Chem, NFDI4DataScience, NFDI4Earth, NFDI-MatWerk and Text+ created a rare platform for dialogue between disciplines – from the social sciences to materials science. This enabled multi-perspective discussions that transcended institutional and disciplinary boundaries.
The event kicked off with two contrasting keynotes: in “Prompt or Perish”, Prof Sandra Geisler from RWTH Aachen University demonstrated concrete LLM applications along the research data cycle – from ontology selection to automated Jupyter notebook creation. Her appeal: despite efficiency gains, critical review remains essential, especially for models prone to hallucination. In “Beyond the Hype”, Markus Kaindl from the Springer Nature publishing group outlined a future in which AI takes over first drafts of publications and researchers concentrate on their core competences. His outlook on cost-effective small language models and AI-supported trend analyses for publishers was picked up repeatedly in the discussion.
Ambivalent views emerged in in-depth sessions: on the one hand, LLMs enable automated plagiarism checks, metadata completion and more efficient peer review management. On the other hand, AI-generated content raises copyright issues, the review of AI-generated proposals takes time and the energy consumption of large models raises environmental concerns. Moderator Wendy Patterson from the Beilstein Institute also addressed issues relating to regulations and guidelines regarding the use of AI-supported tools in the publishing landscape. The increase in “synthetic” publications – from machine-inflated text passages to difficult-to-understand AI-generated data sets – was a particularly controversial topic of discussion among participants.
In the final discussion moderated by Philippe Genêt (German National Library), the controversial positions were condensed into a central leitmotif: AI tools must be understood as supporting instruments that complement – not replace – human expertise. The debate revealed a tension between technological potential and social responsibility.
The productive networking of different stakeholders proved to be one of the event's successes: start-ups presented AI solutions, e.g. for automatic abstract generation, publishers presented quality assurance systems, and researchers shared their experiences with prompt engineering and the use of AI more generally. The fact that six NFDI consortia worked together on this topic emphasises its relevance for research and for research infrastructures. The lively debates – from the handling of synthetic data to the reform of the peer review system – mark the beginning of a more intensive engagement with LLMs in the NFDI.
This guest article was written by Bernhard Miller from KonsortSWD.