Harnessing the potential of AI for academic research
Four colleagues share their work and hopes for the future

Paolo Lombardi, Director of Technology Innovation, Taylor & Francis
A selection of recent key titles on AI Ethics from across CRC Press and Routledge
Olga Tchivikova, Director of Operations, Researcher Services, Taylor & Francis
Sunil Nair, Global Editorial Director for Science and Medicine, Advanced Learning, Taylor & Francis
In Deloitte’s "State of Generative AI in the Enterprise," a 2024 study of over 2,700 executives in the business community, "two of three surveyed organizations said they are increasing their investments in Generative AI."
However, "nearly 70% of respondents said their organization has moved 30% or fewer of their Generative AI experiments into production."
A 2024 Oxford University Press study also found mixed results on the adoption of AI in the academic community: "over three-quarters (76%) of researchers report using some form of AI tool in their research at present," yet just 27% say they have a "good understanding" of AI tools.
There is a lack of comprehension, and in some cases anxiety, even as we steadily absorb AI into our daily lives. Yet amid this confusion there is a strong sense of optimism. Elsevier recently surveyed nearly 3,000 researchers and clinicians, 95% of whom said they believe AI will "accelerate knowledge discovery." At Taylor & Francis, we believe this too.
Our role as a publisher is to help authors’ ideas make their fullest contribution to society. AI has the potential to transform publishing by enhancing the quality, efficiency, and accessibility of scholarly content. We are at the forefront of that transformation, and are partnering with AI developers to shape the direction of travel. Our ultimate goal is to successfully navigate these new technologies while mitigating their risks, in order to empower researchers to leverage this powerful technology.
I recently had the privilege of speaking with four Taylor & Francis colleagues who are working very hard to fulfill AI's potential for our customers. They include Paolo Lombardi, Director of Technology Innovation; Olga Tchivikova, Director of Operations, Researcher Services; Sunil Nair, Global Editorial Director for Science and Medicine, Advanced Learning; and Leon Heward-Mills, Managing Director of Researcher Services. I asked each of them to explain what they do in their day-to-day lives to harness the potential of AI at Taylor & Francis.
Paolo Lombardi, Director of Technology Innovation, Taylor & Francis
My role spans testing new technology, developing demos and prototypes, and advising our Executive Leadership, Legal, Operational Efficiency, and HR teams on AI adoption and best practices.
Since what I like to call "hurricane ChatGPT" arrived in late November 2022, Taylor & Francis’ awareness of AI and our approach have changed significantly. As a member of a cross-functional AI Task Force, I have helped shape our response to the opportunities and challenges of Generative AI, including our AI Policy in early 2023 and its review in spring 2024, guidelines for authors and editors, adoption of AI technologies like Microsoft Copilot, and relationships with AI R&D clients and partners.
Training LLMs on trusted, substantiated knowledge
We are living at a time when vast amounts of mis- and disinformation feed public discourse, some of it created with Generative AI tools. As a publisher, it is our role to curate content that can be trusted, and for that reason we believe it is important that LLMs (large language models) are trained on trusted, substantiated academic knowledge.
A big project I am working on right now is data quality for Generative AI Research and Development. Taylor & Francis is a content-rich company and, in that sense, an ideal setting for Generative AI R&D. However, Generative AI, a branch of Machine Learning, is not built using traditional rule-based programming but instead relies on examples and trial-and-error methods, which require large datasets.
Taylor & Francis’ 4 million-plus peer-reviewed articles and 190,000-plus advanced learning books contain high-quality text, filled with rare keywords and complex concepts, providing invaluable variability and depth beyond what public datasets like the internet or social media offer. However, data quality has posed a significant challenge.
Our workflows expertly prepare content for print and human use, not for machine processing. For instance, print-ready PDFs are central to our processes, yet PDFs are unsuitable 'as is' for AI. PDFs require transformations like OCR (Optical Character Recognition), which can introduce errors such as misspellings, incorrect formatting, or disrupted section order. Backlist books, in particular, need extensive processing before they can be 'ingested' by an AI system, though they contain foundational knowledge often absent in more recent publications.
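One small, concrete example of the data-quality work this implies: OCR often splits words across line breaks with a hyphen, and those fragments must be rejoined before the text is usable for AI training. The sketch below is illustrative only, not any Taylor & Francis pipeline, and handles just this one artifact.

```python
import re

def join_hyphenated(text: str) -> str:
    """Rejoin words that OCR split across a line break,
    e.g. 'publi-\ncation' -> 'publication'."""
    # Match a word character, a hyphen, a newline, then another word character,
    # and glue the two word fragments back together.
    return re.sub(r"(\w)-\n(\w)", r"\1\2", text)
```

A real cleanup stage would also need to repair misspellings and restore section order, which are much harder problems than dehyphenation.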
Another challenge is that modern Generative AI models—like OpenAI’s GPT or Google’s Gemini—are inherently unable to fact-check or correctly attribute sources (e.g., in image creation). The international R&D community has developed some mitigation techniques, like RAG (Retrieval Augmented Generation), but much work remains to create safe and responsible Generative AI systems.
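To make the RAG idea concrete, here is a toy sketch of its retrieval step: passages are ranked against the question and prepended, with their sources, to the prompt so the model can ground and attribute its answer. The corpus, the word-overlap scoring, and the prompt format are all assumptions for illustration; production systems use learned embeddings, not token counts.

```python
from collections import Counter

# Invented mini-corpus of attributed passages (illustrative only).
CORPUS = {
    "doi:10.1000/a": "Optical character recognition introduces spelling errors",
    "doi:10.1000/b": "Peer review checks manuscripts for integrity and rigor",
    "doi:10.1000/c": "Retrieval augmented generation grounds model answers in sources",
}

def score(query: str, passage: str) -> int:
    """Count shared lowercase word tokens between query and passage."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 1):
    """Return the top-k (source, passage) pairs by word overlap."""
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved, attributed passages so the model can cite sources."""
    context = "\n".join(f"[{src}] {text}" for src, text in retrieve(query))
    return f"Answer using only the sources below.\n{context}\nQuestion: {query}"
```

The point of the pattern is that the generator is constrained to cited context rather than its parametric memory, which is what makes attribution possible at all.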
I believe that Taylor & Francis can play a pivotal role in AI adoption, both within the academic community and for future AI research
With several data quality and content integrity initiatives already underway and more planned, I believe Taylor & Francis can play a pivotal role in AI adoption, both within the academic community and for future AI research. As a publisher of refined and complex academic content, Taylor & Francis is uniquely positioned to enhance the veracity, accuracy, diversity, and inclusivity of the world’s most valued knowledge. This knowledge can then be used to 'educate' AI to be more just, safe, and factual, in contrast to the misinformation often sourced from unregulated social media, unverified blogs, biased news, and potentially distortive data.
Artificial Intelligence may become a turning point in human history, and while we must guard against excessive disruption, I am confident AI can empower knowledge workers in ways previously unimaginable. Human creativity and academic output will gain new tools for searching and evaluating past research, accelerating advancements across all fields in unprecedented ways. AI assistants and automated research workflows will boost our capacity to address diseases, hunger, natural disasters, and, if well-regulated and applied, help strengthen global safety, well-being, and equality.
Olga Tchivikova, Director of Operations, Researcher Services, Taylor & Francis
As the Director of Operations at Taylor & Francis, my team supports editors and authors from submission to dissemination, using AI tools to guide researchers to the right journal and ease the submission process. I am excited about using AI technology in order to tailor our services to different communities and make the publishing process more personalized.
In addition, we use AI to maintain research integrity and ethics, especially in the face of challenges such as paper mills and bad actors. Our teams use various AI- and Machine Learning-powered tools which scan submitted content against millions of published works to identify similarities and ensure compliance with academic integrity standards. Along with this, we collaborate closely with other internal teams to ensure AI is used responsibly.
Keeping humans in the loop
Our Publishing Ethics and Integrity team, led by Sabina Alam, is pioneering the responsible use of AI in the publication process, and we are supporting its execution. The team has access to tools to assess papers in more detail when concerns are raised. These include Proofig and ImageTwin, AI-based tools that help spot cases of image duplication or manipulation.
For me, human creativity is the raison d'être for AI
AI is an enabler for human progress, and it’s important we keep humans in the loop at every step of the way in order to address challenges such as bias. For me, human creativity is the raison d'être for AI, as its potential to automate labor-intensive processes frees up time for us all to be creative – whether we are researchers or in the publishing industry.
Sunil Nair, Global Editorial Director for Science and Medicine, Advanced Learning, Taylor & Francis
I am the Global Editorial Director for Science and Medicine in Advanced Learning, but I also wear another hat as AI lead. That involves a whole host of activities, tasks and goals. I can share several examples.
Authors are increasingly using AI and need more guidance on how they can use it well and in a responsible manner. Paolo Lombardi, Director of Technology Innovation, Andrew Bostjancic, our Senior Policy and External Affairs Manager, and a number of us worked together to bring our AI guidelines up to standard.
In addition, Corinne Militello, our Vice President & Assistant General Counsel, and I created a separate document for Advanced Learning (books), providing more detailed information and use cases for authors writing in long form. Corinne and I also work closely with the editorial department to answer questions on when an author's use of AI abides by our policy and when it falls foul of it. This often involves complex decision making, as many cases are not clear-cut. But what I am most excited about is using AI in Advanced Learning (AL) to develop tools that make authors' lives easier.
I strongly believe that AI will revolutionize long-form academic writing
The technology is developing so rapidly that it's very hard to keep up. Thankfully, we have many expert colleagues who are very good about sharing key news and developments.
Another challenge is the perception of AI. Some people are embracing it wholeheartedly, while others are wary, even scared, of the technology. I fall on the side of being an AI optimist, but I am not naive about its many dangers and the need to bring people along as we use AI judiciously to advance as a company and as a society.
Setting clear policies for authors
I am really excited about the changes AI can bring for both internal processes and authors. It is important we give absolutely clear advice to our colleagues and authors. I think we'll make many gains by using AI to improve the publication process. A recent example is a project I worked on which involved using machine learning to optimize the release dates of our paperbacks. AI sees patterns much better than humans, and this allowed the external consultants we worked with to build an optimization model which should benefit our book authors and readers.
There are surely many more use cases out there waiting to be discovered. On the author front, I think we will soon be able to dramatically reduce the gap between short-form (article) and long-form (book) writing. I'm keen to develop and use author-assisted writing tools to make it much easier and faster for our authors to produce their content. One simple example is the availability of inexpensive AI translation to unlock content from other languages. We released a tool in January which will make it easier for our authors and editors to generate such content. I strongly believe that AI will revolutionize long-form academic writing.
Leon Heward-Mills, Managing Director, Researcher Services
AI is ubiquitous. We use it actively in AI-powered search engines and in tools such as Copilot for meetings and other day-to-day office shortcuts. It's on our phones too, so we're often using AI passively, sometimes without even realizing it.
Everyone is talking about generative AI, and the change it’s brought about in the last year or so. I think that the reality is that there's been a slow and steady creep into day-to-day business and personal life. It's not quite like electricity yet, but it's getting to be.
We have AI tools that we use in peer review, for example, we use a Journal Suggester tool to screen submissions and match them to appropriate journals using journal aims and scope and published content. We've got tools that help make research easier to discover, access, and understand, as well as tools that summarize, reformat, translate, and tag content. AI makes it easier for researchers and readers to find relevant and trustworthy content.
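A Journal Suggester of the kind described above can be pictured as a similarity match between a submission's abstract and each journal's aims and scope. The sketch below is a minimal illustration under invented journal names and scopes; real suggesters use trained models over published content, not simple set overlap.

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical journals and their aims-and-scope keywords (illustrative only).
JOURNAL_SCOPES = {
    "Journal of Applied AI": "machine learning artificial intelligence applications",
    "Clinical Trials Today": "clinical trials medicine patient outcomes",
}

def suggest(abstract: str) -> str:
    """Return the journal whose aims and scope best match the abstract."""
    return max(JOURNAL_SCOPES, key=lambda j: jaccard(abstract, JOURNAL_SCOPES[j]))
```

Even this crude overlap score shows the shape of the workflow: score every journal against the submission, then surface the best matches to the author.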
AI is not quite as ubiquitous as electricity yet, but it's getting to be!
In addition, we've got tools that help us with ethics and integrity issues. This has become increasingly important because inbound submissions that use AI to generate bogus content are becoming increasingly sophisticated. Partnerships in the sector with organizations such as the Committee on Publication Ethics (COPE) and the International Association of Scientific, Technical, and Medical Publishers (STM) help make sure we stay on top of how bad actors are using AI to produce fraudulent research.
It is a rapidly iterating process
We use off-the-shelf tools and tools we build ourselves. It is a rapidly iterating process, expensive and time consuming, but it's a fundamental part of what we must do as an academic publisher.
We've been experimenting with AI for years in terms of knowledge graphs; seeing where there are patterns and connections and where new research is emerging. I think we can choose to think about AI as a threat and a disruption, but I think if you focus too heavily on that you risk ending up on the wrong side of the fence. The CEO of Informa (Taylor & Francis' parent company) Stephen Carter has talked of companies being AI winners or AI losers. My role is to ensure that we remain on the right side of that divide. Ultimately, we want our customers to 'win' at AI, too.
The challenge is ensuring that we know enough about the technology to make the right choices. We need to take the time to learn and to understand the opportunities, and then as an industry, as an organization, as individuals in the organization, to jump on the opportunity that brings, whether it's experimenting in our personal lives or thinking about business opportunity, approaching it in the right way with the appropriate safeguards.
The partnerships we choose now are crucial. We have chosen partners who are aligned with Taylor & Francis on the importance of authors’ rights, and we're making sure that guardrails are in place so that trusted content is used in the right way. As a publisher and a knowledge business, we must be actively involved, because this is the clear direction of travel for academic publishing, academia, and research.
Properly harnessed AI where humans remain in the driving seat will speed up research to translate from bench to bedside. Cutting through all the noise, all of the discussions and opinions on AI you’re hearing right now, for me this is its fundamental role, and what I am most excited about.
Properly harnessed AI ... will speed up research to translate from bench to bedside ... for me this is its fundamental role, and what I am most excited about
Related links
- AI for academic research at Taylor & Francis
- AI Policy at Taylor & Francis
- Artificial Intelligence HUB - Routledge and CRC Press
- COPE's position on authorship and AI tools
- Generative AI in Scholarly Communications (STM)
- Dr. Sabina Alam discusses her work with publishing ethics and integrity
- What's stopping AI regulation?
- Artificial Intelligence and the environment