Augmenting Systematic Literature Reviews

A Human-AI Collaborative Framework

TL;DR: Exploring how LLMs can support human researchers in conducting SLRs.

Systematic Literature Reviews (SLRs) are integral to research, but the sheer volume of new publications makes traditional manual methods increasingly challenging. The process is often slow, laborious, and difficult to scale.

Large Language Models (LLMs) offer a powerful way to augment this process, yet their integration raises critical questions about accuracy, bias, and methodological rigor. Simply replacing human effort with AI is not a viable solution.

To address this, we developed a structured Human-AI Collaborative Framework that integrates AI capabilities into an established review method. We empirically evaluated this framework by replicating a human-led SLR, assessing the AI’s performance on tasks like paper selection.

Our results show significant gains in efficiency while maintaining accuracy, provided that human oversight is retained and decision thresholds are carefully calibrated. The study contributes a practical and replicable framework for researchers, offering guidelines for balancing AI automation with human scholarly judgment without compromising academic integrity.
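To make the idea of calibrated thresholds with human oversight concrete, here is a minimal sketch of how AI-assisted paper screening could be triaged. This is an illustrative assumption, not the paper's actual implementation: the function name, the score source, and the threshold values are all hypothetical.

```python
def triage(relevance_score: float,
           include_threshold: float = 0.9,
           exclude_threshold: float = 0.2) -> str:
    """Route a paper based on a hypothetical AI-assigned relevance score in [0, 1].

    High-confidence scores are decided automatically; everything in the
    uncertain middle band is deferred to a human reviewer, preserving
    human oversight over borderline cases.
    """
    if relevance_score >= include_threshold:
        return "include"       # AI is confident the paper is relevant
    if relevance_score <= exclude_threshold:
        return "exclude"       # AI is confident the paper is irrelevant
    return "human_review"      # uncertain: escalate to a human researcher

# Example: three papers with illustrative scores
decisions = [triage(s) for s in (0.95, 0.5, 0.1)]
print(decisions)  # ['include', 'human_review', 'exclude']
```

Tightening the two thresholds toward each other automates more decisions but shrinks the human-review band, which is the efficiency/oversight trade-off the framework is designed to balance.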


Read the Full Paper