At our recent webinar, we were joined by Maria Arregui, Ph.D., Assistant Director, Evidence Generation and Communications at Xcenda; Chris Waters-Banker, Ph.D., Director of Consulting Operations at Maple Health Group; Rajshree Pandey, Ph.D., Associate Director, Evidence Synthesis at Curta; and Lee-Anne Bourke, MLIS, Account Executive at DistillerSR, for an engaging discussion on how to streamline health technology assessments (HTAs) by automating literature reviews to deliver efficient, effective, and high-quality assessments. The conversation was moderated by Dr. Patti Peeples, President at The Peeples Collaborative.
Q: What industry trends have you experienced in the last couple of years that are making you rethink how you conduct evidence assessments?
Rajshree: Health technology assessment is multifaceted. It includes clinical evidence, health economics evidence, real-world evidence, and the patient perspective. The aim is to adopt high-quality methods and incorporate the most recent evidence to inform evidence-based decision-making. Systematic literature reviews help synthesize this evidence to answer our research questions, but time and resource constraints are real. The diversity of evidence sources and topic areas leads to a high volume of literature. Another challenge is HTA deadlines: the evidence portfolio must be updated constantly while maintaining a quality methodological process.
Maria: Data is generated at an unprecedented pace, and the number of health technologies under investigation keeps growing. There are over 18,000 active studies on clinicaltrials.gov, driven by the need for faster patient access to effective health technologies, especially for indications with significant unmet needs. Also, the evidence included in a payer submission must be no more than three to six months old, so the longer a review takes to complete, the more updating is required to keep it current. Coordinating systematic reviews is also very challenging when teams work remotely across different time zones; managing all the moving parts and tracking reviewers efficiently is tough.
Q: Increasingly, we are hearing about living systematic literature reviews (SLRs). Could you briefly explain what a living SLR is and how it is being used by pharma and HTA bodies?
Maria: A living systematic review is a systematic review that is continually updated, with the aim of incorporating relevant new evidence as it becomes available. The core methods are the same as those for traditional SLRs but add elements such as a pre-specified update frequency that determines when to incorporate new data. Living systematic reviews are well suited to situations where there is considerable uncertainty in the existing evidence and emerging evidence is likely to change what we currently know.
Q: What process changes have you implemented at Maple Health to address some of these trends with increasing data volumes, complexities and people requirements?
Chris: The most significant change has been leveraging automation with a tool like DistillerSR. The tool comes with many features that help mitigate these challenges and make them more manageable. Its flexibility lets you apply the methodology each project needs. DistillerSR allows me to systematically apply differences in methodology and still feel very confident about the outcome.
Q: What are some of the greatest benefits you have seen in running your departments with the implementation of an automated platform?
Chris: One of the biggest benefits of an automated platform in managing the breadth of evidence is being able to structure the screening forms to accommodate it. The same process was clunky and error-prone with spreadsheets, which makes automation a huge time saver for us. Labels and filters also help control the workflow as screening progresses. For example, a client needed cost-effectiveness analysis input on a 10,000-citation SLR, which would have taken days or weeks because of the volume of data, but DistillerSR's labels and filters made it much easier. An AI reprioritization tool that uses natural language processing is also important, as it builds an algorithm from the decisions the screeners make. With DistillerSR, we discover 95% of relevant includes after screening approximately 30 to 40% of our total volume. Depending on our starting volume, reprioritization means we can move to the next phase of screening days or even weeks ahead of schedule.
Q: What were you doing before leveraging automation, and what changes are you seeing in the quality of your process after implementation?
Rajshree: Spreadsheets were the most common way to manage literature reviews before automation and AI. Among the challenges with Excel: it is error-prone and quality checks are difficult, which makes resolving conflicts time-consuming and hard to track. At Curta, we use DistillerSR, which speeds up the literature review process and provides a cleaner workflow with better discrepancy resolution, task assignment, and timeline management. Our processes have been transparent and reproducible since we moved from manual work to automation. Collaboration is also much easier because DistillerSR allows multiple users to work on a project at the same time.
Q: Could you discuss the state of HTA guidelines for reporting as it pertains to automating the SLR process? What is acceptable, and where do we stand with clear guidance from these HTA bodies?
Chris: I haven't seen any guidance from HTA bodies on how they would prefer AI to be used for literature reviews, or whether they allow it at all. We think they are using it; we just need to know what they find acceptable. The PRISMA 2020 reporting guideline specifies AI as an acceptable reason for exclusion, especially at the extraction phase, and the Cochrane Handbook has also integrated the use of AI technology in literature reviews. What we still don't understand is the extent of acceptability.
Q: One of the main barriers to the adoption of AI SLR tools is the absence of sufficient evidence-based reports on the effectiveness of AI techniques. How does AI or automation help with empirical methodological research, and how do they generate the evidence?
Chris: This may be why we are not receiving guidance from HTA bodies. One barrier is that some of the AI tools are proprietary, which makes full transparency difficult. It's important to be clear, open, and honest that these tools augment the human experience: humans have to train and run the program, and we are simply using tools to help. I have personally seen situations where the computer made better decisions than human screeners. Automation allows for unbiased decision-making because it applies the criteria it was trained on consistently. DistillerSR helps meet the standards for SLRs, and I think it's a platform primed for living systematic reviews. I'm really hopeful that we're going to get this guidance soon.
Q: How has your organization ensured that there are human touchpoints in the automation process?
Rajshree: Human touchpoints are important, and they start at the protocol phase. If the priorities and strategies are set right from the outset, automation becomes more effective and reproducible. There's a lack of consensus on the use of automation, and I think more awareness and discussion are needed to show its importance.
Q: How does DistillerSR help you utilize your staff more efficiently at Maple Health?
Chris: It's always a good idea to have your best screeners work with a tool like this first, because they understand the content and references you're looking for. Once roughly 90% of the relevant screening volume has been found, you can move the expert screening group to full-text screening and bring in the junior screening group to ensure no includes were captured as excludes. This allows the phases and processes to run more quickly.
Q: What is your favorite feature of DistillerSR, and why?
Maria: The deduplication feature is one of my favorites. I run deduplication at the start of a project, whenever there's an update to the systematic review, and whenever I need to upload an additional set of references. It's a real time saver.
Rajshree: I really like the reporting process: you record the inclusion and exclusion decisions, and then you can smoothly generate the PRISMA flowchart without creating it manually. In addition, the quality control features allow for a very robust and transparent review process, and the ability to manage users and projects by assigning them to different levels and tasks saves time.
Chris: Exporting reasons for exclusion is one of my favorite features of DistillerSR. This used to take hours but can now be done in minutes. If you set up your screening forms around your PICOS criteria, you can isolate your excludes, export them to Excel, and sort.
Q: Does the PRISMA generate automatically, and is there automation for the extraction/validation phase?
Maria: You can generate the PRISMA 2020 flow diagram automatically in DistillerSR and export it to Excel. For extraction, we create forms with the set of questions the reviewer needs to answer, and you can customize them for your specific projects.
Chris: DistillerSR also offers standardized forms for quality appraisal. You can pull up specific quality appraisal forms or make your own depending on the needs of your project. A human still fills in the information, but the forms keep it nice and tidy.
Q: How does DistillerSR address transparency challenges in literature reviews, and how do you use it for conflict resolution?
Maria: DistillerSR lets you monitor screening in real time, which makes conflict resolution between screeners easier to manage. The tool allows us to pair junior and senior reviewers on the same set of references, which supports quality assurance, saves a lot of time, and ensures we don't miss relevant references. Junior reviewers learn and gain confidence in the process, while the senior reviewer's expertise is applied across a wider range of references.
Q: How has DistillerSR helped you manage literature heterogeneity?
Rajshree: The volume and diversity of the available literature make this a big challenge, but with DistillerSR we've been able to label and tag studies from supplementary searches, conferences, and databases during screening. The keyword highlighting feature draws the reviewer's attention to useful information with color-coding during the screening process. This also prevents rescreening, which can be time-consuming.