From Excel Sheets to AI-Powered Evidence: A Conversation with Stryker Neurovascular’s Medical Writing Team

Mar 4, 2026

At a recent webinar, DistillerSR’s Nicole LeDrew sat down with Brie Paddock, Medical Writing Manager, and Laryssa Ballrich, Staff Medical Writer, at Stryker Neurovascular to talk about how their team transformed their clinical evidence workflows — and what they’re watching closely as AI reshapes the regulatory landscape. Both Laryssa and Brie work within the Global Clinical Regulatory Support function, focusing on Clinical Evaluation Reports and post-market clinical follow-up (PMCF) for a portfolio of predominantly Class III medical devices.

Let’s start at the beginning. What did your literature review process look like before DistillerSR?

Brie: It was fragmented, to put it simply. Some people were working in Excel sheets, some were extracting data directly into CERs, and a large portion of our team were contractors — so when someone left, their knowledge of how they’d done the work often went with them. There was no standardized way to deduplicate references, no consistent screening approach, and very little traceability for where data was actually living.

Laryssa: Everyone had their own system, which meant everyone had their own version of the truth. Bringing everything under one platform gave us a single way to screen, extract, and deduplicate — and it meant that data stayed with the organization, not with the individual.

Was there a specific moment that pushed you toward change?

Brie: I wouldn’t say there was one dramatic moment, but the MDR transition was a real forcing function. As you work through the shift from MDD to MDR, you start to see clearly where the process breaks down and where you’d want to do things differently next time. For us, the driving force was repeatability. We have a lot of Class III devices. We’re doing annual updates. We needed a process that could scale, not one that depended on institutional memory.

How did DistillerSR change how you work day to day?

Laryssa: The biggest shift was having everything in one place — the audit logs, the screening decisions, the extraction data. You can see exactly how a review was conducted, what inclusion and exclusion criteria were used, and who made what decision. That continuity is invaluable, especially when team members change.

Brie: For me as a manager, it changed how I can oversee work. I can’t spend as much time in every individual CER as I used to, but with DistillerSR, I know I can find data quickly if a question comes up. Everything has a consistent structure, consistent definitions, and it’s all in the same place. That consistency gives me confidence in the outputs even when I’m not in the weeds of every project.

You’ve talked a lot about templates. What role have they played in your process?

Brie: They’ve been foundational. The goal was never to make every CER identical — that’s not realistic across a portfolio of devices with different clinical evidence landscapes. But a good template gets you 80% of the way there. It gives your team a jumping-off point so that you’re not starting from scratch every time, and it ensures that the core questions are being asked the same way across every project.

Laryssa: We also use them heavily in training — both for new team members and cross-functionally to show other departments how we collect our data. When the questions are standardized and the forms are clear, it’s much easier to get everyone on the same page quickly.

That brings up something interesting — data harmonization. What did that process actually look like in practice?

Brie: Harder than you’d expect. Something as seemingly simple as “number of patients in a study” turned out to be incredibly ambiguous. Is it the total enrollment? The treatment arm? Patients at baseline or follow-up? We wrote a form we thought was clear, rolled it out, and got very different answers from different reviewers. That forced us to have real conversations about definitions — how do we define this? How should it be extracted? What does this number actually mean to someone reading the data six months from now?

Laryssa: Those conversations were uncomfortable in the moment, but they were essential. And what we realized later is that they were great preparation for working with AI — because the discipline of writing a precise, unambiguous question for a human reviewer is exactly the same discipline you need when writing a prompt for an AI model.

Speaking of AI — where do you see the biggest potential in a regulated environment like yours?

Laryssa: For us, the most exciting area is using AI to help analyze text and extract information from literature. That’s a huge part of what we do — pulling data out of articles, tables, and reports — and AI has real potential to help with the more standardized, high-volume parts of that work, freeing our team to focus on the complex clinical judgment calls.

Brie: I’d add that we’ve now built up years of carefully curated, harmonized data in DistillerSR, and the ability to query that dataset with AI — to ask it questions we hadn’t thought to ask before, to spot trends, to explore the data in new ways — is genuinely exciting. We’ve done the hard work of creating a clean, consistent evidence base. Now AI gives us new ways to unlock value from it.

Where does the human fit in all of this?

Brie: Centrally. The regulatory bodies want humans reviewing and signing off on submissions, and honestly, that’s the right call for now. There’s an enormous amount of clinical context that goes into reading a piece of literature — the background of the field, the conventions of how studies are written and reported, the nuances of a specific device’s evidence landscape. AI can assist, but it needs that context, and it needs a qualified person to verify what it produces.

Laryssa: I think of it as “trust but verify,” always. The goal isn’t to replace the expert — it’s to remove the manual burden so the expert can focus on what actually requires their expertise.

Any advice for teams who are just starting to think about adopting tools like DistillerSR or integrating AI into their regulatory workflows?

Brie: Don’t underestimate the process and the work. Technology is the easy part. The harder and more valuable work is evaluating your current process, getting cross-functional alignment on definitions and expectations, and making sure that when you do implement a tool, you’re building on a solid foundation. We spent about six months just getting our templates and document workflows aligned before we fully launched — and that upfront investment paid off.

Laryssa: Start with the harmonization. If your data isn’t clean and consistently defined, AI won’t fix that — it’ll just produce inconsistent outputs faster. Get the data governance right first, and the AI piece becomes much more powerful.

Brie Paddock is Medical Writing Manager and Laryssa Ballrich is Staff Medical Writer at Stryker Neurovascular.

Vivian MacAdden, DistillerSR

    Vivian MacAdden is DistillerSR's Senior Manager, Industry Marketing - Medical Devices. Throughout her career, she has accumulated 20 years of strategic marketing experience in various industries in Canada and international markets such as Brazil, China, Singapore, Jordan and Japan. A problem solver at heart and forever an optimist (and karaoke lover), she is passionate about telling great stories that make a positive impact on the world.

