How AI helped Syneos Health’s Matthew Snodgrass improve client first drafts

Reducing time spent parsing regulatory rules could be a game-changer.


Working through the maze of FDA, FTC and other regulations that govern communications around pharmaceuticals and other healthcare products can be challenging for even the most experienced human.

But an AI will never get tired, rarely get confused and can be updated with just a few clicks of a mouse.

Matthew Snodgrass, AI innovation lead at Syneos Health Communications, is currently testing a custom GPT that will help create cleaner drafts of regulatory-compliant content – but that can never fully replace the discernment and judgment of a person.

Here’s how AI helped him.

Responses have been edited for style and brevity.

 

One of the thorniest problems for communicators in regulated industries is figuring out what the heck you can and can’t say legally. Tell me how this idea came about and how you’ve been working on this GPT.

At Syneos, the other, larger half of our family is in clinical trials. Dealing with patient information means very strict rules and regulations internally: very strict privacy policies and data retention and collection policies. On the communications side, which is typically a little freer to experiment, we’re still beholden to those strict rules, which in a way is very good, because it puts us in the mindset that we have to be very responsible, both from a data privacy and an ethics standpoint, in how this is used.

I’ve been working with my colleagues to find out what problems they have and what issues can be solved. Were there things that could be sped up or done better, faster? I decided to turn inward, because one of the other hats I wear is counsel on rules and regulations as they apply to pharma marketing: rules and regs from the FDA, the FTC and the U.S. Code of Federal Regulations. I thought, if I can combine all of the actual regulations and rules from federal entities with my expertise, knowledge and interpretation of those rules, could we create a GPT that mimics that interpretation, so we could use it to look at and analyze proposed content before it gets to the client?

What happens a lot of times is the MLR — medical, legal, regulatory — teams at pharma clients will look at a piece of content and send it back and say, ‘you can’t say this, and if you say this, then you have to say that, you can’t use this picture with this,’ and so on. So we wanted to create a tool that helps us get ahead of that, produce a better product, speed up the process and scale beyond having content flow through just one person or a couple of people.

So this is not replacing human oversight. This is helping just get a cleaner draft to the client, essentially?

Exactly. You hit the nail on the head of how I recommend using AI: use it for drafts only, and trust but verify. It’s always going to need the human element to verify.

What I’m hearing is that people fear AI takes over everything. And that’s not going to be the case. What I hear from some clients is that they want humans involved a little bit, while AI speeds up everything else and makes everything cheaper and quicker. And that’s not necessarily the case either. It’s going to be a mixture, where we work together with an AI on things like research and drafting, so that the context of a person, combined with the speed and volume of information of an AI, produces a better output. We’ll hand off to AI the elements it can simply do better, like analysis, summaries and distilling large volumes of information. But we’ll keep the elements that currently only humans do well: strategy, creativity, content development, the truly human-centric elements.

Have you gotten to the point where you’re talking with clients about this GPT, and if so, what’s the reaction?

The conversations that we’re having with clients are very similar to the ones we had 15 years ago with social media. Some of them are really pushing because of internal champions to be at the forefront of experimentation and trying it out. Some are behind because they may be a small biotech that’s really focused on their research and development and just don’t have the resources to push the AI envelope yet. It’s very similar.

Have you had anyone at the other end saying, I don’t want AI on any of the materials you’re working on for us? Have you gotten that reaction?

Yes, and it’s been for different reasons. One, they’re not so sure about it. Or, what I see often, they may hop into Copilot, ask a very simple prompt that isn’t comprehensive, and get a non-comprehensive answer. They go, ‘oh, that’s not good. I don’t want anybody using it.’ Or it’s a comms team that really wants to push the envelope, but their legal team isn’t ready to let them get to that point yet, because they don’t have their ducks in a row.

Tell me a little bit more about how you’re going about building your regulatory GPT. What phase are you in with that process?

I would say we’re in the alpha phase right now: we have a proof of concept built, and I’m continuing to train it. I created a 16-page missive on how I interpret FDA, FTC and U.S. Code of Federal Regulations rules. I keep testing it with queries, and it may come back with something that’s not quite right. So I go back to the document, update it and re-upload it. I feel like I’m opening its brain, tinkering with it, closing it again and going back. Once I feel it’s confident enough to help our colleagues and clients, we’ll unveil it as an internal-use tool. It’s getting there.

For more on the fast-changing world of AI, join us at Ragan’s AI Horizons Conference in February

