The Medicaid Outcomes Distributed Research Network (MODRN) [Podcast]

By Jess Williams | October 10, 2022

The Medicaid Outcomes Distributed Research Network (MODRN), started by AcademyHealth, is a collaborative research network of state Medicaid agencies and university partners. The goal of the network is to enable analysis and learning about Medicaid by facilitating comparisons across states and aggregating data with a shorter lag time than other available sources. This week I was able to interview Dr. Julie Donohue, the corresponding author of a recently published Medical Care paper detailing the network. You can listen to the audio here, or wherever you get your podcasts. The written version below has been slightly edited from the audio version for readability.

Background

Jess Williams: Today we’ll be speaking with Dr. Julie Donohue, who is a Professor and Chair in the Department of Health Policy and Management in the Graduate School of Public Health at the University of Pittsburgh. She’s the director of the Medicaid Research Center as well. Welcome to Healthy Intersections, Julie.

Julie Donohue: Thank you. Nice to be with you.

Jess Williams: In the September issue, you and a group of colleagues published an article describing the initial development and use of the Medicaid Outcomes Distributed Research Network, MODRN for short. Would you mind giving our listeners who haven’t had a chance to read it yet a little summary of what MODRN is and how it can be used?

Julie Donohue: Sure. So Medicaid, as most readers of Medical Care will know, is a 50-state program that’s jointly financed by states and the federal government and really administered by each of the separate states. So there’s a lot of variation in Medicaid policies, populations, and programs, a kind of infinite amount of variation. And we don’t have a great analytic infrastructure for learning from that variation; we don’t have a Medicaid lab. So the benefits of federalism and the opportunity for states to innovate with their own programs, and then to learn from that innovation, are really kind of missing. MODRN was founded to try to fill in some of those gaps. It’s made up right now of 13 university-state partnerships. These are universities that have master agreements with their state Medicaid agency and provide ongoing research and analytic support for those Medicaid agencies. They know the Medicaid data, and they know the Medicaid programs in their states really, really well. And they’re accountable to the state policymakers; the state agency leaders are their clients. So MODRN creates a kind of analytic infrastructure and an organizational infrastructure that allows those 13 state-university partnerships to work together on common research projects without sharing data. We don’t have data use agreements; we don’t transfer claims data or individual-level data across states or universities. That would have required at least five years of legal work, and we would still be working on the data sharing. We really take the distributed research network part of this seriously. We at the University of Pittsburgh analyze Pennsylvania Medicaid data, and our colleagues at the University of Kentucky analyze Kentucky Medicaid data. We share common analytic code, so we have standardized analyses across states, and then we share aggregate results with each other. And that obviates the need for data use agreements.
It’s been a great effort and learning experience.
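The distributed pattern Dr. Donohue describes, in which every site runs the same analytic code against its own data and only aggregate results ever leave a site, can be illustrated with a minimal sketch. All function names, field names, and numbers below are hypothetical, not MODRN's actual code:

```python
# Toy illustration of a distributed research network analysis:
# each site computes aggregates locally; only those aggregates are pooled.
# Field names and data are hypothetical.

def site_summary(records):
    """Run locally at each site: return aggregate counts, never row-level data."""
    enrolled = len(records)
    treated = sum(1 for r in records if r["received_treatment"])
    return {"enrolled": enrolled, "treated": treated}

def pool(summaries):
    """Run centrally on the aggregates shared by all sites."""
    enrolled = sum(s["enrolled"] for s in summaries)
    treated = sum(s["treated"] for s in summaries)
    return {"enrolled": enrolled, "treated": treated,
            "treatment_rate": treated / enrolled}

# Hypothetical site-level data; individual records stay at each site.
state_a = [{"received_treatment": True}, {"received_treatment": False}]
state_b = [{"received_treatment": True}, {"received_treatment": True}]

# Only these small aggregate dictionaries cross institutional boundaries.
shared = [site_summary(state_a), site_summary(state_b)]
pooled = pool(shared)
```

Because each site ships only counts, no data use agreement for individual-level data is needed; the standardization lives in the shared code, not in a shared database.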

Jess Williams: Yes, it’s very exciting. My next question is really how did the initial group of partners connect and who is the convener of the group?

Julie Donohue: Yeah, so to really understand the history of this relatively nascent effort, you have to look to AcademyHealth and two state-based policy networks that they have supported for a long time. One is the Medicaid Medical Directors Network. This is a group of clinical leaders, chief medical officers, and Medicaid medical directors in 43 states. The Medicaid Medical Directors Network used to be part of the National Association of Medicaid Directors, but they split off maybe 15 or 20 years ago to be their own group. And they have had some experience with multi-state Medicaid research, but they felt like they were always sort of starting over again with each new project and didn’t have the infrastructure that would support multiple projects. AcademyHealth provides staff support and serves as a convener of the Medicaid Medical Directors Network. They also support SUPLN, which stands for the State University Partnership Learning Network, and that’s a broader group of about 30 state-university partnerships. And so MODRN was formed out of, you know, a little bit more than a third of the SUPLN partnerships, and really has the involvement of staff at AcademyHealth. So I think these professional networks, both the Medicaid Medical Directors Network and the state university partners, had for a while wanted to create a true research network, and MODRN was the outgrowth of that.

Harmonizing Data and Methods

Jess Williams: That’s great. I’ve been lucky enough to participate in one collaborative effort using Medicaid data from several states with this distributed model, where each state does its own analysis and then we try to combine the results. Interestingly, it was almost an entirely different set of states, with a few exceptions, which I thought was a little interesting. One issue that we ran into a lot was that some states just didn’t have the same level of detail in their claims. This was true particularly in the projects we were looking at, where some states just didn’t have the external cause of injury codes at all. And we eventually realized that this was partially due to when the claims were pulled for the analysis: in between the coding, the billing, and everything else, there was variation in when the state actually got the data file. Also, some of the states seemed to have really different relationships with the entities that actually do the billing, processing, and payment. So, what were the main issues for you all in MODRN when you were trying to harmonize the claims across these different states?

Julie Donohue: Yeah, well, we’ll have to have another conversation about your multi-state effort, because I’m always trying to learn from other people’s experiences. So, we spent quite a lot of time trying to learn about the structure, contents, missingness, and other features of each state’s Medicaid data. And luckily, we were working with partners at each of the respective universities that had a lot of experience. But even they had questions that they needed to take to their state Medicaid partners about, you know, the reliability of certain fields and data elements. And I think we identified the data features that states had in common, really tried to capitalize on those, and shaped our research questions in part around what we thought we could reliably measure across states.

Opioid Use Disorder

Jess Williams: I noticed the first focus of the group was opioid use disorder and treatment for it. And of course, states obviously have very different tracking regarding the use of opioids, some of which you all discussed in the article. How did MODRN handle these differences? It seems like it was quite a detail-oriented process to figure out what the common elements were.

Julie Donohue: What we wanted was to have standardized measures wherever possible, but to be flexible enough to allow states to adapt measurement to fit their specific contexts. You know, ultimately, we’re trying to get to an apples-to-apples comparison. But as our chief medical officer in Pennsylvania Medicaid says, we know that we’re going to be comparing Macintosh to Fuji. There’s so much variation in, you know, the capacity of the substance use disorder treatment systems, the benefits, the services that Medicaid pays for, the provider networks, their use of managed care, their specific managed care arrangements, whether they have carve-outs or not; we could spend the whole program talking about the sources of variation. We’ve been surprised, I think, by the great extent to which we’ve been able to create standardized measures of opioid use disorder treatment. We’ve had to make a few modifications in how we measure medications for opioid use disorder, but for the most part, those are standardized. Where we’ve probably had to make the most allowances are two areas where we’ve had to tolerate variation in the kind of coding scheme that we apply state by state: telehealth, which has been a big deal the last couple of years, and residential treatment, where there’s very little overlap in the kind of billing and payment policies at the state level, and then in how those services are measured in claims and encounter data.

So again, it’s this somewhat painstaking process of developing a standardized measure and submitting it to all of our university partners, who then ask their state Medicaid agencies: okay, this is how we’re thinking about measuring this service. Does this make sense? Or if our university partners have no experience capturing a particular service, they’ll ask their state Medicaid agency. Then we try out lots of different approaches to measurement and obviously draw on the literature and other claims-based studies and algorithms that have been used. So that’s a very long answer. But most things we’ve been able to measure in a very standardized way; there have just been a few areas where we’ve had to allow for, you know, the Delaware- and Pennsylvania- and Kentucky- and West Virginia-specific measurements. And then we’re just, you know, obviously as transparent as can be in all of our reports and manuscripts about how we’re measuring things, so people can replicate what we’ve done, improve on what we’ve done, and hopefully extend it.
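One way to picture the approach described above, a shared core measure with a few state-specific allowances, is a common code list plus per-state extensions. The code values and state abbreviations below are invented for illustration, not real billing codes or MODRN's actual specifications:

```python
# Hedged sketch: a standardized claims-based measure defined as a shared
# core code set plus tolerated state-by-state additions.
# All codes here are made up for illustration.

SHARED_CODES = {"H0020", "J0571"}      # hypothetical codes used by every state
STATE_EXTENSIONS = {
    "PA": {"X1", "X2"},                # hypothetical Pennsylvania-specific codes
    "KY": {"Y9"},                      # hypothetical Kentucky-specific code
}

def measure_codes(state):
    """Code set applied in a given state: shared core plus local additions."""
    return SHARED_CODES | STATE_EXTENSIONS.get(state, set())

def flag_treatment(claims, state):
    """Return the claims that count toward the treatment measure in this state."""
    codes = measure_codes(state)
    return [c for c in claims if c["code"] in codes]

# A claim coded with a state-specific code counts in that state only.
pa_claims = flag_treatment([{"code": "X1"}, {"code": "Z0"}], "PA")
```

Keeping the core set identical everywhere preserves comparability, while the explicit, documented extensions make the tolerated variation transparent and replicable.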

Jess Williams: That’s great. What were the most surprising results from your analyses of opioid use disorder? At least the ones you all have completed so far?

Julie Donohue: We knew that we would find a lot of variation in rates of use and rates of treatment. I think we were surprised, in particular, at how much use of residential treatment varied across states, and I think it speaks to where states are in paying for residential treatment using Medicaid funds: you know, how recent their 1115 SUD waivers are, or whether they’ve been paying for that service for a long time. And we think it also reflects the generosity of the substance use disorder benefit and the capacity of the system in the states. So I think we were particularly surprised at how much state variation there was in the use of that service.

Policy Differences

Jess Williams: That makes sense. In addition to the sort of difference in how states handle things like opioid treatment or residential treatment, obviously, there are lots of differences based on who expanded Medicaid, you know, which waivers they have and what they’ve used the waivers for. So does MODRN have any plans to sort of continue tracking those differences in the state policies over time?

Julie Donohue: Yeah, I’m so glad you asked about this, because I think this is one of the biggest challenges for policy research, and for Medicaid policy research in particular. You know, there are wonderful sources of information on some sources of state policy variation. The Kaiser Family Foundation collects incredibly rich information. But we found that we needed to go beyond some of the reports that were publicly available from the Kaiser Family Foundation or MACPAC or others to collect information on some of the operational decisions that states were making. And, you know, when we showed our state colleagues, here’s how your nine states compare to one another in terms of trends in opioid use disorder treatment, the first thing our state Medicaid agencies wanted to know was: why? What is the policy variation that’s leading to that?

I think there are a couple of challenges. One is that collecting this information and keeping it current is incredibly difficult, because there’s no one person in the Medicaid agency that has answers to all the questions you might have on, you know, what providers and services are covered, what the limits are, how managed care pays for it differently, what utilization management tools are applied, what the limits and exceptions are. So you have to collect the information from multiple people. And then they don’t always have the institutional memory to say what happened in that state five years ago. They may know what’s happening currently, but they don’t have a sort of policy analytic data set in their head that allows you to map that onto your claims data for a difference-in-differences analysis. So I think there’s really a need for investment in collecting information on the policies themselves. And then, you know, understanding how people might code different policies; they might reach different judgments about how to categorize a particular policy. And I think we also need to really move away from policy expressed as a one or a zero.

Jess Williams: I think that makes sense.

Julie Donohue: And then the final challenge is, you know, we have 13 states and about 100 relevant policies.

Jess Williams: The math on that doesn’t look good.

Julie Donohue: Yeah. So what can we do, you know, analytically, to try to think about clusters of policies where we can test hypotheses? I think there’s a whole research agenda just around really mapping the policy landscape fully. You could build a whole career just by focusing on Medicaid, not even thinking about all the other policies that are enacted at the state level. So yeah, I have more challenges and questions than answers on that front. But I think it’s incredibly important.

Jess Williams: That’s a really good point about how someone could actually build an entire career just tracking these differences over time. I know, in our study, we had one situation where almost everyone who worked with the state Medicaid agency turned over in one year. So then you lost that institutional memory immediately, and it took a while to build that back up.

Opportunities for Collaboration

Jess Williams: If someone had a policy or research question that could be answered with the data put together by MODRN, is there any process to approve new research questions? Or do researchers really need to be members of the MODRN network first?

Julie Donohue: Yeah, great question. And I think we’re still figuring this out, you know, figuring out what we want to be when we grow up. We’ve been at this for a little bit under five years, and we really spent that time focusing on: how do we build the plane while flying it, to use an overused metaphor? And how do we make MODRN work for the current partners? But we are increasingly getting these questions from other groups. One thing we tried to do pretty early on was not reinvent the wheel, but learn from other research networks. We had heard that there’s a cohort study called MESA, the Multi-Ethnic Study of Atherosclerosis; it’s a very long-running cohort study. And they had a really awesome policy for reviewing and approving paper and grant proposals from MESA investigators, and maybe from outside investigators too, I don’t know. And we took their document, which was up on the web (it’s like a 20-page policy and procedures document), and adapted it for our purposes. So we have a structure in place and a committee to review proposals for partnering with MODRN or for reusing some of our measures, even for our own internal work. Say we wanted to take a measure that was developed with the collective input of MODRN and use it just for a Pennsylvania project: we have a mechanism for getting approval for that. If someone wants to write a grant using MODRN, they can go through the same process and submit a proposal. So we can use that process and that set of policies to evaluate outside groups’ requests to partner, and I think we’re very open on whether partnership requires MODRN membership, whatever that means. We have pretty porous borders.
Again, I think, you know, we’re kind of making this up as we go, and as new partnerships are presented to us, we will adapt and change in response. We want to be open to new opportunities. We created this network with a particular focus on substance use disorders to begin with, but we wanted the organizational and technical infrastructure to be useful for lots of different questions, whether it’s maternal morbidity and mortality, or child health, or chronic condition treatment in Medicaid, or, you know, the unwinding of the maintenance-of-effort requirement at the end of the public health emergency. There is a host of pressing questions in Medicaid policy, and we’re definitely open to partnering. We are all about collaboration and are excited to hear from people interested in engaging with us. And I think we have a number of pathways to collaborate with other groups.

Lessons Learned

Jess Williams: My last question of the day is for other people interested in starting cross-state collaborations, either with Medicaid or with some other data source. What initial advice would you give them based on your experience with MODRN?

Julie Donohue: Yeah, so, you know, try to find a group of really fun people who are fully committed to making the network work, and people who have a sort of shared experience. I think one of the things that made MODRN work was that we all have this experience doing client-oriented work for Medicaid agencies; that common denominator has been very, very helpful, because we all know both the advantages and the constraints that are operating in those kinds of partnerships. Another piece of advice, I would say, is to try to have some early products.

And, to go along with that, don’t try to figure everything out up front. Harmonizing data across multiple sites is really challenging, and don’t try to bite off more than you can chew. Don’t try to solve every problem with data harmonization; focus on the key data elements that you need to move your project forward. When we set out to create our common data model, we intentionally called it common data model 1.0, because we knew we would not anticipate every problem or issue with the data, and we wouldn’t be able to solve everything all at once, and that there would be a common data model 2.0. And indeed, there was, and now we’re going to work on common data model 3.0.
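The common data model idea above, every site mapping its raw claims into one shared minimal schema so the same analytic code runs everywhere, can be sketched as follows. The field names, column names, and validation logic are hypothetical, not MODRN's actual model:

```python
# Toy sketch of a versioned common data model: each site maps its local
# column names onto a shared schema before shared code touches the data.
# Schema fields and mappings are invented for illustration.

CDM_VERSION = "1.0"
CDM_FIELDS = {"person_id", "service_date", "code", "state"}

def conform(raw_row, field_map):
    """Map one site's raw column names onto the shared schema, then validate."""
    row = {cdm_name: raw_row[local_name] for cdm_name, local_name in field_map.items()}
    missing = CDM_FIELDS - row.keys()
    if missing:
        raise ValueError(f"row missing CDM fields: {sorted(missing)}")
    return row

# Hypothetical site whose claims extract uses its own local column names.
pa_field_map = {"person_id": "recip_id", "service_date": "dos",
                "code": "proc_cd", "state": "st"}
raw = {"recip_id": "A1", "dos": "2021-06-01", "proc_cd": "H0020", "st": "PA"}
conformed = conform(raw, pa_field_map)
```

Starting with a deliberately small field set, as "1.0" signals, keeps the harmonization burden manageable; fields can be added in 2.0 and 3.0 as projects need them.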

I think the final piece of advice is: don’t reinvent the wheel. There are some really robust, very well-resourced distributed research networks, and we tried to learn as much as we could from their efforts and, you know, beg, borrow, and steal what we could from things that they had already figured out. I talked about MESA; their policy for vetting papers and proposals was really our jumping-off point. The Sentinel Initiative, FDA’s post-marketing surveillance system, is a distributed research network, and some of the folks involved in developing Sentinel were incredibly generous with their time in answering some of our early questions. We also used their common data model as a jumping-off point for building ours. So, you know, look around and try to learn from the other research networks out there. I’ve found that people who are involved in this kind of work are pretty generous and are happy to save you from learning the hard way the things they have already learned. So don’t hesitate to reach out and ask questions.

Jess Williams: Sounds like excellent advice. Well, thank you so much for joining us today.

Julie Donohue: It was my pleasure. Thank you so much, Jessica.

Transcribed by https://otter.ai

Jess Williams

Associate Professor at The Pennsylvania State University
Jessica A. Williams, PhD, MA is an Associate Professor of Health Policy and Administration at The Pennsylvania State University. Dr. Williams has been a member of the editorial board since 2013. Her research examines how workplace psychosocial factors affect the health and well-being of employees. Specifically, she investigates the role of pain in work disability and well-being. In addition, she researches the utilization of preventive medical services. She holds a Doctorate in Health Policy and Management from the UCLA Fielding School of Public Health, a Master's in Economics from the University of Michigan, Ann Arbor, and a BA in economics from Stanford University.