Two Orthopedists publicly debate the value of National Implant Registries

IMPLANT REGISTRIES FLAWED? MURRAY V. LEWALLEN (Orthopedics This Week)

“The data proves that registries cannot compare implant designs!” says David Murray. “Going to single-surgeon or institutional efforts allows large numbers of patients to be studied very quickly,” says David Lewallen. “What registry studies really do is allow us to ask interesting questions and perhaps direct the next studies.”

This week’s Orthopaedic Crossfire® debate is “The Stuff of Implant Registries: Of Limited Value.” For the proposition is David Murray, M.D., F.R.C.S. from Nuffield Orthopaedic Centre in Oxford, UK; against the proposition is David G. Lewallen, M.D. from Mayo Clinic in Rochester, Minnesota. Moderating is Robert T. Trousdale, M.D. from Mayo Clinic in Rochester.

Mr. Murray: “Registries exist for three main reasons: to compare different types of joint replacement, to compare implant designs, and to provide an early warning for poor implant designs. The primary endpoint for these comparisons is revision.”

“How reliable is this information? I’ll give some examples from unicompartmental knee replacement. Data from a regional registry in the UK—the Trent Registry—showed 10 different total knees followed out to 15 years. Not surprisingly, most had a survival rate of around 90% at 15 years. But there is one implant with 100% survival at 15 years. Everyone who believes in registries says that is the implant you must use. It is the Sheehan Knee…a fixed hinge. I asked the people organizing the registry why this device had 100% survival and they said, ‘Oh well, that’s because you can’t revise it.’ So, ease or difficulty of revision is perhaps the single biggest determinant of the revision rate. In other words, what matters is the threshold for revision.”

“One of the few things that all the registries agree on is that unis have a higher revision rate than totals. On the basis of this they conclude that unis have poorer results, and therefore they tell surgeons not to do unis. But is the higher revision rate because unis have poorer results, or might it be to do with thresholds? Imagine two patients come to see you in clinic. Both say they have pain worse after the operation than before, and neither has any mechanical problem. I say that if the patient had had a uni, many of you would revise it because it’s easy; whereas if they’d had a total knee, most surgeons wouldn’t, because the results of revision would be poor.”

“The New Zealand Registry also collects outcome scores. Not surprisingly, the unis have slightly better scores than totals. What’s interesting is that in New Zealand they subdivide the outcome scores into whether they’re poor, fair, good or excellent. Unis have more ‘excellent’ results, but what is surprising is that unis have fewer ‘poor’ results than totals. So the difference in revision rate is not because of poor results. Might it be to do with this threshold?”

“The New Zealand Registry also compares the postop Oxford Score with the subsequent revision rate. Patients with an outcome score of less than 20—worse postop than preop—have a high revision rate. What this registry does not draw attention to is that the axes are different. If you plot these graphs on the same axes you see they are hugely different. If you have a total knee with a very bad outcome, 10% are revised; if you have a uni with a similarly bad outcome, 60% are revised. In other words, the difference in revision rate is a manifestation of a different threshold for revision.”

“So one must conclude that registry data cannot reliably compare implant types because of different thresholds for revision. I’d even say that these comparisons are misleading because all conservative procedures—for example, unis—will have a higher revision rate even though they have better results.”

“What about identification of poor implants? Data from the Swedish Registry in 1995 showed that the Oxford knee had a very high revision rate…higher than the Marmor. It was so high that the Swedish Registry thought they’d identified a poor implant, and they contacted every surgeon in Sweden telling them not to use the Oxford. Data from the Swedish Registry in 2005 showed that the Oxford was then the best. I conclude that registries cannot reliably identify poor implants.”

“Comparing data from the Swedish, Australian, and New Zealand registries, we see that the Repicci is the worst implant in Sweden, the best implant in Australia, and somewhere in the middle in New Zealand. One can only conclude that registries cannot compare implant designs. You may say, ‘Well, unis are funny things. We shouldn’t do them.’ I also looked at total knee replacements from three registries. The Maxim was worst in the UK, best in New Zealand, and worst in Australia. Registries cannot compare implant designs!”

“Revision rate depends not only on the implant, but also on the indications and the technique. But registries collect little such data, so they can’t adjust for this, and we can draw any conclusion we want. Registry data is important, but it is frequently over-interpreted and thus results in misleading conclusions. And if David Lewallen disagrees with this, how can he justify using a cementless total hip replacement when every single registry shows that cemented hip replacements do better?”

Dr. Lewallen: “I agree that there is a tendency to over-interpret data from a wide range of studies, not just registry studies. There are different registry types: single-surgeon, institutional, health system, single-implant, state/regional, national, multinational. What registry studies really do is allow us to ask interesting questions, and perhaps direct the next study to try to find the answers…which don’t always come directly from the data.”

“One study from our institution (Maradit-Kremers et al., 2013) involved more than 10,000 patients with more than 15,000 primary knees. It allowed us to show that all-polyethylene implants were extremely durable compared with modular implants. What single-institution studies can’t do is give us early detection of outlier implant performance on a national basis. They don’t provide good measures of community-based experience. How it works at our institution may be different from how it works elsewhere. It’s difficult or impossible to detect surgeon or institutional volume effects without registry-type information, because such studies typically come from high-volume institutions. And comprehensive reporting of a full range of implant models and manufacturers is not possible.”

“An example of hip-related information comes from another Mayo Clinic study (Howard et al., 2011) giving us 20 years of experience, and it shows a difference in performance with different designs. The question is, ‘Could we have known earlier about some of those that did not perform well?’ Perhaps, if we had been looking. If you look back after 15 years, you will learn these lessons too late to avoid the implants that weren’t doing well. You need larger numbers to be able to detect these changes quickly.”

“National registries have given us surveillance of implant performance. Also, the removal of selected devices that have proven to be inferior…not because of a single registry data point, but because the registry focused attention on that design and allowed questions to be asked about what was going on. So more important than the answers provided are the questions. We don’t have the resources to study everything in detail and put the necessary effort into the surveillance of new designs.”

“I agree with the comments about ease of revisability and the fact that unis get a bad rap because of that. But this just shows the kind of thing that can be done with studies looking at low- and high-volume settings. Higher volume improves the results of some of these implants, showing us some that are excellent devices but are more demanding…others that are more forgiving, where lower-volume surgeons can get good results with less experience…and then poorly designed implants that fail in everyone’s hands.”

“We don’t have the personnel to track every single arthroplasty patient through the years. We may be able to use patient-reported outcomes to decide who needs follow-up and who we can leave alone. Registries can do a lot of things we can’t get done with single institutional efforts.”

Moderator Trousdale: “David Murray, I take issue with your statement that registry data doesn’t allow us to compare implants. What registries don’t tell us is ‘why.’ So the benefit of the Marmor experience is that where it had a high failure rate, it allows those surgeons to explore why it had that high rate in their hands versus someone else’s.”

Mr. Murray: “I agree. So the registry data is of limited value. Many trainees coming up to their exams—who used to learn the literature and now learn the registry—quote it. In Australia the trainees learn that the Repicci is the best-performing uni in Australia, and they will go with that.”

Moderator Trousdale: “Your issue is how people interpret the data.”

Mr. Murray: “People think it’s the truth because the numbers are so large and the p-values so small.”

Moderator Trousdale: “David Lewallen, is it the best way to monitor new innovation?”

Dr. Lewallen: “It’s certainly not the only way. There are a variety of things that should be done with new technology, such as RSA studies. But at some point the decision has to be made to release the implant for general usage.”

Moderator Trousdale: “Tell us about the finances of the American Registry and the ownership of that data.”

Dr. Lewallen: “Individual surgeons and hospitals, etc., have access to their own data, and will have a very sophisticated online tool available for reviewing it. It’s a not-for-profit organization with multiple stakeholders, supported by all of the organizations that have a stake in this. There is also representation from the hospitals, private payers, a patient advisory board…so this is owned by the community.”

Mr. Murray: “I don’t know of one implant that’s been identified as a poor implant by a registry before it’s been identified by surgeons. The problem is that manufacturers tend not to listen to surgeons. Also, in the UK we were led to believe that our data wouldn’t become public. Now the government is forcing us to make all the revision data from all surgeons freely available.”

Moderator Trousdale: “So what happens in a suburb of London with a surgeon who is doing unicompartmental knee replacement in patients with severe patellofemoral arthritis and has a 20% failure rate at 10 years? How does the UK handle that surgeon?”

Mr. Murray: “In the UK they identify outliers—three standard deviations above what you would expect. They report that to the surgeon, and the surgeon should do something about it. It’s now also being reported to the managers, which is unjustifiably changing surgical practice. In the future it’s going to be reported more generally. I support the registries giving surgeons their data and telling them if they are outliers. The Swedish experience shows that this is what works. That’s why the revision rate is so low in Sweden…because data is fed back to the surgeons. If all the data goes public then surgical practice will change. Surgeons won’t want to operate on difficult cases and they won’t use conservative procedures, so you must be careful with this data.”

Moderator Trousdale: “Thank you.”
