One of the many criticisms of early cost-effectiveness studies of cellular and/or tissue-based products (CTPs), especially those that involved modeling, is that the time horizons were too short: effectively the length of the trial for models built on short clinical studies. By short I mean something like 12 or 16 weeks, which is common for RCTs. (A detailed review published 5 years ago discusses this very problem for cost-effectiveness modeling.)
For modeling single wounds from an RCT, the best choice of time horizon is probably 1 year, because by then many wounds have healed and complications from unhealed wounds have manifested. However, extrapolating how wounds will heal and complications will occur from only 12 weeks of data is challenging. In the absence of RCTs with follow-up of a year or more (which are rare), the best method is to find a large observational study reporting outcomes after 1 year of standard of care (SOC) treatment and attempt to match your study population to it. This is essentially what was done in the Markov microsimulation of Amnioband (dHACA) versus SOC. One can think of this as a form of external model calibration in which the key parameters (healing, amputations, and so forth) derive their 1-year values from the observational study. It is then a simple matter to calculate the per-cycle probabilities of these events beyond 12 weeks. The downside of this approach is that the population match won't be perfect, so there is some uncertainty about the final results. Comprehensive sensitivity analysis can provide some idea of what happens when assumptions are in error.
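To illustrate the calibration step, here is a minimal sketch of how a 1-year cumulative probability from an observational study can be converted into a constant per-cycle probability for a weekly-cycle Markov model. The 65% healing figure is a hypothetical target value, not from any of the studies mentioned; the conversion assumes a constant underlying event rate.

```python
def per_cycle_probability(p_cumulative: float, n_cycles: int) -> float:
    """Convert a cumulative event probability over n_cycles into a
    constant per-cycle probability (assumes a constant underlying rate)."""
    return 1.0 - (1.0 - p_cumulative) ** (1.0 / n_cycles)

# Hypothetical calibration target: an observational study reports 65% of
# SOC-treated wounds healed at 1 year; with weekly cycles that is 52 cycles.
p_week = per_cycle_probability(0.65, 52)

# Sanity check: applying p_week for 52 cycles recovers the 1-year target.
p_year = 1.0 - (1.0 - p_week) ** 52
```

The same conversion applies to amputation or any other modeled event, each calibrated to its own 1-year value from the observational data.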
Another method is to use the 12-week probabilities for key parameters to extrapolate directly to 1 year, for example by fitting a mathematical expression to the healing data. This is essentially what we did in exploring the cost utility of BlastX, a biofilm disruption agent, because no matching observational study involving chronic wounds could be found. The mathematics were particularly convoluted because of the small sample size of the original RCT, and consequently there was a great deal of uncertainty in the results. For large sample sizes (≥200) this method would likely work much better.
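As a simple illustration of this second approach, one could fit an exponential healing model to 12 weeks of trial data and read off the extrapolated 1-year value. The weekly healing fractions below are made-up numbers, and the exponential form is only one of several candidate expressions; the fit assumes the healing rate observed in the trial stays constant out to a year, which is exactly the assumption that drives the uncertainty discussed above.

```python
import math

# Hypothetical cumulative proportions healed, weeks 1-12 of an RCT arm.
weeks = list(range(1, 13))
frac_healed = [0.03, 0.07, 0.12, 0.16, 0.21, 0.26,
               0.30, 0.34, 0.38, 0.41, 0.45, 0.48]

# Fit F(t) = 1 - exp(-lam * t) by linearizing ln(1 - F) = -lam * t,
# then least-squares through the origin for the rate lam.
ys = [math.log(1.0 - f) for f in frac_healed]
lam = -sum(t * y for t, y in zip(weeks, ys)) / sum(t * t for t in weeks)

# Extrapolate the fitted curve to the 1-year (52-week) horizon.
f_52 = 1.0 - math.exp(-lam * 52)
```

With a small trial, the confidence interval around `lam` is wide, and the extrapolated `f_52` inherits that uncertainty, which is why sensitivity analysis on the fitted parameters is essential.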
Microsimulation, which is rarely done in wound care, offers several advantages over standard Markov models. First, the health states a wound experiences can be tracked over time; in other words, the model "remembers" from cycle to cycle what has happened to the wound. This is useful because one can set limits on the number of times a wound experiences a particular event, or use that count to trigger cost or health consequences. As an illustration, one could control the profile of recurrent wound opening or infection, although this requires sufficient data from the literature to model it. Second, one can program in population characteristics to follow particular patient subsets or define what happens to them. One could thus, in theory, use a patient's age, or the time they have had a comorbidity such as diabetes, to modify what happens over longer periods in terms of mortality, new wound development, or wound complications, or to model the consequences of progressive amputations.
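A toy sketch of the "memory" idea: each simulated wound carries its own infection count, and crossing a threshold changes what happens next. All probabilities and the three-infection threshold are illustrative placeholders, not literature values, and a real model would attach costs and utilities to each event.

```python
import random

def simulate_wound(weeks=52, p_heal=0.02, p_infect=0.01,
                   max_infections=3, seed=None):
    """Track one wound week by week. Unlike a cohort Markov model, the
    simulation remembers its history: after max_infections infections the
    wound escalates (a flag that could trigger amputation costs or
    disutility in a full model). Probabilities here are illustrative."""
    rng = random.Random(seed)
    infections = 0
    for week in range(1, weeks + 1):
        if rng.random() < p_heal:
            return {"healed": True, "week": week, "infections": infections}
        if rng.random() < p_infect:
            infections += 1
            if infections >= max_infections:
                return {"healed": False, "week": week,
                        "infections": infections, "escalated": True}
    return {"healed": False, "week": weeks, "infections": infections}

# Run a cohort of simulated wounds and summarize the healed fraction.
results = [simulate_wound(seed=i) for i in range(1000)]
healed = sum(r["healed"] for r in results) / len(results)
```

Because each wound is simulated individually, the same loop could condition `p_heal` or `p_infect` on age, diabetes duration, or prior amputations, which is the second advantage described above.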
Personally, having discovered microsimulation several years ago, I am loath to go back to simple Markov modeling. It would be like going back to a Walkman after acquiring an iPod.