In November, I wrote a letter to the editor of the *Journal of Clinical Epidemiology*. This letter has been accepted, but not yet published. Last week, the journal published a strongly worded response written by the original authors of the paper I am responding to, *without yet publishing my letter* (!). I made them aware of this on Thursday, and I have now waited five days for my letter to appear on their website. Since it has yet to appear, I feel the need to go on record and explain the situation.

In August, JCE published a paper by Hoppe, Hoppe and Walter, which claims that the odds ratio can be interpreted as “the risk ratio, conditional on treatment and control resulting in different outcomes”. From reading the title, I knew immediately that if they were right about this, then my own papers had to be wrong. I therefore felt a very strong need to understand the claim, and respond if necessary. The original paper by Hoppe et al. is available at https://www.ncbi.nlm.nih.gov/pubmed/27565975 (unfortunately, behind a paywall).

After reading the paper, I knew within 30 minutes that they were confused and wrong about the central claim. First of all, the discussion is framed in terms of “matched pairs”, which have nothing to do with odds ratios, or with the study designs and regression models from which odds ratios are commonly estimated. If interpreted charitably, these “matched pairs” can be read as individuals who are (somehow) matched on counterfactual response types, such that both the counterfactual outcome under treatment and the counterfactual outcome under placebo are observed. Alternatively, they could be interpreted as individuals who are arbitrarily “matched” to a random other individual in the study (although the purpose or relevance of this “matching” is then not at all obvious).

The biggest problem with the paper is that this focus on “matched pairs” hides one of the central assumptions that is necessary for their conclusion. Essentially, the interpretation of the odds ratio only works if the counterfactual outcome under exposure and the counterfactual outcome under control are independent events, i.e., if what happens to a person who is exposed is completely uninformative about what would have happened if, contrary to fact, he had not been exposed. This is not a realistic assumption in most clinical applications of the odds ratio. I wrote a letter to the editor to point this out, which was accepted in December. Since it is not yet published on the website, I am making it available here through an Overleaf read-only link, at https://www.overleaf.com/read/nfqrnrbpdtwh
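To make the role of this independence assumption concrete, here is a small numerical sketch of my own (the risks are hypothetical and not taken from either paper). Write p1 and p0 for the risks under treatment and control, and P(Y1 = a, Y0 = b) for the joint distribution of the two counterfactual outcomes. The “risk ratio conditional on discordant outcomes” is P(Y1 = 1, Y0 = 0) / P(Y1 = 0, Y0 = 1); it equals the marginal odds ratio when the counterfactuals are independent, but not in general:

```python
# Hypothetical risks under treatment (p1) and under control (p0).
p1, p0 = 0.6, 0.3
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))  # mathematically 3.5

# Case 1: counterfactual outcomes independent within a person.
p_10 = p1 * (1 - p0)   # P(Y1 = 1, Y0 = 0)
p_01 = (1 - p1) * p0   # P(Y1 = 0, Y0 = 1)
print(p_10 / p_01)     # equals the odds ratio (3.5)

# Case 2: dependent counterfactuals with the SAME marginal risks:
# choose P(Y1 = 1, Y0 = 1) = 0.25 instead of p1 * p0 = 0.18.
p_11 = 0.25
p_10_dep = p1 - p_11   # P(Y1 = 1, Y0 = 0) = 0.35
p_01_dep = p0 - p_11   # P(Y1 = 0, Y0 = 1) = 0.05
print(p_10_dep / p_01_dep)  # 7, no longer the odds ratio
```

The second case still has p1 = 0.6 and p0 = 0.3, so the odds ratio estimated from any study is unchanged; only the unobservable joint distribution differs, yet the “conditional risk ratio” doubles.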

In response, Hoppe et al. wrote a very strongly worded letter to the editor, available at https://www.ncbi.nlm.nih.gov/pubmed/28108351 . In this letter, they claim that “Therefore even the title of Huitfeldt’s letter is erroneous, let alone its content”. It feels slightly Kafkaesque that this letter was published before the letter it is responding to. Since the letter from Hoppe et al. is now part of the public record and shows up for anyone who searches my name on PubMed, I am writing this blog entry to respond.

In their response, Hoppe et al. seem to make the argument that they have redefined the odds ratio such that it is now equal to “the risk ratio, conditional on treatment and control resulting in different outcomes” by *definition*. This new definition of the odds ratio then simplifies to the traditional odds ratio under certain independence conditions.

This new definition of the odds ratio is curious, to phrase it carefully. The object which they call the “odds ratio” does not correspond to the parameter which is estimated from logistic regression models or case-control studies, and it *is not a ratio of odds*. I wrote a second letter to the editor, submitted on Thursday but not yet accepted, which I am making available on Overleaf at https://www.overleaf.com/read/cjrmzfzdhhbf

I find this situation bizarre. The original article simply should not have made it past peer review. I pointed this out in a letter, and two very senior biostatisticians responded with very sharp words. Readers who see the exchange will likely conclude that I am some sort of crank, on the basis of the status differential between me and two senior professors at McMaster University.

I am therefore trying to get other senior epidemiologists or statisticians to get involved and state for the record that, from a methodological perspective, this is not ambiguous and not open to different interpretations. However, most senior scientists are conflict averse and reluctant to get involved. If anyone has any suggestions for who might be interested in helping me, please get in touch.

(Addendum: The counterfactual independence condition I described above is equivalent to assuming that individuals are “matched” randomly to other individuals in the study. It would therefore be correct to state that if you match patients pairwise using a random number generator and randomly assign one of them to treatment and the other to control, then the odds ratio is equal to the risk ratio conditional on the two paired individuals having different outcomes. However, this interpretation has no clinical meaning, as it conditions on being in a “discordant pair”, which is causally downstream from the patient’s exposure status.)
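The claim in this addendum can be checked with a small Monte Carlo sketch (my own illustration, with hypothetical response-type probabilities). Within each individual the two counterfactuals are strongly dependent, but because each random pair consists of two unrelated individuals, the treated member's outcome and the control member's outcome are independent, and the ratio of the two discordant-pair types recovers the ordinary marginal odds ratio:

```python
import random

random.seed(0)
n = 200_000  # number of randomly matched pairs

# Strongly dependent counterfactuals within each individual,
# using three hypothetical response types:
# "doomed" (Y1 = Y0 = 1), "helped" (Y1 = 1, Y0 = 0), "immune" (Y1 = Y0 = 0).
def draw_person():
    u = random.random()
    if u < 0.2:
        return 1, 1   # doomed, probability 0.2
    elif u < 0.6:
        return 1, 0   # helped by treatment, probability 0.4
    else:
        return 0, 0   # immune, probability 0.4

n_10 = n_01 = 0  # counts of the two discordant-pair types
for _ in range(n):
    y1_treated, _ = draw_person()   # one member is assigned treatment
    _, y0_control = draw_person()   # an unrelated member is assigned control
    if y1_treated == 1 and y0_control == 0:
        n_10 += 1
    elif y1_treated == 0 and y0_control == 1:
        n_01 += 1

p1, p0 = 0.6, 0.2                   # implied marginal risks
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))  # = 6.0
print(n_10 / n_01)                  # close to 6 in a large sample
```

The discordant-pair ratio matches the odds ratio here only because the pairing is random; but, as noted above, conditioning on membership in a discordant pair has no clinical interpretation.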

Hello. Yes, it seems the authors created something new (a new version of the odds ratio), as they clarified in their reply to you. It seems to me that your counterexample points out something interesting: there are cases where this new quantity simply does not make sense. I’m considering writing to the Journal as well.

On second thought, I’d say the second definition they propose is that of the conditional odds ratio, under the assumption of constant ORs across strata (the first one referring to the marginal OR instead). In any case, I wrote to the Journal.

I still think their second definition is very weird as a definition of a parameter. It is true that it is a consistent estimator of the conditional individual odds ratio under the assumption of a constant OR. But it is very weird to define an individual-level parameter using attributes that are only defined at the population level. Moreover, if we define the parameter in such a way, the definition is “intrinsically tied” to the conditions that are necessary for identification. In my opinion, this gets the logic completely backwards.

Maybe I understand your point: they define their second OR as a ratio of p(10) and p(01), but these quantities depend on the population distribution (if you like, on which strata are included in the population). However, assuming constancy of the OR within each stratum, the ratio between p(10) and p(01) is constant across different populations (i.e., groups of strata). It seems to me that without constancy of the OR, this quantity would indeed vary depending on which strata we include in the dataset.
