Skunked by statistics
Skunk, the powerful form of cannabis dominating the street drug market, is seven times more likely to cause psychosis than ordinary cannabis, scientists say.
Which might be fair enough - after all, that's what the press release from the Royal College of Psychiatrists says.
But, having read the BJP paper concerned, I'm not entirely sure that stacks up. The research is on the proportion of psychosis patients who use skunk, not the proportion of skunk users who develop psychosis. The actual findings, based on samples of a couple of hundred, are that 78% of cannabis users in the psychosis group use skunk, compared with 37% of cannabis users in the control group. In plain terms, roughly twice the proportion of cannabis-using psychosis patients prefer skunk to regular cannabis, compared with cannabis-using non-patients. Overall, 45% of the psychosis patients and 24% of the non-patients use skunk (56.9% of the patients say they have used some kind of cannabis, as do 62.5% of non-patients).
The seven-times claim seems to be based on the adjusted odds ratio of 6.8. I admit I'm not entirely familiar with the use of odds ratios, as they're not a common statistical tool in econometrics or physics, but it seems (from a purely arithmetical consideration) that they aren't such a direct measure of how much more probable something is in one group compared to another.
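As a sanity check on that guess, here's the crude odds ratio worked out from the proportions quoted above (my own back-of-envelope arithmetic, not the paper's adjusted calculation, which controls for confounders):

```python
# Crude odds ratio from the reported proportions: 78% of cannabis users
# among psychosis patients used skunk, vs 37% among controls.
# (The paper's 6.8 is an *adjusted* odds ratio; this is just the raw arithmetic.)

def odds(p):
    """Convert a probability to odds (p against 1 - p)."""
    return p / (1 - p)

p_cases, p_controls = 0.78, 0.37

crude_or = odds(p_cases) / odds(p_controls)
ratio_of_proportions = p_cases / p_controls

print(f"crude odds ratio:     {crude_or:.2f}")            # around 6, near the adjusted 6.8
print(f"ratio of proportions: {ratio_of_proportions:.2f}") # around 2 - the "twice as many" figure
```

The crude figure lands close to the paper's 6.8, which suggests the "seven times" headline is simply the odds ratio read as if it were a ratio of probabilities.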
And, unless I'm missing something, there certainly seems to be no basis for flipping the causality around. You can't assume that if group A is N times more likely to do X, then doing X makes you N times more likely to fall into group A.
Am I missing something here? Or are a whole bunch of health journalists (and whoever wrote and approved that press release)?
Note I'm not saying there's no connection between extreme cannabis use and mental problems, but I do like to see some sort of accurate reporting on research, especially when it relates to something that's such a political football.
[LATER: After a bit more consideration, and a couple of glasses of Cabernet Shiraz, I think I understand how odds ratios work a bit better. It's a measure of relative probability that doesn't intuitively translate into absolute probabilities: it's a ratio of odds, each expressed as a fraction (eg 3-to-2 = 3/2 = 1.5), not of absolute probabilities (0.6). An odds ratio of N doesn't mean that the proportion of As doing something is N times the proportion of Bs (or that N times as many As as Bs do it). For example, if p(A)=0.8 and p(B)=0.4, then the odds ratio N=(.8/.2)/(.4/.6)=6, even though the absolute probabilities differ only by a factor of two. Tricky things, statistics.
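That worked example, in a couple of lines of Python, to show how far apart the two measures can be when the probabilities aren't small:

```python
# Odds ratios exaggerate relative to probability ratios
# once the probabilities involved stop being small.
def odds(p):
    return p / (1 - p)

pA, pB = 0.8, 0.4
print(round(odds(pA) / odds(pB), 2))  # 6.0 - the odds ratio
print(round(pA / pB, 2))              # 2.0 - the ratio of absolute probabilities
```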
Thinking further, it seems to work if group B is taken as the general population and A as a subset: all other things being equal, the odds ratio measures how much more likely a member of B with behaviour X is to end up in group A than a member of B without that behaviour. If that's right, it's certainly not obvious from the reporting that that's what it means.
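A toy simulation of that reading (entirely made-up numbers, nothing to do with the actual study): in a large population where the outcome is rare, the exposure odds ratio does come out close to the true relative risk.

```python
# Toy check: with a rare outcome, the odds ratio comparing "exposed" with
# "unexposed" approximates the true relative risk. All numbers invented.
import random

random.seed(42)
N = 1_000_000
P_EXPOSED = 0.3          # fraction of the population with behaviour X
RISK_UNEXPOSED = 0.001   # outcome is rare
RISK_EXPOSED = 0.003     # exposure triples the risk, so the true RR is 3

exposed_cases = unexposed_cases = exposed_ok = unexposed_ok = 0
for _ in range(N):
    exposed = random.random() < P_EXPOSED
    risk = RISK_EXPOSED if exposed else RISK_UNEXPOSED
    case = random.random() < risk
    if exposed and case:
        exposed_cases += 1
    elif exposed:
        exposed_ok += 1
    elif case:
        unexposed_cases += 1
    else:
        unexposed_ok += 1

odds_ratio = (exposed_cases / unexposed_cases) / (exposed_ok / unexposed_ok)
print(f"odds ratio ~ {odds_ratio:.2f}")  # close to the true relative risk of 3
```

The approximation only holds because the outcome is rare; push the risks up towards the 40-80% range seen in the skunk figures and the odds ratio drifts well away from the relative risk, as in the worked example above.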
The point about causality remains. If a random writer is twice as likely to drink red wine as a random non-writer, that doesn't necessarily mean that drinking red wine doubles your chances of becoming a writer.]