A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against some of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in The Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is in fact named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: in other words, to discover whether use of e-cigs is correlated with success in quitting, which might well suggest that vaping can help you stop smoking. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t gather any new data directly from actual smokers or vapers, but instead attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not merely ineffective as an aid to quitting smoking, but actively counterproductive.
The result has, predictably, been uproar from supporters of e-cigarettes in the scientific and public health community, particularly in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, calling the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the U.S., who wrote “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system at this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and sometimes incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies that I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer that question, it’s necessary to go beneath the sensational 28% figure, and examine what was studied and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be far less susceptible to any distortions that may have crept into an individual investigation?
(This could happen, for example, by inadvertently selecting participants with a greater or lesser propensity to stop smoking because of some factor not considered by the researchers – an instance of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging out the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
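To make the pooling idea concrete: a common technique is fixed-effect, inverse-variance weighting, where each study’s result counts in proportion to its precision. The sketch below uses entirely made-up odds ratios – not figures from the Kalkhoran/Glantz paper or any study it covers – purely to show the mechanics:

```python
import math

# Hypothetical studies, each reporting an odds ratio for quitting and a
# standard error on its log scale. These numbers are invented for
# illustration only.
studies = [
    (math.log(0.8), 0.30),   # study A: OR 0.8
    (math.log(0.6), 0.25),   # study B: OR 0.6
    (math.log(1.1), 0.40),   # study C: OR 1.1
]

# Inverse-variance weighting: more precise (smaller-SE) studies
# contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled_log_or = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled OR = {math.exp(pooled_log_or):.2f}")   # pooled OR = 0.74
print(f"95% CI = {math.exp(pooled_log_or - 1.96 * pooled_se):.2f}"
      f" to {math.exp(pooled_log_or + 1.96 * pooled_se):.2f}")  # 0.53 to 1.04
```

Note how the pooled confidence interval is narrower than any single study could give – which is exactly the appeal of meta-analysis, and exactly why flaws in the inputs are so easily laundered into an authoritative-looking summary number.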
If its results are to be meaningful, the meta-analysis must somehow take account of variations in the design of the individual studies (they might define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it is introducing its own distortions.
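Statisticians do have standard tools for flagging this kind of mismatch: Cochran’s Q and the derived I² statistic estimate how much of the variation between studies reflects genuine design differences rather than chance. The following sketch, again using invented numbers rather than anything from the actual paper, shows the calculation:

```python
import math

# Hypothetical (log odds ratio, standard error) pairs, deliberately
# spread apart to show what heterogeneity looks like. Invented data.
studies = [(math.log(0.5), 0.2), (math.log(1.5), 0.2), (math.log(0.9), 0.3)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled value.
q = sum(w * (lor - pooled)**2 for (lor, _), w in zip(studies, weights))
df = len(studies) - 1

# I^2: the share of variation attributed to real between-study differences
# rather than sampling error; values above ~50% are conventionally read
# as substantial heterogeneity.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")   # Q = 15.10, I^2 = 87%
```

A high I² is a warning sign, not a verdict – but critics argue that when studies define exposure and outcomes as differently as those pooled here did, no statistic can fully rescue the combination.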
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes a dim view of e-cigarettes, regarding a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s call for comments on its proposed e-cigarette regulation, the Truth Initiative noted it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking compared with those who do not. This meta-analysis simply lumps together the errors of inference from the correlations.”
It also added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in The Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often without control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies just do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the well-known Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they attempted to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus, the analysis by its nature excluded those who had taken up vaping and quickly abandoned smoking; if such people exist in large numbers, counting them would have made e-cigarettes look a far more successful route to quitting smoking.
Another question was raised by Yale’s Bernstein, who observed that not all vapers who smoke are trying to give up combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some people who did manage to quit – and then including people who have no intention of quitting anyway – would certainly seem likely to affect the outcome of research purporting to measure successful quit attempts, though Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether or not the study population consisted only of smokers interested in smoking cessation, or all smokers”.
But there is also a further, murkier area which affects much science – not just meta-analyses, and not just these specific researchers’ work – and which, importantly, is often overlooked in media reporting, as well as by institutions’ publicity departments.