I’ll call it Massive Online Persuasion while the authors call it Massive Open Online Interventions. Tomato or Tom-ah-to, either term means you build an online persuasion engine that any Other Guy anywhere in the world can access anytime. While MOPs can be persuasive for the vampires running them (see Lumosity and its misadventure), rarely do we see iInterventions that do much more than take up bandwidth. Today we’ll look at an interesting MOP (or MOOI to play nice with the researchers) aimed at global (!!!) persuasion for tobacco cessation.
First, a quick background on smoking and tobacco. Smoking in particular is the single most deadly lifestyle behavior you can choose. Smokers lose 8-10 years of life span and suffer greatly as they near that end. Most smokers want to quit, but cannot because of the addictive properties of nicotine and the power of habit. The best research on quitting (randomized controlled trials) shows that one-year quit rates, regardless of the intervention type (patches, gum, therapy, cold turkey), rarely exceed 10-15%, and that’s verified with chemical testing. Smoking is the monster morbidity and mortality TACT of our age. While the Western world has engaged a fairly successful and enduring campaign to reduce smoking, the Rest Beyond The West are smoking like Humphrey Bogart in Casablanca.
Second, persuasion plays on quitting are expensive and eternal. Typically, face-to-face interventions work best so you can truly monitor the Other Smokers and stay on their case with lots of persuasion. You need personnel. Benefits. Offices. Schedules. And, you still only hit 10-20% verified quit rates. Imagine if you could build a persuasion engine, put it online as a MOP or MOOI, and get quitting. You’d save a lot of resources. But, would you hit the quit TACT?
Here’s a study that says, yes, or at least maybe.
The data presented in this study stem from participants recruited between September 19, 2008, and March 18, 2011. We present numbers for the total combined enrollment of the TC4 study (September 19, 2008–March 18, 2011) to demonstrate the scale of TC4 as a MOOI. However, given that we have published data from participants during the initial 12 months (TC4-A) elsewhere (Muñoz, et al., 2012), in this report, we analyze data only from participants enrolled in the last 18 months (TC4-B). Follow-up assessments were continued until December 3, 2011. Thus, some participants did not reach the 12-month follow-up.
Recruitment. Our principal means of recruitment was a Google Adwords campaign that displayed sponsored links to the English or Spanish versions of our Web site (http://www.stopsmoking.ucsf.edu and http://www.dejardefumar.ucsf.edu) in response to Google searches related to smoking cessation.
If you want to understand this persuasion engine, just visit the websites noted. That’s how they do it. To get Other Smokers into the MOP/MOOI, they use Google Adwords. Then they let the engine run for three years and see what happens. Let’s count costs and noses.
The cost of keeping the San Francisco Stop Smoking site up and running for 30 months was less than $100,000 and was covered by a small local grant and a donation from the Brin Wojcicki Foundation. Approximately $50,000 was devoted to hosting and technical servicing of the site, and $50,000 was devoted to paying a part-time research assistant to monitor the site and respond to administrative queries from users (such as retrieving lost passwords). The cost of the Google ads to recruit the smokers was an additional $100,000, which was covered entirely by a Google Adwords grant. Thus, the total cost of providing this site to a total of 292,978 visitors from 168 countries and territories was $200,000, or less than a dollar per visitor.
You could not operate a county-wide or large city cessation clinic for a year on $200,000. The personnel, administrative, and resource costs would be much larger. So, this persuasion engine for quitting draws in Other Smokers with Google Adwords and averages about 100,000 visitors a year.
Now, the counts get ugly. As with any online operation, you get a lot of visitors, but they don’t follow the Cascade all the way, bailing out usually after the first visit. So, yeah, we start with 300,000 visitors, but how many actually get the persuasion Full Monty for quitting? Consider this flow chart. Focus on the right-hand side.
Start with 300,000 visitors, get 18,000 enrolled in the persuasion engine, and finish with 1,400 with complete data. The classic footprint of these massive online operations. You lure in a lot, lose most immediately, finish with less than 1%. Let’s count the change on that 1%.
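If you like your funnel as plain arithmetic, here’s a quick sketch. The visitor total is the paper’s 292,978; the enrolled and completer counts are the rounded figures above.

```python
# Attrition funnel for the TC4 MOOI, using the rounded counts discussed
# above (visitor total is the paper's 292,978 over ~30 months).
visitors = 292_978   # total site visitors
enrolled = 18_000    # entered the persuasion engine (rounded)
completers = 1_400   # complete data out to 12 months (rounded)

print(f"Visitor-to-enrollee: {enrolled / visitors:.1%}")     # ~6.1%
print(f"Enrollee-to-completer: {completers / enrolled:.1%}")  # ~7.8%
print(f"Visitor-to-completer: {completers / visitors:.2%}")   # ~0.48%
```

Less than half of one percent of visitors make it all the way through the Cascade. That’s the footprint.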
Note my red highlights. The top half is the most favorable definition of a quit while the bottom half is more conservative. And, you’ll notice that the numbers aren’t adding up right. The Attrition flow chart shows only 1,400 Other Smokers completed everything out to 12 months, but the Quit Table counts as many as 7,000+ quitters. The researchers obviously included Other Smokers with incomplete data, those who dropped out before 12 months.
The top half count produces astounding quit rates as high as 50%. Yet we know from gold standard RCTs that chemically verified quit rates average around 10-20%. How did these guys get a 50% quit rate?
Self report and playing around with missing values.
If you look at the bottom half of the table that counts missing values as Smokers, note that the quit rate plummets from 50% to around 5%. And, that’s still self report. Nobody provided a saliva sample to test for the presence of tobacco chemicals. If you read those gold standard RCTs on quit interventions you will often find self report quit rates that are 2-3 times higher than the chemically verified quit rates. Hey, if you’ve ever been a smoker you know that if you stopped for a couple of days or even weeks, it felt like a lifetime. You had “quit.” Sorta.
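To see how the missing-value choice alone swings the count, here’s a toy version of the two definitions. The responder and quit numbers are hypothetical, picked only to reproduce the 50% versus 5% contrast; they are not from the paper’s table.

```python
# Two ways to count a quit rate from the same self-report data.
# NOTE: responder and quit counts are hypothetical, chosen only to
# reproduce the 50% vs. 5% contrast discussed in the post.
enrolled = 18_000      # smokers who entered the intervention
responders = 1_800     # hypothetical: answered the follow-up
reported_quits = 900   # hypothetical: responders claiming a quit

# Favorable definition: missing data simply ignored.
optimistic = reported_quits / responders   # 0.50
# Conservative definition: nonresponders counted as still smoking.
conservative = reported_quits / enrolled   # 0.05

print(f"Responders only: {optimistic:.0%}")
print(f"Missing = smoker: {conservative:.0%}")
```

Same self reports, same quitters, and the rate moves by a factor of ten depending on what you do with the dropouts.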
When you look at the data from the MOP/MOOI, you should have more questions than answers. Yeah, they get a lot of visitors and it doesn’t cost much money. But, maybe 1% of the Other Smokers truly absorb all the persuasion in the iIntervention, and then, depending on how you classify the data, you get self report quit rates anywhere from 50% down to 5%.
I’m dubious of either number. Addicts lie like dogs. As an ex-smoker, I know the psychology. I smoked for 13 years, loved the first three, then spent the next ten trying to get the monkey off my back. I quit a lot. A lot. And I’d tell you that with a straight and sincere face. You would have believed me. Then you’d see me smoking the next time we met. Changed my mind. Still can quit anytime. Lot of stress right now. Quit when it chills, baby.
Let’s get hard core on the count. Only 1,400 Other Smokers completed the persuasion play. Give the benefit of the doubt to the researchers and assume that 5% truly quit even if based only on self report. That’s 70 quits. Now, recall that this thing ran for three years and cost $200,000. That’s about $2,850 per quit. Let’s be real generous and include the count from the first month quit report, that’s 3,300 Other Smokers. Take 5% as quitters and you’ve got 165. The cost is about $1,200 per quit.
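Run that cost-per-quit arithmetic out exactly:

```python
# Cost per quit under the two counts discussed above.
total_cost = 200_000                  # site + Adwords over ~3 years

strict_quits = round(1_400 * 0.05)    # 70 quits among completers
generous_quits = round(3_300 * 0.05)  # 165 quits from first-month count

print(f"Strict: ${total_cost / strict_quits:,.0f} per quit")     # ~$2,857
print(f"Generous: ${total_cost / generous_quits:,.0f} per quit") # ~$1,212
```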
Now complicate the Local. We are dealing with a life and death TACT where A Little Means A Lot. These are Other Guys who will live healthier and longer lives. Not many, but a few. We are now down to making value counts.
Put up this MOP/MOOI persuasion engine at a cost of about $70,000 a year and get 70 to 165 quits in 3 years. Is that Little enough to qualify as a Lot?
Ricardo F. Muñoz, Eduardo L. Bunge, Ken Chen, Stephen M. Schueller, Julia I. Bravin, Elizabeth A. Shaughnessy, and Eliseo J. Pérez-Stable. (2016). Massive Open Online Interventions: A Novel Model for Delivering Behavioral-Health Services Worldwide. Clinical Psychological Science, March 2016 4: 194-205, first published on May 13, 2015.
P.S. I’ve got to go Professor Poopypants on the reviewers of this paper. You cannot let smoking interventions ride into print based solely on self reports of quitting. I appreciate the sincerity of all involved on this large public health problem, but wishful counting will not advance the science or the persuasion. The cessation research community learned this bitter lesson many, many years ago. You’ve got to provide harder counts on this change. This is bad reviewing.
I’m not saying this paper should not have been published, but the reviewers should have insisted on other counts that provide some kind of triangulation in the absence of chemical testing. Self reports from addicts are delusional. And the researchers need to stop kidding themselves about the MOOI massiveness of this. They’ve got to get chemical tests. Simply because this is Internet 2.0 does not change the standards of science.