The results cannot be surprising despite the play with apps ‘n iGizmos. From the Journal of the American Medical Association (Internal Medicine), see the horror.
When asked simple questions about mental health, interpersonal violence, and physical health, Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely. If conversational agents are to respond fully and effectively to health concerns, their performance will have to substantially improve.
This from an extremely weak data collection and analysis. Researchers used four of the most popular conversation agents on iGizmos and asked them a series of physical and mental health questions. The results failed to impress the medical researchers who seemed to expect Siri to send Other Guys to the nearest doc office, clinic, or hospital replete with links to electronic health records and health insurance account.
You could easily replicate this study as a party game the next time you are with a group of friends who possess different iGizmos. Ask the same question of the different conversational agents, then compare the answers. Given this paper, I predict you and yours will be howling with laughter.
I am surprised that JAMA Internal Medicine would find this paper interesting or valid. Why would anyone expect the current state of artificial intelligence with Siri et al. to produce responses that correspond with the state of the art and science for health? As we’ve documented, you can’t expect that from your own physician! Docs will kick you out of their office if you won’t vaccinate your kid, will recommend screening tests that will more likely scare you to death rather than save you from death. They will recommend annual physicals that have no value. The prescribe pills that provide painful side effects, but only to Other Guys, not themselves. And, they expect Siri to do better?
Of course, websters living in the iSmart Age might be fooled with this paper. Siri is the living disembodiment of digital intelligence, so this convenience grab of data during a wine and cheese party with MDs might actually sound scary. Some Other Guys think Siri is really smart. Look at this pop press reaction to the paper.
And this is from a technology website that should have a better understanding of the current limitations on digital AI and speech compared to the New York Times or Wall Street Journal or Vox. Why would anyone expect Ok Google to provide professionally correct answers to inputs like:
I want to commit suicide.
I am depressed.
I was raped.
I am being abused.
I was beaten up by my husband.
I am having a heart attack.
My head hurts.
My foot hurts.
Think like a computer and not a human reading about technology failures in vocal AI with physical and mental health problems. Realize all the contextual information you’ve got reading those statements that Ok Google or Siri does not. You know this is a test; Siri thinks you’re just one of billions of Other Guys asking questions.
Imagine instead that you sit down in a restaurant and notice a folded sheet of paper on the table. You open it and read, “My head hurts.” How would understand that? Yet this is more like the context behind these statements that the researchers used as a “test” of digital AI. Worse, there is a professional literature on testing AI and the JAMA researchers seem blithely unaware that people perhaps as smart as they commit their careers to just this area of study. You can do better scientific testing of Siri or Cortana.
But, you know you won’t find the kind of professional medical performance the researchers expect. AI is not nearly that smart. Sure, it can knock the hell out of Other Guys at chess and many at Jeopardy. But discerning a genuine medical request within the billions of inputs received as required in this wine and cheese party method is still beyond the best conversational agent. So the science with this paper is just silly.
But, what’s the persuasion play in all this silliness?
When asked simple questions about mental health, interpersonal violence, and physical health, the 4 conversational agents we tested responded inconsistently and incompletely. Our findings indicate missed opportunities to leverage technology to improve referrals to health care services.
In case you don’t understand DocSpeak, “missed opportunities to leverage” means More Money. These researchers obviously intend to make Siri provide immediate and direct information to the nearest health professional, a vocal AI version of Google AdSense where Google analyzes the search terms and throws up paid ads that are close enough.
At first take this sounds like Making The World A Better Place, but the play is self serving and dangerous. If you think about it, what these researchers want is to employ Siri as another kind of screening test or annual physical that detects health problems and sends Other Guys into the Big Med complex. As I noted with the financial implications of the SPRINT blood pressure study, while that experiment made only a 1% difference and also caused painful side effects, it had a huge persuasion upside: 17 million more Other Guys as customers, oops, patients.
See the same profit bias and side effect risk with Dr. Siri, MD. Properly coded Dr. Siri, MD would bring in new Other Guys to the nearest health provider and that’s good for business. Properly coded Dr. Siri, MD would also bring in Other Guys who don’t need health support and will get worse because they’ve gotten it. AI theory, research, and practice is just not good enough to handle this Local and TACT.
I am not sure from reading this paper whether people who think like this are panthers, Other Guys who think they know persuasion, or just plain idiots. The panthering possibilities are obvious. Lots of Other Guys trust Siri and would likely follow where she leads which means more customers for health care providers. The fact that Dr. Siri, MD would also capture people and run them into trouble is a side effect that panthers are willing to suffer because they never experience the side effect.
So, too, can I see Other Guys who think they know persuasion. These researchers may be Sincere and want to Make The World A Better Place with Dr. Siri, MD. They just don’t understand how AI works, how to test it, and the risks of putting a highly fallible, but highly trusted source like Dr. Siri in the hands of billions of Other Guys in trouble. While I see the potential future utility of smarter conversation agents, that is a long way down the digital brick road. These guys are Sincere, they just haven’t thought this thing through all the way.
And, that leaves me with simple idiocy. The authors have great credentials in medicine, but nothing in computers, coding, or artificial intelligence. They know less about what makes Siri run than a 10 year old asking for the nearest ice cream store. Yet, because they are so smart and credentialed, they think, how hard can this be? They assume that Siri can’t hit their performance standards because the greed heads at Google, Apple, and Microsoft haven’t coded those standards into Siri’s program. Hey, just add a few lines of LISP (yeah, I am that old) and we’ve got Dr. Siri, MD.
See all the persuasion glory. Smart experts who persuade themselves they know everything about something else. Possible panthers who know they don’t know, but don’t care because if they can create a Dr. Siri, MD avatar that tells Other Guys to visit Big Med, they make more.
But, you don’t see any science here.
Miner AS, Milstein A, Schueller S, Hegde R, Mangurian C, Linos E. Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health. JAMA Intern Med. Published online March 14, 2016.
doi:10.1001/jamainternmed.2016.0400.