In the not-too-distant future, you wake up in the morning to the sound of your phone chirping. It’s a reminder from your Digital Health Assistant (DHA), Dr. Healy, to take your thyroid medication at least a half hour before breakfast. You’re in the process of downing the pill with a glass of water when you suddenly feel lightheaded. As you steady yourself on the counter, Dr. Healy, in its melodically auto-tuned voice, asks if you’re okay.
You answer that you’re feeling faint. After a brief back-and-forth to learn more, Dr. Healy takes the symptoms you just described and cross-references them against the information in your Personal Health Record and the vital sign readings—blood sugar, heart rate, etc.—collected by the med-tracker wristband you got for your birthday last month. Now here’s where things get interesting.
After assessing the situation, the artificial intelligence powering Dr. Healy concludes that your blood sugar is low and that’s the most likely cause of your lightheadedness. Just eat a banana, and you’ll be fine. But given your family history and pre-existing conditions, there’s a substantial chance that you may have something far more serious: a potentially fatal cardiac arrhythmia.
So what does the A.I. inside Dr. Healy do? That depends entirely on what the A.I. has been programmed to value. If the A.I. values you, it will first explain to you all the likely conditions. It will then present you with all the available treatments and provide a detailed analysis of the costs and benefits for each one based on what it already knows about your personal preferences (e.g. you’re a voracious reader, so you would refuse any drug with side effects that could impact your eyesight). In the end, it will help you determine the best solution for you.
If the A.I.’s algorithm is instead optimized solely to deliver efficient health outcomes that minimize the cost to society, the A.I. will first assess your “worth” according to some purely quantitative metric. If that number doesn’t exceed the cost of treatment, then the A.I. will tell you to go ahead and have that banana, because saving your life simply isn’t cost-effective. It will never even present you with other possibilities because it never considers what is actually best for you; the thought would literally never cross its “mind.”
A.I.-driven innovations like automated triage and personalized medicine will lead to better health outcomes for everyone while simultaneously lowering costs so that more people will be able to receive care than ever before. That’s a wonderful thing, but it comes with real and substantial risks.
As artificial intelligences become more and more intertwined with our everyday lives, ethical dilemmas like the one presented above will become increasingly common. To solve them, we will have to find a way to teach artificial intelligences not only how to think, but also how to feel. We will need to teach Dr. Healy some empathy and compassion!
As a human being and the founder of a health technology company, I think about compassion every day. To live by that, I make sure that at HealthTap, compassion is the heart of our technology. Yes, we build products that optimize for positive health outcomes, but we also put just as much emphasis on the care side of the equation.
This human-centric design philosophy inspires us to build products that not only function properly, but also feel right. The question is: How do we actually code for something like compassion? Code is, at bottom, just instructions for a machine, and something qualitative like compassion is, by definition, very challenging to adequately describe in language alone.
We solved that conundrum when building our own A.I. by coding for learning so that we could teach the A.I. compassion instead. We used the data collected from billions of doctor-patient-machine interactions on the HealthTap platform as our foundational training set, but we knew that experience is always the best teacher. With that in mind, we introduced our A.I. to the real world to interact directly with both doctors and patients.
By keeping our basic service free to everyone—regardless of race, sex, or creed—we taught our A.I. that healthcare is a fundamental human right. By making healthcare more accessible through services like telemedicine, we taught our A.I. to always put patients first and to meet them where they are. By enabling people to choose their own course of care after being presented with relevant choices and informative data, we taught our A.I. to value freedom of choice. By inserting delightful user experiences in our apps that have no clinical value and exist only to put a smile on our users’ faces, we taught our A.I. that little gestures can make a big difference. Finally, by always reaching out to our users a few days after their doctor consultations in order to ask how they’re feeling, we taught our A.I. about the connection between compassion and better health outcomes.
Our A.I. continues to learn from each and every interaction on our platform. What’s more, it can see patterns that we mere humans cannot, and it’s already begun to provide us with new insights into a variety of medical conditions. In time, it will provide us with new insights into the human condition as well. We will then use those insights to further refine our A.I.’s compassion algorithm, creating a virtuous cycle in every sense of the term.
At HealthTap, we’re working to build a future where machines (just like humans) are both fantastically intelligent and profoundly compassionate. A future where you get both the banana and the right treatment for you, so you can live a healthier, happier, and longer life!
— Ron Gutman, CEO & Founder, HealthTap