Apple’s ResearchKit Puts Clinical Trials in Your Pocket

Building HIPAA-compliant software has never been easy. Modern apps, served from the cloud and enabled for mobile devices, present even greater challenges. But imagine the potential for medical research, given the hundreds of millions of smartphones deployed globally, each equipped with dozens of sensors.

Last year, when Apple introduced HealthKit for developers, the iPhone suddenly leapt into the ranks of integrated health trackers such as the Fitbit and Jawbone. But the iPhone has one major advantage over most other health-tracking devices: built-in internet connectivity.

Whereas the Fitbit, Jawbone, Nike Plus, Wi-Fi-enabled scales, blood pressure monitors, and similar devices require users to complete a multi-step setup process, the iPhone is ready to send useful data (number of steps walked or run, flights climbed, and many other sensor events) straight to the cloud.

The Fitbit and Fitbit Ultra require additional software installation.

By providing the iOS Health app for free as part of iOS 8, Apple has given consumers a powerful new toolkit for tracking health data. The only problem is that this data has been unavailable to researchers. There has been no way for researchers, doctors, hospitals, or health administrators to access health data collected via HealthKit, even if a patient were willing to give consent. Until now…

The iOS Health App

ResearchKit, officially launching next month, provides a simplified, streamlined user interface framework for health apps to perform HIPAA-compliant clinical trial consent. According to Apple’s ResearchKit website, “With a user’s consent, ResearchKit can seamlessly tap into the pool of useful data generated by HealthKit — like daily step counts, calorie use, and heart rates — making it accessible to medical researchers.”
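
To make that concrete, here is a minimal sketch, using current Swift naming, of how an app might read step counts from HealthKit once the user grants access. The query parameters and printed output are illustrative only:

```swift
import HealthKit

let healthStore = HKHealthStore()

// The quantity type we want to read: step counts recorded by the device.
guard let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount) else {
    fatalError("Step count type unavailable")
}

// Ask the user for read access. Nothing is shared until they consent.
healthStore.requestAuthorization(toShare: nil, read: [stepType]) { granted, _ in
    guard granted else { return }

    // Fetch a handful of recent step samples.
    let query = HKSampleQuery(sampleType: stepType,
                              predicate: nil,
                              limit: 10,
                              sortDescriptors: nil) { _, samples, _ in
        for case let sample as HKQuantitySample in samples ?? [] {
            let steps = sample.quantity.doubleValue(for: .count())
            print("\(sample.startDate): \(steps) steps")
        }
    }
    healthStore.execute(query)
}
```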

Apple has partnered with some impressive names in medical research, listing these on its website: The American Heart Association, Army of Women, Avon Foundation for Women, BreastCancer.org, Dana-Farber Cancer Institute, Massachusetts General Hospital, Michael J. Fox Foundation for Parkinson’s Research, Icahn School of Medicine at Mount Sinai, Penn Medicine, University of Oxford, University of Rochester Medical School, Sage Bionetworks, Stanford Medicine, Susan G. Komen, UCLA Jonsson Comprehensive Cancer Center, Weill Cornell Medical College and Xuanwu Hospital Capital Medical University.

So what can ResearchKit do for the researcher? The ResearchKit developer framework is divided into three primary modules: Surveys, Informed Consent, and Active Tasks. A touch-based signature panel allows an app user to provide informed consent right on their mobile device. The survey module provides a builder for specifying question and answer types, akin to SurveyMonkey, Google Forms, or Wufoo. The Active Tasks module is where active data collection begins.
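
As a rough sketch of how the consent and survey modules fit together (the identifiers, titles, and question text below are invented for illustration, and initializer names vary slightly across ResearchKit versions), a consent-review step and a simple yes/no question can be chained into a single ordered task:

```swift
import ResearchKit

// Informed Consent: a document the participant reviews and signs on-screen.
let consentDocument = ORKConsentDocument()
consentDocument.title = "Example Study Consent"

let signature = ORKConsentSignature(forPersonWithTitle: "Participant",
                                    dateFormatString: nil,
                                    identifier: "participantSignature")
consentDocument.addSignature(signature)

let reviewStep = ORKConsentReviewStep(identifier: "consentReview",
                                      signature: signature,
                                      in: consentDocument)
reviewStep.reasonForConsent = "By agreeing, you consent to join the study."

// Survey: a single yes/no question, akin to a web survey builder.
let questionStep = ORKQuestionStep(identifier: "sleepQuestion",
                                   title: "Sleep",
                                   question: "Did you sleep at least 7 hours last night?",
                                   answer: ORKBooleanAnswerFormat())

// Chain the steps into one task and present it.
let task = ORKOrderedTask(identifier: "onboarding",
                          steps: [reviewStep, questionStep])
let taskViewController = ORKTaskViewController(task: task, taskRun: nil)
// taskViewController.delegate = self  // receives the ORKTaskResult on completion
// present(taskViewController, animated: true)
```

When the participant finishes, the task view controller's delegate receives the captured signature and survey answers bundled in an ORKTaskResult, ready for the app to transmit to the study's servers.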


ResearchKit Signature Panel and Activity Completion

With an active task, ResearchKit allows the user to complete a physical task while the iPhone’s sensors perform active data collection. This data can then be securely transmitted to the cloud for inclusion in the study. For example, Stanford’s MyHeart Counts app enrolled tens of thousands of participants in just the short time since its launch in March, a recruitment pace that traditional clinical trials rarely approach.
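
ResearchKit ships a catalog of predefined active tasks. For example, a fitness check walks the participant through a timed walk while the phone's motion sensors record data (the identifier and durations below are illustrative):

```swift
import ResearchKit

// A predefined active task: walk for 6 minutes, then rest for 1 minute,
// while the phone's sensors record motion and (if available) heart rate.
let fitnessTask = ORKOrderedTask.fitnessCheck(withIdentifier: "sixMinuteWalk",
                                              intendedUseDescription: "Measures aerobic fitness.",
                                              walkDuration: 360,
                                              restDuration: 60,
                                              options: [])

let viewController = ORKTaskViewController(task: fitnessTask, taskRun: nil)
// The delegate receives an ORKTaskResult containing the recorded sensor data.
```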

This is just the beginning. Data collection will not be limited to the sensors native to the iPhone. External devices, communicating over Bluetooth for example, can provide additional data such as heart rate, temperature, and weight.
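
The Bluetooth SIG assigns the Heart Rate service a standard GATT identifier (0x180D), so a minimal Core Bluetooth sketch for discovering nearby heart-rate monitors takes only a few lines (real code would also connect to the peripheral and parse the measurement characteristic):

```swift
import CoreBluetooth

// 0x180D is the Bluetooth SIG's standard Heart Rate service UUID.
let heartRateService = CBUUID(string: "180D")

final class HeartRateScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        // Scanning is only allowed once the radio is powered on.
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [heartRateService], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print("Found heart-rate monitor: \(peripheral.name ?? "unknown")")
        // Next step: connect and subscribe to the 0x2A37 measurement characteristic.
    }
}
```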

According to VentureBeat, “Google also announced last year that it is developing a contact lens that can measure glucose levels in a person’s tears and transmit these data via an antenna thinner than a human hair.” The New York Times also reports this device is being developed by Google in partnership with Novartis.

Glucose Monitoring Smart Contact Lens


Machine Learning: A New Tool for Humanity

Machine learning will have profound and far-reaching effects on our lives. You may have heard the hype, or the fear-mongering. Now let’s take a closer look at what this technology has to offer, and whether there is really anything to fear.

First of all, machine learning isn’t just one thing but a broad set of algorithms, tools, and techniques, combined with advances in computer processing and refined (human) expertise in making decisions based on available data.

There is more data available now than ever before, because modern sensor technology has rapidly decreased in price, size, and power consumption (witness everything from the iPhone to your car to your washing machine). Meanwhile, two decades of revolutionary development in 3D graphics processors, or GPUs, has made video games and movies ever more realistic. Interestingly, the same mathematics these GPUs accelerate, matrix operations, is also at the heart of machine learning.
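
To make that connection concrete, here is a naive, purely illustrative matrix multiply. A GPU runs this same arithmetic massively in parallel, whether the matrices hold 3D vertex transforms or neural-network weights:

```swift
// Naive matrix multiply: C = A × B, where A is m×n and B is n×p.
// The same arithmetic drives 3D transforms and neural-network layers.
func matmul(_ a: [[Double]], _ b: [[Double]]) -> [[Double]] {
    let m = a.count, n = b.count, p = b[0].count
    var c = Array(repeating: Array(repeating: 0.0, count: p), count: m)
    for i in 0..<m {
        for j in 0..<p {
            for k in 0..<n {
                c[i][j] += a[i][k] * b[k][j]
            }
        }
    }
    return c
}

// Example: a 2×3 matrix times a 3×2 matrix yields a 2×2 matrix.
let product = matmul([[1, 2, 3], [4, 5, 6]],
                     [[7, 8], [9, 10], [11, 12]])
print(product)  // [[58.0, 64.0], [139.0, 154.0]]
```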

Finally, today’s learning algorithms, including deep neural networks and support vector machines, are more advanced and easier to use than ever. Together, the algorithms, the GPUs, and the data allow a kind of pattern recognition and inference we call machine learning. Another broad term for the use of this technology is “data science.” In short, machine learning is a new tool for humanity to gain insight into patterns that exist everywhere around us. So what is it good for?

Convolutional neural networks can infer a store’s sales revenue just by examining images of its parking lot. Other algorithms can find patterns of fraud in credit card purchase data or detect intruders in security camera footage. Fund managers can get a jump on the market by predicting trading volume at the open and choosing which day to sell huge blocks of shares. Insurance companies can decide which customers are better risks by analyzing driving records, offering discounts to some while raising rates on others. And of course, self-driving cars, then computers that talk and understand, followed by robots that attack us (or will they)?

The human brain is a master of pattern recognition. Imagine how complex the tiny air movements we call sound must be, and yet speaking and understanding our native tongue is remarkably simple. How could a machine learn such a thing? Yet today, tools like Siri, Google Voice, and Nuance can convert speech to text. Translation and understanding are still out of reach.

The power of machine learning lies in algorithmic ability to find patterns in data, in much the same way that we find patterns in images we see, sounds we hear, behaviors we notice. These tools will touch every area of our lives, much the way the invention of the microscope gave us new insights that changed our view of the world. Insight. Whether used for good or for ill, machine learning algorithms are tools that provide insight.

Artificial intelligence and robots taking over the world are concerns quite a few steps removed from the kind of data analysis machine learning algorithms provide. Let’s look more closely at a simple machine learning problem to understand why. It’s a classic: distinguishing the three common species of the iris flower, namely Iris versicolor, Iris virginica, and Iris setosa.

We can learn to identify these species fairly reliably, and so can a machine learning algorithm. We don’t even need photographs, just a ruler. We measure a few characteristics, such as petal length and sepal length (the sepal is the flower’s outer enclosure). Voila, we have data! The classic iris data set of such measurements is freely available online.

Looking at an image of a single flower of each species, it’s fairly easy to see that one of these flowers is not like the others (the Setosa). And while the Versicolor and Virginica may look more similar, a quick plot of the measurements shows that as groups they, too, are different enough to separate, with the Setosa separated even further.

What is learning? Differentiating like from other. Identifying new examples as similar to what we know. We learn language by separating the sounds we hear into vowels, consonants, phonemes, words, phrases, and meanings. We learn the laws of physics (at first) by experimenting with water, blocks, and the ground. We differentiate the behaviors that separate a nice full water glass from a spill, a stack of blocks from a mess, and a stroll from, again, a spill. Differentiation is a kind of learning.
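
Here is a minimal sketch of that kind of differentiation: average a few (petal length, petal width) measurements per species into a centroid, then assign a new flower to the nearest one. The numbers are typical of the classic data set but hand-entered here purely for illustration:

```swift
// Nearest-centroid classification on (petal length, petal width) in cm.
// Training data: a few illustrative samples per species.
let samples: [String: [(Double, Double)]] = [
    "setosa":     [(1.4, 0.2), (1.3, 0.2), (1.5, 0.3)],
    "versicolor": [(4.7, 1.4), (4.5, 1.5), (4.0, 1.3)],
    "virginica":  [(6.0, 2.5), (5.1, 1.9), (5.9, 2.1)],
]

// "Learning" here is just averaging: one centroid per species.
let centroids = samples.mapValues { points -> (Double, Double) in
    let n = Double(points.count)
    let sum = points.reduce((0.0, 0.0)) { ($0.0 + $1.0, $0.1 + $1.1) }
    return (sum.0 / n, sum.1 / n)
}

// Predict by picking the species whose centroid is closest.
func classify(petalLength: Double, petalWidth: Double) -> String {
    func distance(to c: (Double, Double)) -> Double {
        let (dx, dy) = (petalLength - c.0, petalWidth - c.1)
        return (dx * dx + dy * dy).squareRoot()
    }
    return centroids.min { distance(to: $0.value) < distance(to: $1.value) }!.key
}

print(classify(petalLength: 1.5, petalWidth: 0.2))  // setosa
print(classify(petalLength: 4.4, petalWidth: 1.4))  // versicolor
```

Notice that nothing here resembles thinking: the "model" is three averaged points, and prediction is a distance comparison. Yet it separates the species surprisingly well, which is exactly the point.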

It is just that kind of learning that machine learning algorithms perform. Not thinking, just the ability to interpret the data an algorithm has seen to make predictions on examples the algorithm hasn’t yet seen. Obviously, there’s a lot more to it than that. Stay tuned for more posts where I will argue both that machine learning will be an incredible tool for humanity, and that it won’t lead to a robot president.