Velocity: Meaning from Motion

When Velocity wanted to build a machine learning system to analyze human motion, Concrete Interactive incubated the innovation.

We often provide complete engineering resources to start-ups and enterprises that want to launch new endeavors. Velocity’s scale requirements (over 20 million new records per day, peak transactions of 80,000 per second, and five-millisecond latency) made this a fast-paced project involving deep learning, iOS app development, and extensive knowledge of real-time bidding systems for digital advertising.

Velocity’s insight was to build the “Operating System for Human Motion”: a suite of machine learning training tools and real-time prediction based on motion sensor data from millions of smartphones. Their first application determines real-time attention from the way a person physically moves with their phone, a metric Velocity calls “Receptivity.” In simple terms, the way you move says a lot about whether you’ll respond to an ad. If you are driving a car, for instance, that’s a bad time; a passenger is much more likely to click and engage.
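To make that concrete, here is a minimal sketch of the kind of feature extraction such a motion pipeline might start with. The window length, feature names, and interpretation comments are illustrative placeholders, not Velocity’s actual model:

```python
import numpy as np

def receptivity_features(accel_xyz: np.ndarray, hz: int = 50) -> dict:
    """Summarize one window of accelerometer samples (shape: n_samples x 3, in g)."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)   # overall motion energy per sample
    centered = magnitude - magnitude.mean()
    return {
        "mean_g": float(magnitude.mean()),          # steady component (gravity plus sustained motion)
        "std_g": float(magnitude.std()),            # jitter: how much the phone is being handled
        "peak_g": float(magnitude.max()),           # sudden events, e.g. braking or a bump in the road
        "zero_crossings": int((centered[:-1] * centered[1:] < 0).sum()),  # rough rhythm measure
    }

# A production system would feed windows of features like these (or the raw
# signals themselves) into a trained classifier that outputs a receptivity score.
```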

 

“The Concrete Interactive team was instrumental from concept to production launch, bringing new ideas, new architecture, and a really vast set of skills to bear when building the Velocity platform.”

 

Game Optimization: A FarmVille Sheep (Case) Study

A sheep stands in his field. Munching. Baa-ing. Wait, is that—spaghetti on his head? A bib around his neck? Aww, that’s the cutest sheep I ever did see! But why is that darn sucker moving so slow?

In fact, our poor wooly friend is an animation from Zynga’s game FarmVille. When we worked on this game, 140 million of you were coming back daily to plant and sow. Well, not YOU, of course! But if no one admits to it, where did all those players come from?

When optimizing art assets for Zynga’s FarmVille, we noticed that some graphics had huge performance problems. In fact, some of players’ favorite animals on the farm slowed game play to a crawl. And since Zynga depends on solid game performance to retain its user base, this was serious business.

Our first instinct was to build an art optimizer to simplify the artwork and make it faster via a process called rasterization. And it worked! Great, done—that’s all folks, the end.
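In spirit, the optimizer pre-renders expensive vector artwork into a cached bitmap so the game pays the drawing cost once instead of every frame. Here is a toy sketch of that idea, using Pillow as a stand-in renderer; the shapes and sizes are made up:

```python
from PIL import Image, ImageDraw

def rasterize(draw_commands, size=(32, 32)):
    """Pre-render vector-style draw commands into a small bitmap once, so the
    game can blit a cached image instead of re-tracing every path each frame.
    draw_commands: list of (shape_name, bounding_box, fill_color) tuples."""
    frame = Image.new("RGBA", size, (0, 0, 0, 0))
    pen = ImageDraw.Draw(frame)
    for shape, box, fill in draw_commands:
        getattr(pen, shape)(box, fill=fill)   # e.g. pen.ellipse((2, 8, 30, 26), fill="white")
    return frame

# One-time cost at asset-build time:
sheep = rasterize([("ellipse", (2, 8, 30, 26), "white"),   # body
                   ("ellipse", (20, 2, 30, 12), "grey")])  # head
# Per-frame cost in the game is now a cheap bitmap copy rather than
# hundreds of individually drawn noodle strokes.
```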

BUT we all wanted to know more—why were some artists better than others at making cute little animals that would also show up on your screen with lightning speed? 

There were plenty of other sheep on the farm that looked great and didn’t slow game play, so why was our spaghetti guy causing all sorts of trouble? It turns out it wasn’t the sheep—it was the spaghetti.

A “get out of the building” moment

Suddenly, the problem the spaghetti bowl caused made so much sense—the artist had individually drawn each noodle in glorious pasta-riffic detail.

Displayed up close on a large-screen monitor, all that detail looked great. But in the game, the entire bowl of spaghetti was just a few pixels across, so the detail was superfluous.

Our optimizer tool allows us to easily spot animation performance problems and improve them.

At that moment, it occurred to us that the problem here wasn’t optimization, it was training.

I immediately scheduled a meeting with the FarmVille studio art director and a few of the artists on the team. They were really surprised at the impact a few extra pen strokes had on game performance.

To find performance bottlenecks in other artwork, we next built the Zynga Analyzer, a tool that integrated directly with the artist’s workflow. With each action in the authoring environment, an artist could see the impact on performance.   

The Analyzer tool helped give everyone a sense of how much drawing “budget” they had for a given character. Some drawing techniques, such as transparency and shading, had more impact on performance than others.
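Conceptually, the budget report is just a weighted sum of drawing operations compared against a per-character allowance. The operation costs below are made-up weights for illustration, not the Analyzer’s real numbers:

```python
# Illustrative only: the real Analyzer hooked into the authoring tool;
# these per-operation costs are invented for the sketch.
OP_COST = {"stroke": 1.0, "fill": 0.5, "gradient": 3.0,
           "transparency": 4.0, "filter_shading": 6.0}

def asset_budget_report(ops, budget=100.0):
    """ops: list of (operation, count). Returns spend per operation and headroom."""
    spend = {op: OP_COST.get(op, 1.0) * n for op, n in ops}
    total = sum(spend.values())
    return {"per_op": spend, "total": total, "remaining": budget - total}

print(asset_budget_report([("stroke", 40), ("transparency", 8), ("gradient", 5)]))
```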

Everyone wanted to install it to find out what they were doing right and which art assets could be improved. Given these constraints, Zynga artists proved incredibly clever at making cute illustrations that also performed well.

The Zynga Analyzer depicts how the artist has consumed their asset “budget.”

What started as an engineering optimization problem had morphed into an artist training program. The tool that exposed total impact on game performance now instilled a bit of self-reinforcing competition within the art team. Artists were spurred on to create illustrations that looked great and performed well. Because character performance was exposed during the creation process, artists felt they had skin in the game, so to speak.

And finally, for illustrations with no fat to cut, the Optimizer tool gave an animation a bit more pep in its step. Altogether the effort once again secured each special animal’s place on the farm, including our little sheep who can now munch spaghetti till the cows come home.

Why Mobile Apps Fail

Apps are Pets

When beginning a mobile, web, or other app software project, keep in mind that it’s more like adopting a pet than building a product. Software needs continual care, maintenance, and feature development. Users expect updates, whether to take advantage of the latest mobile operating system release, to fix a bug that somehow slipped through quality assurance, or simply to add features. Building an app is not a “one-time and you’re done” operation.

Product / Market Fit

Consult the experts: whether it’s Marc Andreessen (co-creator of the first web browser), Steve Blank of Stanford, Paul Graham of Y Combinator, or Sean Ellis of LogMeIn, they all agree it’s about getting the right product to the right people. That said, your app’s features should depend heavily on who the app is marketed to. Achieving this takes a lot of the “get out of the building” thinking promoted by Eric Ries in his book The Lean Startup. Interacting with your market as soon as possible is paramount.

Agile software development process, user-centric design, and Lean thinking can all help you discover what features to build, but all the theory in the world won’t help unless you learn from your market, measure feedback, and build the features that users desire: you must go through this “build, measure, learn” cycle a few times to get it right.

The Build, Measure, Learn cycle: achieving product/market fit through small, iterative milestones

Accrue Technical Debt Wisely

You may be initially wowed by a software team that can deliver features fast and furiously, especially when the features look cool and progress comes swiftly, but in just a few short months a mobile app project can grind to a halt. Why? At the beginning of a project, software developers often build features without building the surrounding infrastructure to support them. It’s like building a glorious bathroom, complete with steam shower, in a house with no plumbing.

The industry term for this is “technical debt.” While this form of indebtedness can get you a quick jolt of progress, it can also come back to bite you. For quick experiments, technical debt may be the right option, but meaningful, high-quality app development means building it right: a robust software architecture, infrastructure that can scale to a massive audience, and the security necessary to protect both your users and your investment.

Beware Schedule

There is a reason that, on average, large IT projects run 45 percent over budget and 7 percent over time while delivering 56 percent less value than predicted. In one survey of 600 people closely involved with software projects, 78% of respondents reported that the business is “usually or always out of sync with project requirements.” It is extremely difficult to correctly estimate large software projects, so smart teams have stopped trying.

But without an estimate, how will your app hit deadlines and a budget? The secret, once again, is in the agile process: you can correctly estimate software deliverables over the short term. Agile development promotes short “sprints,” and we suggest a one-week period. This way, your team releases a fully functional, complete product every week. And since you are learning from your users, what you do over the next few months will be in direct response to their usage and feedback. Think of a product initiative as an experiment whose goal is to learn what a market wants, and deliver it.

Beware Users

Asking your users directly what they want (or don’t want) is a pitfall to avoid. You are the innovator, and you understand the possibilities for future directions of your product better than anyone, including your users, so asking them is asking for trouble. Instead, simply observe them. Focus groups are notorious not only for their expense but also for their “false sense of science.” Studying users behind a one-way mirror may have worked well in that Mad Men episode, but for software, the “Starbucks Method” is about a million times cheaper and much more insightful. Get into the cafe, hand out some twenties, or buy people lunch in exchange for watching them use your product.

There is no harm in providing a few in-app survey questions.  Take a look at Qualaroo for how to do this well. For building community, GetSatisfaction is a good bet. Bottom line: Target your questions to the specific user experience, not the overall product.

Beware Apple

As the recent “Downpocalypse” of the Apple Developer Center demonstrated (no new iPhone apps could be created for over a week!), hitching your app to a single horse is a dangerous move. Though an initial iOS-only release makes sense in many cases, building with cross-platform mobile technologies like HTML5, Adobe PhoneGap, Ludei, or Unity allows your app to diversify its bets, placing expensive native features only where they’re needed. This way you can release on iOS, Android, and the web, even PC, Mac, and gaming consoles.

Choose the Right Team

Select a development team that will maximize your budget and give the best value. Sometimes the cheapest guy on Craigslist, Upwork, or RentACoder is the right way to go, such as for one-off experiments, or when a quick-and-dirty draft of an app on a shoestring budget is expected to be tossed and rewritten from scratch. But for most projects, it makes sense to proceed with a team that can provide end-to-end services, engage with your users, and help guide you strategically to that nirvana of mass adoption.

IoT Wearables: Beyond Steps and Sleep

Machine learning applies to so many things. When Terry Gross spends an hour talking about artificial intelligence on Fresh Air, you know robots and machine learning are top of mind for the public. In her interview with John Markoff, author of Machines of Loving Grace, he cites a broad definition of AI: “A robot can be … a machine that can walk around, or it can be software that is a personal assistant, something like Siri or Cortana or Google Now.”

Modern artificial intelligence and machine learning techniques apply so broadly that they will touch every aspect of our everyday lives (and in some areas already do). At Concrete Interactive, we have chosen to focus on human motion learning: to capture, characterize, and make recommendations on how to improve any movement a human can perform. And just think of how many movements a person can do!

Drawing on over a decade of experience with sensors, data acquisition, and control systems, our machine learning techniques are specialized to detect nuanced motion, to see through the noisy signals that sensors produce, and to identify, count, and judge accuracy on thousands of different motions. Weightlifting form, a beautifully aligned yoga pose, the perfect golf swing, and medical-grade fall prediction are all examples of how machine learning can help people perform better, improve faster, and reduce injury.
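As a flavor of the signal processing involved, here is a deliberately simple repetition counter: smooth the noisy accelerometer magnitude, then count prominent, well-separated peaks. Our production models are learned rather than hand-tuned like this sketch, and every constant below is illustrative:

```python
import numpy as np

def count_reps(signal: np.ndarray, hz: int = 50, min_gap_s: float = 0.6) -> int:
    """Toy repetition counter for a 1-D accelerometer-magnitude signal."""
    width = max(hz // 5, 1)                           # ~200 ms moving-average window
    smooth = np.convolve(signal, np.ones(width) / width, mode="same")  # suppress sensor noise
    threshold = smooth.mean() + smooth.std()          # what counts as a "prominent" peak
    reps, last_peak = 0, -10 ** 9
    for i in range(1, len(smooth) - 1):
        is_peak = smooth[i] > threshold and smooth[i] >= smooth[i - 1] and smooth[i] > smooth[i + 1]
        if is_peak and i - last_peak >= min_gap_s * hz:   # ignore double-counting within one rep
            reps += 1
            last_peak = i
    return reps
```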

Beyond Steps & Sleep

What sensors can we use? To start, the accelerometers already in the smartphone in your pocket or purse. Then the wearable on your wrist, the one clipped to your shoe, and the ones coming that none of us even knows about yet.

Our mission is to capture, characterize and improve human motion.

 

Our platform is designed to integrate with many wearables.

Think of Photoshop, where you can edit images captured with many different cameras, at many different resolutions, in many formats. Similarly, wearables may have different numbers of sensors and may be worn on different parts of the body, but they are all measuring the same motions we wish to track.
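A rough sketch of that “Photoshop for motion” idea is to map each device’s raw payload into one canonical sample format. The device, field names, units, and axis mapping below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    t: float       # seconds since the start of the recording
    ax: float      # acceleration in g, canonical device-independent axes
    ay: float
    az: float
    site: str      # where the device is worn: "wrist", "shoe", "pocket", ...

def from_wrist_tracker(raw: dict) -> MotionSample:
    # Hypothetical tracker that reports timestamps in ms and milli-g with its own axis order.
    return MotionSample(t=raw["ts_ms"] / 1000.0,
                        ax=raw["y"] / 1000.0,
                        ay=raw["x"] / 1000.0,
                        az=raw["z"] / 1000.0,
                        site="wrist")
```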

Garmin Vivosmart Fitness Tracker

The Unseen Computational Force Within All Things

“Even rocks compute,” a friend once remarked. I won’t comment on the sobriety of either of us, but at the time I just shot him a knowing look and nodded contemplatively. But what did he mean?

Maybe he meant that the reflections and refractions off rocks, or even better, crystals, perform a sort of mapping. The light waves come in from one direction, then reflect, refract, scatter, and project on to surrounding surfaces. Sometimes these light projections are beautiful, regular, even useful.

Light reflections perform a computational non-linear optical transfer function.

Light, being a vibration of electromagnetic radiation in the optical part of the spectrum, has a much higher frequency than sound waves, but like light, sound is a vibrational energy that imprints change on the resonance of the objects around it. Taken this way, the quandary of the tree falling in the forest resolves in the affirmative, because someone is always there to witness it: namely, the tree and the forest itself.

The field of archeoacoustics explores the innate acoustical properties of artifacts via audio analysis: a study of the essential vibrational qualities of artifacts and environments. Examining a 1969 claim by Richard G. Woodbridge III, the MythBusters report that it is in fact not possible to resurrect ancient Roman voices from the grooves carved by their tools as they spun around a potter’s wheel 6,500 years ago. Maybe not yet, anyway.

The vibrations from our voices to our footsteps are like tiny tremors impacting the matter around us. And whether this matter is capable of recording them may have more to do with our playback technology than with the indisputable fact that what we do influences the things around us.

Can a potter’s tool inadvertently record sound, the way a stylus carves grooves into a vinyl record?

Writing circa 1902, the mathematician Charles Sanders Peirce observed, “Give science only a hundred more centuries of increase in geometrical progression, and she may be expected to find that the sound waves of Aristotle’s voice have somehow recorded themselves.”

A rock, a tree, the earth: animals used these as computational devices through echolocation long before humans evolved. Zoologists Roger Payne and Douglas Webb calculated that before ship traffic noise permeated the oceans, tones emitted by fin whales could have traveled as far as four thousand miles and still have been heard against the normal background noise of the sea. Whales, bats, dolphins, and recently even some blind humans use echolocation to “see” objects by listening to reflections of sounds they themselves emit. The computation is performed by the reflecting objects themselves: they transform the emitted sound energy, shifting its frequency and waveform so that the returning signal carries the characteristics of whatever influenced it, such as distance, size, hardness, maybe even “tastiness.”

Illustration of a dolphin finding food by echolocation.

 

Is there an essential vibrational signature to all things that can be elucidated through computation? Yes, there is. And the next part of this series will introduce the modern machine learning techniques and sensor technologies being employed to further illuminate the useful (and perhaps mystical) properties in everything around us.

Brett Bond is President of Concrete Interactive, a software development and machine learning firm based in San Francisco and Santa Monica. When not writing software, Brett enjoys practicing yoga, preparing the nursery for his soon-to-arrive daughter, and building large-scale fire displays.

 

Nicole Kidman’s Prosthesis Fools Better-Than-Human Face Recognition Algorithm

Last month, in his keynote talk at the NVIDIA GPU Technology Conference, Andrew Ng (Baidu Chief Scientist, Stanford professor, Google Brain team founder) described face comparison technology he and his team at Baidu developed. If you haven’t taken his Coursera machine learning course, that’s OK; this post doesn’t assume technical knowledge of machine learning.

The challenge is to compare two pictures containing faces and decide whether they show the same person or two different people. In the experiment, 6,000 pairs of images were examined, both by humans and by algorithms from teams around the world. Three teams, including Baidu, Google, and the Chinese University of Hong Kong, achieved better-than-human recognition performance. The Baidu team made just 9 errors out of the 6,000 pairs.
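The verification task itself is easy to state in code. Assuming a pretrained model (here just a function `embed` that maps a face image to a vector), the system compares the two embeddings against a similarity threshold; the threshold below is illustrative, not Baidu’s:

```python
import numpy as np

def same_person(img_a, img_b, embed, threshold=0.6):
    """Decide whether two face images show the same person, given an embedding model."""
    a, b = embed(img_a), embed(img_b)
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cosine > threshold

def error_count(pairs, labels, embed):
    """Score a list of (image_a, image_b) pairs against ground-truth labels,
    as in the 6,000-pair benchmark described above."""
    predictions = [same_person(a, b, embed) for a, b in pairs]
    return sum(p != y for p, y in zip(predictions, labels))
```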

Here is a slide from Professor Ng’s talk with images of the ones they got wrong:

Face Recognition

You may notice that the top-left image pair is of the actress Nicole Kidman, which the Baidu system incorrectly classified as two different people. What may be less obvious is that the second image of Ms. Kidman was taken from her film The Hours, in which she is wearing a prosthetic nose.

Nicole Kidman in "The Hours"
Nicole Kidman in “The Hours”

Here are the overall results from implementations by teams around the world; the dashed line indicates human performance at the same task. People typically think of recognizing faces as an innately human skill, but once again, machine learning algorithms are now capable of equaling or outperforming humans.

More Human than Human

In his talk, Professor Ng credits two major factors for the vast improvements in machine learning technology over the past five years. As an analogy, think of launching a rocket into orbit: we need both 1) a giant rocket engine and 2) lots of rocket fuel.

The rocket engine in his scenario is the incredible computational performance improvement brought to us by GPU technology. The fuel, then, is access to huge amounts of data, coming to us from the prevalence of internet-connected sensors, online services, and our society’s march toward digitization.

To perform this astounding face comparison judgment, algorithms are trained on facial data. The flow chart in the image above shows that if the algorithm is not performing well on the training data, we need a more powerful rocket engine.

The Talon 2.0 high-performance computing system

 

The second part of the flowchart above illustrates that if the algorithm is learning the training data quite well, but performing poorly when presented with new examples, then perhaps more “rocket fuel” is required, so gathering more training data would be the logical approach to improve the system.
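That diagnostic can be written as a tiny decision rule: compare training error with error on held-out examples, and decide whether you need a bigger engine or more fuel. The target and gap values below are illustrative:

```python
def next_step(train_error: float, val_error: float,
              target: float = 0.05, gap: float = 0.02) -> str:
    """Hedged sketch of the 'engine vs. fuel' diagnostic from the talk."""
    if train_error > target:                     # can't even fit the training data
        return "bigger model / longer training (more rocket engine)"
    if val_error - train_error > gap:            # fits training data but generalizes poorly
        return "more training data (more rocket fuel)"
    return "good enough: ship it or tighten the target"

print(next_step(train_error=0.01, val_error=0.08))  # -> more training data (more rocket fuel)
```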

Professor Ng compares the benefits of high-performance computing with cloud computing and suggests that better training performance may be achieved with, say, 32 GPUs in a co-located rack than with a larger number of servers running in the cloud. His reasoning is that communication latency, server downtime, and other failures are more prevalent in cloud-based systems because they are spread across more machines with more network connections.

Machine learning has been demonstrated to be good at many things. The recent improvements are not limited to face comparison; in the same talk, Baidu’s speech recognition system was shown to perform well even in noisy environments.

Health IT 2015 Summary

Secure Texting

Pager Explosion

Now that the 2015 HIMSS conference in Chicago has wrapped up, I will try to summarize the trends I observed and how Concrete Interactive fits in. It is clear that secure text messaging is a much-needed feature in healthcare. There are at least two established companies aggressively pursuing it, TigerText and Imprivata (via their Cortext product), and several startups presenting at HIMSS: Diagnotes, MyCareText, and Cotap.

As we know, TigerText just closed a $21M VC round. They claim to have 300 enterprise customers, mostly in healthcare, including 4 of the largest for-profit hospital chains.

What isn’t clear to me is whether secure messaging is a separate app or really a feature to be used with the EHR (Electronic Health Record) apps that health companies already have. Imprivata’s Cortext, for example, is positioned more as a feature of their larger system.

However, as a separate app, secure texting that follows the BYOD (bring your own device) model (yes, this is literally the way they talk about it) is a very attractive feature that many people want, and it could provide a solid scenario for deeper involvement or integration at the custom development level.

IoT Health

Medical Devices

Another clear area of expansion is medical device connectivity. For example, Qualcomm Life bought Healthy Circles, a deal supposedly in the $375M range. It is an iPhone app for continuous care: the patient goes home and plugs a local Bluetooth/3G router into a wall outlet, and all the continuous-care devices connect through it, from Class 2 FDA-approved medical devices such as blood pressure monitors, step counters, pulse, temperature, and glucose monitors, to home ventilators and other Class 3 (high-risk) devices.

The physician gets a portal. The patient can view and augment the data on the iPhone, though the app isn’t even required. This pattern is repeated over and over by other companies: device, connectivity, app, cloud-based portal.

Machine Learning

Eye Code

The industry is only just awakening to the fact that data science will play a big role. Channels of information such as medical devices and apps are beginning to provide the big data it will use. I made a nice connection at Wolters Kluwer. They are already doing rule-based processing to clean and de-duplicate health data: if a doctor writes COD, they expand that to codeine. But they want to improve their systems via natural language processing (NLP).
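A toy version of that rule-based expansion looks like the snippet below; the dictionary entries are invented examples, not Wolters Kluwer’s actual rules:

```python
# Illustrative abbreviation expansion; real systems handle context and ambiguity.
ABBREVIATIONS = {"cod": "codeine", "asa": "aspirin", "hctz": "hydrochlorothiazide"}

def expand(note: str) -> str:
    """Replace known abbreviations in a free-text clinical note."""
    return " ".join(ABBREVIATIONS.get(token.lower(), token) for token in note.split())

print(expand("Rx COD 30mg"))   # -> "Rx codeine 30mg"
```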

I also met with Piers Nash from the University of Chicago at a Genomics SIG. He’s working with NCI and already has 6 petabytes (PB) of genomic data from more than 10,000 patients online and available to the public (after a straightforward application process). He’s looking to host algorithms next and run compute cycles from virtual machines (PaaS-style, like AWS). One basic problem they are trying to improve is referred to as single nucleotide variation (SNV) calling. Each person’s DNA is slightly different, because we are different people; the trick is to identify which nucleotides (DNA letters) differ because of normal genetic variation between people versus mutations that cause cancer. One interesting aspect of this problem is that as algorithms improve, past recommendations may become invalid, and there may be a liability aspect at work. Samsung Genomics was also in attendance at this meeting. They are launching an initiative to sequence tumors and make recommendations, but it’s similar to others already out there, such as Paradigm.
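To see why SNV calling is tricky, here is an extremely simplified illustration that compares a tumor sample’s base calls with a matched normal sample and the reference genome. Real callers also model sequencing error, read depth, and population-level variation:

```python
def candidate_somatic_snvs(reference: str, normal: str, tumor: str):
    """Toy somatic-variant caller over aligned base strings of equal length."""
    calls = []
    for pos, (ref, nrm, tum) in enumerate(zip(reference, normal, tumor)):
        if tum != ref and tum != nrm:
            calls.append((pos, ref, tum))   # possible cancer-driving mutation
        # tum != ref but tum == nrm would be ordinary germline variation
    return calls

print(candidate_somatic_snvs("ACGTAC", "ACGTAC", "ACGAAC"))  # -> [(3, 'T', 'A')]
```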

Also at the genomics meeting was Michael Hultner, the Chief Scientist for Lockheed Martin’s health and life sciences division. They are bidding as are many others for the UK’s 100,000 genomes project. He says their expertise lies in the integration of many technologies and thinks they are well positioned in the health space (not just outer space). So it’s fascinating to see the kinds of companies entering or expanding in this market.

Big Picture Strategy

The healthcare IT space is rapidly expanding as healthcare laws such as Meaningful Use Stage II come into effect and increase the incentives to leverage advances in technology. There are many land grabs playing out. Any space worth entering will have competition, but based on my assessment of the overall quality in the space, I believe Concrete Interactive is well positioned to innovate and stand up great apps against much larger players.

HIMSS is on FHIR

Bed (not booth) babes tout their wares

This year’s Healthcare Information and Management Systems Society conference in Chicago is a veritable candy store of high-tech healthcare. Yes, the smart hospital beds and baby-monitoring bracelets are fascinating. But perhaps the highest-impact, most impressive technology on offer is what you can’t see: the software. Though it has about as much shazam as a bedpan, the coming health communication infrastructure known as HL7 FHIR (pronounced like “fire”) will allow access to the coveted Electronic Health Record (EHR) from many new applications and devices.

An easy-to-read diagram
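Because FHIR exposes the EHR as plain REST resources, reading a record can be as simple as an HTTP GET that returns JSON. A minimal sketch; the base URL and patient ID are placeholders for whatever server you test against:

```python
import requests

# Hypothetical FHIR DSTU2 endpoint; substitute your own test server and resource ID.
FHIR_BASE = "https://fhir.example-hospital.org/baseDstu2"

response = requests.get(f"{FHIR_BASE}/Patient/12345",
                        headers={"Accept": "application/json+fhir"})
patient = response.json()
print(patient["resourceType"], patient.get("name"))   # -> Patient [...]
```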

Also very impressive, and a bit more visible, were the beautiful mobile workflow apps like NextGen’s “Go for iPad.” What I like about this electronic health record and dictation recording tool is that it does not try to do everything. The heavy lifting of setting up records is done on the desktop (“templating” in healthcare parlance), while on-the-go actions such as dictation and prescription refills can be executed in short order on the iPad.

NextGen Go

I also learned that Greenway, a software provider of Practice Management (PM) and EHR tools, has an app marketplace (think iTunes). Topping their offering is Phreesia, an iPad check-in app that can replace all that form-filling in the doctor’s office with a few taps on a touchscreen.

The Internet of Things (IoT) was also present, from Tyco’s tracking bracelets for babies and elders to decibel-logging sensors that monitor noise levels. Quietyme, a HealthBox and Gener8tor accelerator graduate, establishes a mesh network of small volume monitors in each hospital room, the corridor, the nurses’ station, and so on, and performs some fancy data analytics (in partnership with Miosoft and Zero Locus). CEO John Bialk says that by comparing noise levels in patient rooms with patient surveys, they can document and predict which noisy areas are having a negative impact on healing. And from Ascom, voice over internet protocol (VoIP) portable devices are like little cordless phones that nurses can use on the local area network (LAN); their Android device even supports internet instant messaging.

Mesh networking decibel monitor

Thank you to all those who visited with Concrete Interactive, and those who described their wonderful products, software, services and innovation.

Chris Isham from Sidus BioData, Chris Andreski from Ascom, Suzy Fulton from Greenway, Bernard Echiverri from Corepoint, Piers Nash from University of Chicago, Ben Bush from Orchard Software, Mark Lynch from Tyco Security Products, Huey Zoroufy from Quietyme, Matt Ward from Imprivata, Stevie Bahu from Modis Health IT, Michael Hultner from Lockheed Martin, Sungsoo Kang from Samsung.

 

Amazon AWS at HIMSS 2015

Concrete Interactive is available for meetings at HIMSS 2015, the healthcare IT conference in Chicago this April 12-16.

And I know you’ll be almost as excited to learn that, for the first time this year, Amazon will be making a full-fledged appearance at HIMSS. What’s even more remarkable is that some of the leaders of the AWS HIPAA compliance team will be present and accepting meetings, including Chris Crosbie (HIPAA Solutions Architect), Jessie Beegle (Business Development Manager for the Healthcare Industry), and Kenzie Kepper (AWS Healthcare Marketing Team).

You can request a meeting if interested in learning more about hosting HIPAA applications on AWS. Here’s the signup link: http://www.aws.amazon.com/events/aws-himss-events.

In my experience with the Amazon Popup Loft in San Francisco, the AWS team is very giving of their time and expertise. These aren’t your typical Apple “Genius” types who fall into a prescribed script about fixing your iPhone. The solution architects and technical team members who are available at the Popup Loft are the actual people with inside technical knowledge of the AWS service, and they have been happy to dive into our application details.

So, how does one implement a HIPAA-compliant software application on Amazon Web Services? Back when Concrete Interactive built our first HIPAA app in 2012, assigning responsibility across the network infrastructure was quite a challenge. Nowadays, Amazon has drawn a bright line at the hypervisor, the piece of network virtualization software that manages a particular application’s server. Their shared responsibility model ensures that from the hypervisor outward, throughout the rest of the AWS network, it is Amazon’s responsibility to secure PHI (protected health information).

AWS shares responsibility for PHI with BAA signatories like Concrete Interactive

 

AWS specifically supports HIPAA compliant infrastructure through six of their services today: Amazon EC2, Amazon EBS, Amazon S3, Amazon Redshift, Amazon Glacier, and Amazon Elastic Load Balancer.

Specifically on EC2, you must use a dedicated instance. This comes with a higher monthly fee, but it’s peanuts compared with building your own compliant datacenter.
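For example, with the AWS SDK for Python (boto3), requesting dedicated tenancy is a single parameter on the launch call. The AMI ID, instance type, region, and volume size below are placeholders, not a prescribed configuration:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",          # placeholder AMI ID for your hardened base image
    InstanceType="m4.large",         # placeholder instance size
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},          # dedicated hardware, as HIPAA eligibility requires
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"Encrypted": True, "VolumeSize": 100},   # encrypt volumes that may hold PHI
    }],
)
print(response["Instances"][0]["InstanceId"])
```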

According to Amazon’s HIPAA compliance video, over 600 companies have signed their Business Associate Agreement (including us!). This agreement allows our HIPAA-compliant apps to be validated, and it spells out where responsibility for PHI lies, depending on which side of the hypervisor line the data is used, stored, or transferred.

If you are interested in meeting with Concrete Interactive at HIMSS 2015, please drop us a line. In partnership with Amazon AWS, and FDA Compliance Advisor David Nettleton, we hope to shed light on any of your HIPAA, healthcare, web or mobile app development questions.

Apple’s ResearchKit Puts Clinical Trials in Your Pocket

Building HIPAA-compliant software has never been easy, and modern apps, served from the cloud and built for mobile devices, present even greater challenges. But imagine the potential for medical research, given the hundreds of millions of smartphones deployed globally, each equipped with dozens of sensors.

Last year, when Apple introduced HealthKit for developers, the iPhone suddenly leapt into the ranks of integrated health trackers, along the lines of the Fitbit and Jawbone activity trackers. But the iPhone has one major advantage over most other health tracking devices: built-in internet connectivity.

With a Fitbit, a Jawbone, Nike+, a wifi-enabled scale, a blood pressure monitor, or similar devices, users need to complete a multi-step setup process; the iPhone is ready out of the box to send useful data about steps walked or run, flights climbed, and many other sensor events straight to the cloud.

The Fitbit Ultra requires additional software installation.

 

By providing the iOS Health app for free as part of iOS 8, Apple has given consumers a powerful new toolkit for tracking health data. The only problem is, this data is unavailable to researchers. There has been no way for researchers, doctors, hospitals or health administrators to access health data collected via HealthKit, even if a patient were willing to give consent. Until now…

The iOS Health App

ResearchKit, officially launching next month, provides a simplified, streamlined user interface framework for health apps to perform HIPAA-compliant clinical trial consent. According to Apple’s ResearchKit website, “With a user’s consent, ResearchKit can seamlessly tap into the pool of useful data generated by HealthKit — like daily step counts, calorie use, and heart rates — making it accessible to medical researchers.”

Apple has partnered with some impressive names in medical research, listing these on its website: The American Heart Association, Army of Women, Avon Foundation for Women, BreastCancer.org, Dana-Farber Cancer Institute, Massachusetts General Hospital, Michael J Fox Foundation for Parkinson’s Research, Icahn School of Medicine at Mount Sinai, Penn Medicine, University of Oxford, University of Rochester Medical School, Sage Bionetworks, Stanford Medicine, Susan G Komen, UCLA Jonsson Comprehensive Cancer Center, Weill Cornell Medical College and Xuanwu Hospital Capital Medical University.

So what can ResearchKit do for the researcher? The ResearchKit developer framework is divided into three primary modules: Surveys, Informed Consent, and Active Tasks. A touch-based signature panel allows an app user to give informed consent right on their mobile device. The survey module provides a builder tool for specifying types of questions and answers, akin to SurveyMonkey, Google Forms, or Wufoo. The Active Tasks module is where active data collection begins.


ResearchKit Signature Panel and Activity Completion

With an active task, ResearchKit lets the user complete a physical task while the iPhone’s sensors perform active data collection. This data can then be securely transmitted to the cloud for inclusion in the study. For example, Stanford’s MyHeart Counts app has already enrolled tens of thousands of participants in just the short time since its launch in March, a recruitment pace traditional clinical trials rarely match.

This is just the beginning. Data collection will not be limited to the sensors native to the iPhone. External devices, communicating over bluetooth for example, can provide more data such as heart rate, temperature, and weight.

According to VentureBeat, “Google also announced last year that it is developing a contact lens that can measure glucose levels in a person’s tears and transmit these data via an antenna thinner than a human hair.” The New York Times also reports this device is being developed by Google in partnership with Novartis.

Glucose Monitoring Smart Contact Lens