Machine learning applies to so many things. When Terry Gross spends an hour talking about artificial intelligence on Fresh Air, you know robots and machine learning are top of mind in the public psyche. In her interview with John Markoff, author of “Machines of Loving Grace,” he cites a broad definition of AI: “A robot can be … a machine that can walk around, or it can be software that is a personal assistant, something like Siri or Cortana or Google Now.”
Modern artificial intelligence and machine learning techniques apply so broadly that they will touch every aspect of our everyday lives (and in some areas already do). At Concrete Interactive, we have chosen to focus on human motion learning—to capture, characterize, and make recommendations on how to improve any movement a human can perform. And just think of how many movements a person can do!
Owing to over a decade of experience with sensors, data acquisition, and control systems, our machine learning techniques are specialized to detect nuanced motion, to see through the noisy signals that sensors produce, and to identify, count, and judge accuracy on thousands of different motions. Weightlifting form, the beautifully aligned yoga pose, the perfect golf swing, and medical-grade fall prediction are all examples of how machine learning can help people perform better, improve faster, and reduce injury.
Beyond Steps & Sleep
What sensors can we use? To start, the accelerometers already in the smartphone in your pocket or purse! And the wearable on your wrist, the one clipped to your shoe, and the one coming that none of us even knows about yet.
Our platform is designed to integrate with many wearables.
Think of Photoshop, where you can edit images captured with many different cameras, at many different resolutions, in many formats. Similarly, wearables may have different numbers of sensors, they may be worn on different parts of the body, but they are all measuring the same motions we wish to track.
“Even rocks compute,” a friend once remarked. I won’t comment on either of our sobriety, but at the time I just shot him a knowing look and nodded contemplatively. But what did he mean?
Maybe he meant that the reflections and refractions off rocks, or even better, crystals, perform a sort of mapping. The light waves come in from one direction, then reflect, refract, scatter, and project on to surrounding surfaces. Sometimes these light projections are beautiful, regular, even useful.
Light, being a vibration of electromagnetic radiation in the optical part of the spectrum, has a far higher frequency than sound waves, but like light, sound is a vibrational energy that imprints change on the resonance of objects around it. Taken this way, the quandary of the tree falling in the forest resolves in the affirmative, because someone is always there to witness it, namely the tree and the forest itself.
The field of archeoacoustics explores the innate acoustical properties of artifacts via audio analysis—a study of the essential vibrational qualities of artifacts and environments. Examining a 1969 claim by Richard G. Woodbridge III, the Mythbusters report it is in fact not possible to resurrect ancient Roman voices from the grooves carved by their tools as they spun around a potter’s wheel 6,500 years ago. Maybe not yet, anyway.
The vibrations of everything from our voices to our footsteps are like tiny tremors impacting the matter around us. And whether this matter is capable of recording them may have more to do with our playback technology than with the indisputable fact that what we do influences the things around us.
As early as circa 1902, mathematician Charles Sanders Peirce wrote, “Give science only a hundred more centuries of increase in geometrical progression, and she may be expected to find that the sound waves of Aristotle’s voice have somehow recorded themselves.”
A rock, a tree, the earth—animals used these as computational devices, via echolocation, long before humans evolved. Zoologists Roger Payne and Douglas Webb calculated that before ship traffic noise permeated the oceans, tones emitted by fin whales could have traveled as far as four thousand miles and still have been heard against the normal background noise of the sea. Whales, bats, dolphins, and recently even some blind humans use echolocation to “see” objects by listening to reflections of sounds they themselves emit. The computation is performed by the reflecting objects, which transform the emitted sound energy, shifting its frequency and waveform to imprint it with the characteristics of the objects that influenced it, such as distance, size, hardness, maybe even “tastiness”?
Is there an essential vibrational signature to all things that can be elucidated through computation? Yes, there is. And the next part of this series will introduce the modern machine learning techniques and sensor technologies being employed to further illuminate the useful (and perhaps mystical) properties in everything around us.
Brett Bond is President of Concrete Interactive, a software development and machine learning firm based in San Francisco and Santa Monica. When not writing software, Brett enjoys practicing yoga, preparing the nursery for his soon-to-arrive daughter, and building large-scale fire displays.
The challenge is to compare two pictures containing faces and decide whether they are pictures of the same person or of two different people. In the experiment, 6,000 pairs of images were examined, both by humans and by algorithms from teams around the world. Three teams (Baidu, Google, and the Chinese University of Hong Kong) achieved better-than-human recognition performance. The Baidu team made just 9 errors out of the 6,000 examples.
Here is a slide from Professor Ng’s talk with images of the ones they got wrong:
You may notice that the top-left image pair is of movie actress Nicole Kidman, which the Baidu system incorrectly classified as two different people. What may be less obvious is that the second image of Ms. Kidman was taken from her film “The Hours,” in which she is wearing a prosthetic nose.
Here are the overall results from implementations by teams throughout the world. The dashed line indicates human performance at the same task. People typically think of recognizing faces as an innately human skill, but once again we can be surprised: machine learning algorithms are now capable of equaling or outperforming humans.
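The talk doesn’t spell out how each team’s system works internally, but modern face verification generally maps each image to a numeric feature vector (an embedding) using a trained network, then thresholds the similarity between the two vectors. Here is a minimal sketch of that final decision step; the embeddings and threshold below are made-up stand-ins, not values from any real system:

```python
# Sketch of the "same person or not?" decision: compare two face embeddings
# by cosine similarity and apply a threshold. Real embeddings come from a
# trained network and have hundreds of dimensions; these are illustrative.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(embedding_1, embedding_2, threshold=0.8):
    """Declare a match when the embeddings point in nearly the same direction."""
    return cosine_similarity(embedding_1, embedding_2) >= threshold

# Made-up embeddings: two photos of one person, one photo of someone else.
anna_photo_1 = [0.9, 0.1, 0.3]
anna_photo_2 = [0.85, 0.15, 0.35]
someone_else = [0.1, 0.9, 0.2]

print(same_person(anna_photo_1, anna_photo_2))  # True
print(same_person(anna_photo_1, someone_else))  # False
```

The hard part, of course, is learning an embedding where two photos of the same face land close together despite lighting, age, or a prosthetic nose; the comparison itself is the easy step.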
In his talk, Professor Ng credits two major factors for the vast improvements in machine learning technology over the past five years. As an analogy, think of launching a rocket into orbit: we need both 1) a giant rocket engine and 2) lots of rocket fuel.
The rocket engine in his scenario is the incredible computational performance improvements brought to us by GPU technology. The fuel then, is access to huge amounts of data, coming to us from the prevalence of internet connected sensors, online services, and our society’s march toward digitization.
To perform this astounding face comparison judgment, algorithms are trained on facial data. The flow chart in the image above shows that if the algorithm is not performing well on the training data, we need a more powerful rocket engine.
The second part of the flowchart above illustrates that if the algorithm is learning the training data quite well, but performing poorly when presented with new examples, then perhaps more “rocket fuel” is required, so gathering more training data would be the logical approach to improve the system.
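The flowchart’s decision logic can be sketched in a few lines. The thresholds and wording here are illustrative, not taken from the talk:

```python
# A sketch of the diagnostic flowchart described above: compare error on the
# training data with error on held-out data, then decide whether you need a
# bigger "rocket engine" (more compute / a bigger model) or more "rocket
# fuel" (more training data). The acceptable_error threshold is illustrative.

def diagnose(train_error, validation_error, acceptable_error=0.05):
    """Return the next step suggested by the train/validation error pattern."""
    if train_error > acceptable_error:
        # Underfitting: the model cannot even fit the data it has seen.
        return "bigger engine: train a larger model on faster hardware"
    if validation_error > acceptable_error:
        # Overfitting: fits training data well but fails on new examples.
        return "more fuel: gather more training data"
    return "ship it: performance is acceptable on both sets"

print(diagnose(0.20, 0.25))  # poor on training data -> bigger engine
print(diagnose(0.01, 0.15))  # good on training, poor on new data -> more fuel
print(diagnose(0.01, 0.03))  # good on both -> ship it
```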
Professor Ng compares the benefits of high performance computing with those of cloud computing, stating that better training performance may be achievable with, say, 32 GPUs in a co-located rack than with a larger number of servers running in the cloud. His reasoning is that communication latency, server downtime, and other failures are more prevalent in cloud-based systems because they are spread across more machines with more network connections.
Machine learning has been demonstrated to be good at many things, and the recent improvements are not limited to face comparison: in this same talk, Baidu’s improved speech recognition system was shown to perform well even in noisy environments.
Now that the 2015 HIMSS conference in Chicago has wrapped up, I will try to summarize the trends I observed and how Concrete Interactive fits in. It is clear that secure text messaging is a much-needed feature in healthcare. At least two established companies are vehemently pursuing it, TigerText and Imprivata (via their CoreText feature), along with several startups presenting at HIMSS: Diagnotes, MyCareText, and Cotap.
As we know, TigerText just closed a $21M VC round. They claim to have 300 enterprise customers, mostly in healthcare, including 4 of the largest for-profit hospital chains.
What isn’t clear to me is whether secure messaging is a separate app or really a feature to be used within the EHR (Electronic Health Record) apps that health companies already have. Imprivata’s CoreText, for example, is positioned more as a feature of their larger system.
However, as a separate app, secure texting following the BYOD (bring your own device) model (yes, this is literally the way they talk about it) is a very attractive feature that many people want, and it could provide a solid scenario for deeper involvement or integration at the custom development level.
Another clear area of expansion is medical device connectivity. For example, Qualcomm Life bought Healthy Circles, a deal supposedly in the $375M range. It is an iPhone app for continuous care: the patient goes home and plugs a local bluetooth/3G router into a wall outlet, and all the continuous care devices connect through it: blood pressure monitors, step counters, pulse, temperature, and glucose monitors (Class 2 FDA approved medical devices), even home ventilators and other Class 3 (high risk) devices.
The physician gets a portal. The patient can view and augment the data on the iPhone, though the app isn’t even required. This pattern is repeated over and over by other companies: device, connectivity, app, cloud-based portal.
The industry is only just awakening to the fact that data science will play a big role. Channels of information such as medical devices and apps are beginning to provide the big data they will use. I did make a nice connection at Wolters Kluwer. They are already doing rule-based processing to de-dupe health data. So if a doctor writes COD, they expand that to codeine. But they want to improve their systems via natural language processing (NLP).
I also met with Piers Nash from the University of Chicago at a Genomics SIG. He’s working with the NCI and already has 6 petabytes (PB) of genomic data from more than 10,000 patients online and available to the public (after a straightforward application process). He’s looking to host algorithms next and run compute cycles from virtual machines (PaaS style, like AWS). One basic problem they are trying to improve is referred to as Single Nucleotide Variation calling (SNV calling). Each person’s DNA is slightly different, because we are different people; the trick is to identify which nucleotides (DNA letters) differ because of normal genetic variation between people versus mutations that cause cancer. One interesting aspect of this problem is that as algorithms improve, past recommendations may become invalid, and there may be a liability aspect at work. Samsung Genomics was also in attendance at this meeting. They are launching an initiative to sequence tumors and make recommendations, but it’s similar to others already out there, such as Paradigm.
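To make the SNV-calling idea concrete, here is a deliberately toy sketch: compare a sample sequence against a reference and report single-letter differences. Real callers work on millions of noisy, aligned sequencing reads and use statistical models to separate true variants from sequencing error; the sequences below are invented:

```python
# Toy illustration of SNV calling: find positions where a sample's DNA
# sequence differs from the reference by a single nucleotide. Real pipelines
# must also decide which differences are benign variation between people
# and which are disease-causing mutations.

def call_snvs(reference, sample):
    """Return (position, reference_base, sample_base) for each mismatch."""
    assert len(reference) == len(sample), "sequences must be aligned"
    return [
        (i, ref_base, sample_base)
        for i, (ref_base, sample_base) in enumerate(zip(reference, sample))
        if ref_base != sample_base
    ]

reference = "ACGTACGTAC"
sample    = "ACGTACCTAT"

for position, ref_base, alt_base in call_snvs(reference, sample):
    print(f"position {position}: {ref_base} -> {alt_base}")
```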
Also at the genomics meeting was Michael Hultner, Chief Scientist for Lockheed Martin’s health and life sciences division. They are bidding, as are many others, for the UK’s 100,000 Genomes Project. He says their expertise lies in integrating many technologies, and he thinks they are well positioned in the health space (not just outer space). It’s fascinating to see the kinds of companies entering or expanding in this market.
Big Picture Strategy
The healthcare IT space is rapidly expanding as healthcare laws such as Meaningful Use Stage II come into effect, increasing incentives to leverage advances in the technology. There are many land grabs playing out. Any space worth entering will have competition, but based on my assessment of the overall quality in the space, I believe Concrete Interactive is well positioned to innovate and stand up great apps against much larger players.
This year’s Healthcare Information and Management Systems Society conference in Chicago is a veritable candy store of high-tech healthcare. Yes the smart hospital beds and baby monitoring bracelets are fascinating. But perhaps the highest impact, most impressive technology on offer is what you can’t see—the software. Though it has about as much shazam as a bed pan, the coming health communication infrastructure known as HL7 FHIR (pronounced like “fire”) will allow access to the coveted Electronic Health Record (EHR) via many new applications and devices.
Also very impressive and a bit more visible were the beautiful mobile workflow apps like Nextgen’s “Go for iPad.” What I like about this electronic health record and dictation recording tool is that it does not do everything. The heavy lifting of setting up records is done on the desktop (templating in healthcare parlance), and on-the-go actions such as dictation and prescription refills, can be executed in short order on the iPad.
I also learned that Greenway, a software provider of Practice Management (PM) and EHR tools, has an app marketplace (think iTunes). Topping their offering is Phreesia, a check-in app for iPad that can replace all that form filling in the doctor’s office with a few taps of a touchscreen.
The Internet of Things (IoT) was also present, from Tyco’s tracking bracelets for babies and elders to decibel-logging sensors that monitor noise levels. Quietyme, a HealthBox and Gener8tor accelerator graduate, establishes a mesh network of small volume monitors in each hospital room, corridor, nurses’ station, etc. They perform some fancy data analytics (in partnership with Miosoft and Zero Locus). CEO John Bialk says that by comparing noise levels in patient rooms with patient surveys, they can document and predict which noisy areas are having a negative impact on healing. And from Ascom, voice over internet protocol (VoIP) portable devices are like little cordless phones that nurses can use on the local area network (LAN). Their Android device even supports internet instant messaging.
Thank you to all those who visited with Concrete Interactive, and those who described their wonderful products, software, services and innovation.
Concrete Interactive is available for meetings at HIMSS 2015, the healthcare IT conference in Chicago this April 12-16.
And I know you’ll be almost as excited to learn that for the first time this year, Amazon will be making a full-fledged appearance at HIMSS. What’s even more remarkable is that some of the leaders of the AWS HIPAA compliance team, including Chris Crosbie (HIPAA Solutions Architect), Jessie Beegle (Business Development Manager for the Healthcare Industry), and Kenzie Kepper (AWS Healthcare Marketing Team), will be present and accepting meetings.
In my experience with the Amazon Popup Loft in San Francisco, the AWS team is very giving of their time and expertise. These aren’t your typical Apple “Genius” types who fall into a prescribed script about fixing your iPhone. The solution architects and technical team members who are available at the Popup Loft are the actual people with inside technical knowledge of the AWS service, and they have been happy to dive into our application details.
So, how does one implement a HIPAA compliant software application on Amazon Web Services? Back when Concrete Interactive built our first HIPAA app in 2012, assigning responsibility across the network infrastructure was quite a challenge. Nowadays, Amazon has drawn a bright line at the hypervisor, the piece of network virtualization software that manages a particular application’s server. Under their shared responsibility model, from the hypervisor outward, throughout the rest of the AWS network, securing PHI is Amazon’s responsibility.
Specifically on EC2, you must use a dedicated instance. This comes with a higher monthly fee, but it’s peanuts compared with building your own compliant datacenter.
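As a sketch, requesting that single-tenant (dedicated) hardware with the AWS CLI comes down to one placement setting. The AMI, instance type, key, and subnet below are placeholders you would replace with your own, and a dedicated instance alone does not make an app HIPAA compliant; the signed BAA and the rest of the shared responsibility model still apply:

```shell
# Launch an EC2 instance on dedicated (single-tenant) hardware.
# Replace the placeholder IDs with values from your own AWS account.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m4.large \
    --key-name my-keypair \
    --subnet-id subnet-xxxxxxxx \
    --placement Tenancy=dedicated
```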
According to Amazon’s HIPAA compliance video, over 600 companies have signed their Business Associate Agreement (including us!). This agreement allows our HIPAA compliant apps to be validated, and it spells out where responsibility for PHI lies depending on which side of the hypervisor line it is used, stored, or transferred.
If you are interested in meeting with Concrete Interactive at HIMSS 2015, please drop us a line. In partnership with Amazon AWS, and FDA Compliance Advisor David Nettleton, we hope to shed light on any of your HIPAA, healthcare, web or mobile app development questions.
Building HIPAA compliant software has never been easy, and modern apps, served from the cloud and enabled for mobile devices, present even greater challenges. But imagine the potential for medical research, given the hundreds of millions of smartphones deployed globally, each equipped with dozens of sensors.
Last year, when Apple introduced HealthKit for developers, the iPhone leapt suddenly into the ranks of integrated health trackers, along the lines of the Fitbit and Jawbone activity trackers. But the iPhone has one major advantage over most other health tracking devices: built-in internet connectivity.
Whereas Fitbit, Jawbone, Nike Plus, wifi-enabled scales, blood pressure monitors, and similar devices require users to complete a multi-step setup process, the iPhone is ready to send useful data, such as the number of steps walked or run and flights climbed, along with many other sensor events, straight to the cloud.
By providing the iOS Health app for free as part of iOS 8, Apple has given consumers a powerful new toolkit for tracking health data. The only problem is, this data is unavailable to researchers. There has been no way for researchers, doctors, hospitals or health administrators to access health data collected via HealthKit, even if a patient were willing to give consent. Until now…
ResearchKit, officially launching next month, provides a simplified, streamlined user interface framework for health apps to perform HIPAA-compliant clinical trial consent. According to Apple’s ResearchKit website, “With a user’s consent, ResearchKit can seamlessly tap into the pool of useful data generated by HealthKit — like daily step counts, calorie use, and heart rates — making it accessible to medical researchers.”
Apple has partnered with some impressive names in medical research, listing these on its website: The American Heart Association, Army of Women, Avon Foundation for Women, BreastCancer.org, Dana-Farber Cancer Institute, Massachusetts General Hospital, Michael J Fox Foundation for Parkinson’s Research, Icahn School of Medicine at Mount Sinai, Penn Medicine, University of Oxford, University of Rochester Medical School, Sage Bionetworks, Stanford Medicine, Susan G Komen, UCLA Jonsson Comprehensive Cancer Center, Weill Cornell Medical College and Xuanwu Hospital Capital Medical University.
So what can ResearchKit do for the researcher? The ResearchKit developer framework is divided into three primary modules: Surveys, Informed Consent, and Active Tasks. A touch-based signature panel allows an app user to perform informed consent right on their mobile device. The survey module provides a builder tool to specify types of questions and answers, akin to SurveyMonkey, Google Forms, or Wufoo. The Active Tasks module is where active data collection begins.
With an active task, ResearchKit lets the user complete a physical task while the iPhone’s sensors collect data. This data can then be securely transmitted to the cloud for inclusion in a study. For example, Stanford’s MyHeart Counts app has already enrolled tens of thousands of participants in the short time since its March launch, a recruitment pace unequaled by traditional clinical trials.
This is just the beginning. Data collection will not be limited to the sensors native to the iPhone. External devices, communicating over bluetooth for example, can provide more data such as heart rate, temperature, and weight.
According to VentureBeat, “Google also announced last year that it is developing a contact lens that can measure glucose levels in a person’s tears and transmit these data via an antenna thinner than a human hair.” The New York Times also reports this device is being developed by Google in partnership with Novartis.
Machine learning will have intense and amazing impacts on our lives. You may have heard the hype, or the fear mongering. Now let’s take a closer look at what this technology has to offer, and if there is really anything to fear.
First of all, machine learning isn’t just one thing but a broad set of algorithms, tools, and techniques, combined with advances in computer processing and refined (human) expertise in making decisions based on available data.
There is more data available now than ever before, because modern sensor technology has rapidly decreased in price, size, and power consumption (witness everything from the iPhone to your car to your washing machine). Revolutionary developments over the past two decades in 3D graphics processors, called Graphics Processing Units (GPUs), have made video games and movies more realistic. Interestingly, the same mathematics these GPUs accelerate, matrix operations, also applies to machine learning.
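To see what “matrix operations” means here, consider that one layer of a neural network is essentially a matrix of learned weights multiplied by a vector of inputs, the same arithmetic a GPU performs across thousands of cores at once. A plain-Python version, with arbitrary numbers:

```python
# The workhorse math behind both 3D graphics and neural networks is the
# matrix multiply. This loop-based version shows the operation a GPU
# parallelizes across thousands of cores; the values are arbitrary.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), returning m x p."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

# One layer of a neural network is just weights times inputs.
weights = [[0.5, -1.0],
           [2.0,  0.5]]
inputs = [[3.0],
          [1.0]]

print(matmul(weights, inputs))  # [[0.5], [6.5]]
```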
Finally, today’s learning algorithms, including deep neural networks and support vector machines, are more advanced and easier to use than ever. Together, the algorithms, the GPUs, and the data allow a kind of pattern recognition and inference we call machine learning. Another broad term for the use of this technology is “data science.” In short, machine learning is a new tool for humanity to gain insight into the patterns that exist everywhere around us. So what is it good for?
The human brain is a master of pattern recognition. Imagine how complex the tiny air movements we call sound must be, and yet speaking and understanding our native tongue is remarkably simple. How could a machine learn such a thing? Yet today, tools like Siri, Google Voice, and Nuance can convert speech to text, even if full translation and understanding remain out of reach.
The power of machine learning lies in algorithmic ability to find patterns in data, in much the same way that we find patterns in images we see, sounds we hear, behaviors we notice. These tools will touch every area of our lives, much the way the invention of the microscope gave us new insights that changed our view of the world. Insight. Whether used for good or for ill, machine learning algorithms are tools that provide insight.
Artificial intelligence and robots taking over the world are concerns quite a few steps removed from the kind of data analysis machine learning algorithms provide. Let’s look more deeply at a simple machine learning problem to understand why. It’s a classic: identifying three common species of the iris flower, as pictured below: Iris Versicolor, Iris Virginica, and Iris Setosa.
We can learn to identify these species fairly reliably, and so can a machine learning algorithm. We don’t even need photographs, just a ruler: we measure a few characteristics, such as petal length and sepal length (the sepal is the flower’s enclosure). Voila, we have data! Here is a link to an actual iris data set.
Looking at images of a single iris of each species, it’s fairly easy to see that one of these flowers is not like the others (the Setosa). And while the Versicolor and Virginica may look more similar, a quick graph shows that, as groups, they too are different enough to separate, with the Setosa separated even further. What is learning? Differentiating like from unlike, and identifying new examples as similar to what we already know. We learn language by separating the sounds we hear into vowels, consonants, phonemes, words, phrases, and meanings. We learn the laws of physics (at first) by experimenting with water, blocks, and the ground, differentiating a nice full water glass from a spill, a stack of blocks from a mess, and a stroll from, again, a spill. Differentiation is a kind of learning.
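A minimal sketch of this kind of differentiation: classify an iris from two measurements using a nearest-centroid rule. The sample values below are illustrative numbers in centimeters, typical of published iris measurements, not rows from the linked data set:

```python
# Learning by differentiation: compute the average (centroid) of each
# species' measurements, then classify a new flower by whichever centroid
# is closest. Measurements are (sepal length, petal length) in cm.

samples = {
    "setosa":     [(5.1, 1.4), (4.9, 1.4), (4.7, 1.3), (5.0, 1.5)],
    "versicolor": [(7.0, 4.7), (6.4, 4.5), (6.9, 4.9), (5.5, 4.0)],
    "virginica":  [(6.3, 6.0), (5.8, 5.1), (7.1, 5.9), (6.5, 5.8)],
}

def centroid(points):
    """Mean point of a list of (x, y) measurements."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

centroids = {species: centroid(pts) for species, pts in samples.items()}

def classify(measurement):
    """Predict the species whose centroid is closest to the measurement."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda s: dist2(centroids[s], measurement))

print(classify((5.0, 1.5)))   # short petals -> setosa
print(classify((6.6, 5.8)))   # long petals  -> virginica
```

The short petals of the Setosa make it trivially separable, just as the graph suggests; Versicolor and Virginica overlap more, which is exactly where a real algorithm earns its keep.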
It is just that kind of learning that machine learning algorithms perform. Not thinking, just the ability to use the data an algorithm has seen to make predictions about examples it hasn’t yet seen. Obviously, there’s a lot more to it than that. Stay tuned for more posts, where I will argue both that machine learning will be an incredible tool for humanity and that it won’t lead to a robot president.
When beginning a mobile, web or other app software project, keep in mind it’s more like adopting a pet than building a product. Software needs continual care, maintenance and feature development. Users expect updates — whether to take advantage of the latest mobile Operating System release, to fix a bug that somehow slipped through Quality Assurance, or simply to add features. Building an app is not a “one-time and you’re done” operation.
Product / Market Fit
Consult the experts, whether Marc Andreessen (co-creator of the first web browser), Steve Blank of Stanford, Paul Graham of Y Combinator, or Sean Ellis of LogMeIn; they all agree it’s about getting the right product to the right people. Your app’s features should therefore depend heavily on who the app is marketed to. Achieving this takes a lot of the “get out of the building” thinking promoted by Eric Ries in his book The Lean Startup. Interacting with your market as soon as possible is paramount.
Agile software development process, user-centric design, and Lean thinking can all help you discover what features to build, but all the theory in the world won’t help unless you learn from your market, measure feedback, and build the features that users desire: you must go through this “build, measure, learn” cycle a few times to get it right.
Accrue Technical Debt Wisely
You may be initially wowed by a software team that can deliver features fast and furiously, especially when the features look cool and progress comes swiftly, but in just a few short months, a mobile app project can grind to a halt. Why? At the beginning of a project, software developers often build features without building the surrounding infrastructure to support them. It’s like building a glorious bathroom, complete with steam shower, in a house with no plumbing.
The industry term for this is “technical debt.” While this form of indebtedness can get you a quick jolt of progress, it can also come back to bite you. For quick experiments, technical debt may be the right option, but for meaningful, high-quality app development, building it right means a robust software architecture, infrastructure to support scaling to a massive audience, and putting in place the security necessary to protect both your users and your investment.
There is a reason that, on average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted. In a survey of 600 people closely involved with software projects, 78% of respondents reported that the “Business is usually or always out of sync with project requirements.” It is extremely difficult to correctly estimate large software projects. So, smart teams have stopped trying.
But without an estimate, how will your app hit deadlines and a budget? The secret is, once again, in the agile process: you can correctly estimate software deliverables over the short term. The agile software development process promotes short “sprints,” and we suggest a one-week period. This way, your team releases a fully functional, complete product every week! And since you are learning from your users, what you do over the next few months will be in direct response to their usage and feedback. Think of a product initiative as an experiment, where the goal is to learn what a market wants and deliver it.
Asking your users directly what they want (or don’t want) is a pitfall to avoid. You are the innovator, and you understand the possibilities for future directions of your product better than anyone, including your users, so asking them is asking for trouble. Instead, simply observe them. Focus groups are notorious not only for their expense but also for their “false sense of science.” Studying users behind a one-way mirror may have worked well in that Mad Men episode, but for software, the “Starbucks Method” is about a million times cheaper and much more insightful: get into the cafe and hand out some twenties, or buy people lunch in exchange for watching them use your product.
There is no harm in providing a few in-app survey questions. Take a look at Qualaroo for how to do this well. For building community, GetSatisfaction is a good bet. Bottom line: Target your questions to the specific user experience, not the overall product.
As the recent “Downpocalypse” of the Apple Developer Center demonstrated (no new iPhone apps could be created for over a week!), hitching your app to a single horse is a dangerous move. Though an initial iOS-only release makes sense in many cases, building with cross-platform mobile technologies like HTML5, Adobe PhoneGap, Ludei, or Unity allows your app to diversify its bets, placing expensive native features only where they’re needed. This way you can release on iOS, Android, the web, and even PC, Mac, and gaming consoles.
Choose the Right Team
Select a development team that will maximize your budget and give the best value. Sometimes the cheapest guy on Craig’s List, ODesk, eLance, or RentACoder is the right way to go — such as with one-off experiments or when a quick and dirty initial draft of an app on a shoestring budget is expected to be tossed and re-written from scratch — but for most projects, it makes sense to proceed with a team that can provide end-to-end services, engage with your users, and help guide you strategically to that nirvana of mass adoption.
Look around you and you will see thousands of “things” within your immediate vicinity. Your keychain. Your desk chair. Your favorite coffee mug filled with Italian Roast coffee. Your dying ficus plant. With today’s technology, there is no reason these things cannot communicate with you, in real time.
Your plant should tell you not only that it needs water, but how much, and where to position it during the day
Your coffee mug should know what kind of roast you want to drink today
You should be able to find your keys at a moment’s notice because you have a bad habit of misplacing them the moment you are about to go somewhere
Your desk chair should automatically adjust itself when it detects you are sitting with poor posture (reminder: stop slouching)
As luck would have it, there are technologies for each one of these things, being built. Right. Now. (See for yourself: Plant | Coffee | Keys | Chair).
The Hardware (R)evolution
Cliché though it is, the world we live in is becoming more connected than ever before. Technologies like RFID, NFC, and ZigBee, which were still in research and development only a few years ago, are enabling the next generation of connected devices in a cost- and energy-efficient way. In fact, consumer goods that weren’t connected 18 months ago are now online. Recent examples include:
Connected lightbulbs that let you change their hue and control the lighting ambiance of a room via a mobile app
As enabling technologies become cheaper and smaller, companies will be forced to innovate and think about how their offline products can get online.
Getting offline products into the 21st century is only the tip of the iceberg. Enhancing these products with connected technologies has to transform the product experience, be personal, and have utility. The bar for product experiences is so high that failing to execute against these objectives will result in a gimmicky, failed experience.
For example, a shoe company may want to create a running shoe with a GPS Dot. These “online shoes” should not only track where (and how long) the user ran, but also provide actionable insights based on what the shoe company already knows about you: recommend running trails based on your running style and preferences, alert you when your friends are close by, give you a discount when you walk by their store, and let you know how hard to run based on your body fat and weight goals.
Utility vs. Privacy
Privacy is generally a topic of concern as more devices come online and become “all knowing.” As we’ve seen from the internet and media today, and in light of recent NSA privacy concerns, users are willing to give up certain liberties to connect with friends (Facebook), share their thoughts (Twitter), use free email (Google), or make free international calls over the internet (Microsoft/Skype). We believe it’s important for companies contemplating an online product strategy to understand these implications and to balance the utility of an online product with the user’s privacy and the company’s ethics and values.