IoT Wearables: Beyond Steps and Sleep

Machine learning applies to so many things. When Terry Gross spends an hour talking about artificial intelligence on Fresh Air, you know robots and machine learning are top of mind in the public psyche. In her interview with John Markoff, author of “Machines of Loving Grace,” he cites a broad definition of AI: “A robot can be … a machine that can walk around, or it can be software that is a personal assistant, something like Siri or Cortana or Google Now.”

Modern artificial intelligence and machine learning techniques apply so broadly that they will touch every aspect of our everyday lives (and some already do). At Concrete Interactive, we have chosen to focus on human motion learning—to capture, characterize, and make recommendations on how to improve any movement a human can perform. And just think of how many movements a person can do!

Owing to over a decade of experience with sensors, data acquisition, and control systems, our machine learning techniques are specialized to detect nuanced motion, to see through the noisy signals that sensors produce, and to identify, count, and judge accuracy on thousands of different motions. Weightlifting form, the beautifully aligned yoga pose, the perfect golf swing, and medical-grade fall prediction are all examples of how machine learning can help people perform better, improve faster, and reduce injury.

Beyond Steps & Sleep

What sensors can we use? The accelerometers already in the smartphone in your pocket or purse, to start! And the wearable on your wrist, the one clipped to your shoe, and the ones coming that none of us even know about yet.

Our mission is to capture, characterize and improve human motion.


Our platform is designed to integrate with many wearables.

Think of Photoshop, where you can edit images captured with many different cameras, at many different resolutions, in many formats. Similarly, wearables may have different numbers of sensors, they may be worn on different parts of the body, but they are all measuring the same motions we wish to track.

Garmin Vivosmart Fitness Tracker

The Unseen Computational Force Within All Things

“Even rocks compute,” a friend once remarked. I won’t comment on the state of our sobriety at the time; I just shot him a knowing look and nodded contemplatively. But what did he mean?

Maybe he meant that the reflections and refractions off rocks, or even better, crystals, perform a sort of mapping. The light waves come in from one direction, then reflect, refract, scatter, and project on to surrounding surfaces. Sometimes these light projections are beautiful, regular, even useful.

Light reflections perform a computational non-linear optical transfer function.

Light, being a vibration of electromagnetic radiation in the optical part of the spectrum, has a much higher frequency than sound waves, but like light, sound is a vibrational energy that imprints change in the resonance of objects around it. Taken this way, the quandary of the tree falling in the forest resolves in the affirmative, because someone is always there to witness it: namely, the tree and the forest itself.

The field of archeoacoustics explores the innate acoustical properties of artifacts via audio analysis: a study of the essential vibrational qualities of artifacts and environments. Examining a 1969 claim by Richard G. Woodbridge III, the Mythbusters report that it is in fact not possible to resurrect ancient Roman voices from the grooves carved by their tools as they spun around a potter’s wheel 6,500 years ago. Maybe not yet, anyway.

The vibrations from our voices to our footsteps are like tiny tremors impacting the matter around us. And whether this matter is capable of recording them may have more to do with our playback technology than with the indisputable fact that what we do influences the things around us.

Can a potter’s tool inadvertently record sounds, the way a stylus carves grooves into a vinyl record?

Circa 1902, mathematician Charles Sanders Peirce wrote, “Give science only a hundred more centuries of increase in geometrical progression, and she may be expected to find that the sound waves of Aristotle’s voice have somehow recorded themselves.”

A rock, a tree, the earth: animals have used these as computational devices via echolocation since long before humans evolved. Zoologists Roger Payne and Douglas Webb calculated that before ship traffic noise permeated the oceans, tones emitted by fin whales could have traveled as far as four thousand miles and still have been heard against the normal background noise of the sea. Whales, bats, dolphins, and recently even some blind humans use echolocation to “see” objects by listening to reflections of sounds they themselves emit. The computation is performed by the reflecting objects, which transform the sound energy as it was emitted, shifting frequency and waveform to imprint this energy with the characteristics of the objects that have influenced it: distance, size, hardness, maybe even “tastiness”?

Illustration of dolphin finding food by echolocation.


Is there an essential vibrational signature to all things that can be elucidated through computation? Yes, there is. And the next part of this series will introduce the modern machine learning techniques and sensor technologies being employed to further illuminate the useful (and perhaps mystical) properties in everything around us.

Brett Bond is President of Concrete Interactive, a software development and machine learning firm based in San Francisco and Santa Monica. When not writing software, Brett enjoys practicing yoga, preparing the nursery for his soon-to-arrive daughter, and building large-scale fire displays.


Nicole Kidman’s Prosthesis Fools Better-Than-Human Face Recognition Algorithm

Last month, in his keynote talk at the NVIDIA GPU Technology Conference, Andrew Ng (Baidu Chief Scientist, Stanford professor, Google Brain team founder) described face comparison technology he and his team at Baidu developed. If you haven’t taken his Coursera machine learning course, that’s OK; this post doesn’t assume technical knowledge about machine learning.

The challenge is to compare two pictures containing faces and decide whether they are pictures of the same person or of two different people. In the experiment, 6,000 pairs of images were examined, both by humans and by algorithms from teams around the world. Three teams, from Baidu, Google, and the Chinese University of Hong Kong, achieved better-than-human recognition performance. The Baidu team made just 9 errors out of the 6,000 examples.

Here is a slide from Professor Ng’s talk with images of the ones they got wrong:

Face Recognition

You may notice that the top-left image pair is of actress Nicole Kidman, which the Baidu system incorrectly classified as two different people. What may be less obvious is that the second image of Ms. Kidman was taken from her film “The Hours,” in which she is wearing a prosthetic nose.

Nicole Kidman in "The Hours"
Nicole Kidman in “The Hours”

Here are the overall results from implementations by teams throughout the world. The dashed line indicates human performance at the same task. People typically think of recognizing faces as an innately human ability, but once again, machine learning algorithms surprise us by equaling or outperforming humans.

More Human than Human

In his talk, Professor Ng credits two major factors for the vast improvements in machine learning technology over the past five years. As an analogy, think of launching a rocket into orbit: we need both 1) a giant rocket engine and 2) lots of rocket fuel.

The rocket engine in this analogy is the incredible computational performance improvement brought to us by GPU technology. The fuel, then, is access to huge amounts of data, coming to us from the prevalence of internet-connected sensors, online services, and our society’s march toward digitization.

To perform this astounding face comparison judgment, algorithms are trained on facial data. The flow chart in the image above shows that if the algorithm is not performing well on the training data, we need a more powerful rocket engine.

The Talon 2.0 high performance computing system


The second part of the flowchart above illustrates that if the algorithm is learning the training data quite well, but performing poorly when presented with new examples, then perhaps more “rocket fuel” is required, so gathering more training data would be the logical approach to improve the system.
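This engine-vs-fuel decision can be sketched as a simple diagnostic. The function and thresholds below are illustrative only, not Baidu’s actual tooling:

```python
# A minimal sketch of the "rocket" diagnostic: high training error means we
# need a bigger engine (model/compute); low training error but high test
# error means we need more fuel (data). Thresholds are illustrative.

def diagnose(train_error, test_error, acceptable=0.05):
    """Suggest a next step based on training vs. test error rates."""
    if train_error > acceptable:
        # Underfitting: the model can't even fit the data it has seen.
        return "bigger model / more compute (rocket engine)"
    if test_error > acceptable:
        # Overfitting: fits training data but fails on new examples.
        return "more training data (rocket fuel)"
    return "ship it"

print(diagnose(0.20, 0.25))  # high training error
print(diagnose(0.01, 0.15))  # low training error, high test error
print(diagnose(0.01, 0.02))  # both low
```

In practice this decision is made by plotting learning curves rather than comparing two numbers, but the branching logic is the same.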

Professor Ng compares the benefits of high performance computing with cloud computing, and states that better training performance may be achievable with, say, 32 GPUs in a co-located rack than with a larger number of servers running in the cloud. His reasoning is that communication latency, server downtime, and other failures are more prevalent in cloud-based systems because they are spread out across more machines with more network connections.

Machine learning has been demonstrated to be good at many things. The recent improvements are not limited to face comparisons; in this same talk, improvements to Baidu’s speech recognition system were shown to perform well in noisy environments.




Health IT 2015 Summary

Secure Texting

Pager Explosion

Now that the 2015 HIMSS conference in Chicago has wrapped up, I will try to summarize the trends I observed and how Concrete Interactive fits in. It is clear that secure text messaging is a much-needed feature in healthcare. There are at least two established companies aggressively pursuing it, TigerText and Imprivata (via their Cortext feature), and several startups presenting at HIMSS: Diagnotes, MyCareText, and Cotap.

As we know, TigerText just closed a $21M VC round. They claim to have 300 enterprise customers, mostly in healthcare, including 4 of the largest for-profit hospital chains.

What isn’t clear to me is whether secure messaging is a separate app, or really a feature to be used with the EHR (Electronic Health Record) apps that health companies already have. For example, Imprivata’s Cortext is really positioned more as a feature of their larger system.

However, as a separate app, secure texting following the BYOD (bring your own device) model (yes, this is literally the way they talk about it) is a very attractive feature that many people want, and could provide a solid scenario for deeper involvement or integration at the custom development level.

IoT Health

Medical Devices

Another clear area of expansion is medical device connectivity. For example, Qualcomm Life bought Healthy Circles, in a deal supposedly in the $375M range. The result is an iPhone app for continuous care: the patient goes home and plugs a local Bluetooth/3G router into a wall outlet, and all the continuous-care devices connect through it: blood pressure monitors, step counters, pulse, temperature, and glucose monitors (Class 2 FDA-approved medical devices), even home ventilators and other Class 3 (high-risk) devices.

The physician gets a portal. The patient can view and augment the data on the iPhone, though the app isn’t even required. This pattern is repeated over and over by other companies: device, connectivity, app, cloud-based portal.

Machine Learning

Eye Code

The industry is only just awakening to the fact that data science will play a big role. Channels of information such as medical devices and apps are beginning to provide the big data it will use. I made a nice connection at Wolters Kluwer. They are already doing rule-based processing to de-dupe health data; if a doctor writes COD, they expand that to codeine. But they want to improve their systems via natural language processing (NLP).

I also met with Piers Nash from the University of Chicago at a genomics SIG. He’s working with the NCI and already has 6 petabytes (PB) of genomic data from more than 10,000 patients online and available to the public (after a straightforward application process). He’s looking to host algorithms next and run compute cycles from virtual machines (PaaS-style, like AWS). One basic problem they are trying to improve is referred to as single nucleotide variation (SNV) calling. Each person’s DNA is slightly different, simply because we are different people; the trick is to identify which nucleotides (DNA letters) differ because of normal genetic variation between people, versus mutations that cause cancer. One interesting aspect of this problem is that as algorithms improve, past recommendations may become invalid, and there may be a liability aspect at work. Samsung Genomics was also in attendance at this meeting. They are launching an initiative to sequence tumors and make recommendations, but it’s similar to offerings already out there, such as Paradigm.

Also at the genomics meeting was Michael Hultner, the Chief Scientist for Lockheed Martin’s health and life sciences division. They are bidding as are many others for the UK’s 100,000 genomes project. He says their expertise lies in the integration of many technologies and thinks they are well positioned in the health space (not just outer space). So it’s fascinating to see the kinds of companies entering or expanding in this market.

Big Picture Strategy

The healthcare IT space is rapidly expanding as healthcare laws such as Meaningful Use Stage II come into effect, increasing incentives to leverage advances in the technology. There are many land grabs playing out. Any space worth entering will have competition, but based on my assessment of the overall quality in the space, I believe Concrete Interactive is well positioned to innovate and stand up great apps against much larger players.


Bed, not booth babes

This year’s Healthcare Information and Management Systems Society conference in Chicago is a veritable candy store of high-tech healthcare. Yes, the smart hospital beds and baby monitoring bracelets are fascinating. But perhaps the highest-impact, most impressive technology on offer is what you can’t see: the software. Though it has about as much shazam as a bedpan, the coming health communication infrastructure known as HL7 FHIR (pronounced like “fire”) will allow access to the coveted Electronic Health Record (EHR) via many new applications and devices.

An easy to read diagram

Also very impressive, and a bit more visible, were beautiful mobile workflow apps like NextGen’s “Go for iPad.” What I like about this electronic health record and dictation recording tool is that it does not try to do everything. The heavy lifting of setting up records is done on the desktop (“templating” in healthcare parlance), while on-the-go actions such as dictation and prescription refills can be executed in short order on the iPad.

NextGen Go

I also learned that Greenway, a software provider of Practice Management (PM) and EHR tools, has an app marketplace (think iTunes). Topping their offering is Phreesia, a check-in app for iPad that can replace all that form filling in the doctor’s office with a few taps of a touchscreen.

The Internet of Things (IoT) was also present, from Tyco’s tracking bracelets for babies and elders, to decibel-logging sensors that monitor noise levels. Quietyme, a HealthBox and Gener8tor accelerator graduate, establishes a mesh network of small volume monitors in each hospital room, corridor, nurses’ station, and so on, and performs some fancy data analytics (in partnership with Miosoft and Zero Locus). CEO John Bialk says that by comparing noise levels in patient rooms with patient surveys, they can document and predict which noisy areas are having a negative impact on healing. And from Ascom, voice over internet protocol (VoIP) portable devices are like little cordless phones that nurses can use on the local area network (LAN). Their Android device even supports internet instant messaging.

Mesh networking decibel monitor

Thank you to all those who visited with Concrete Interactive, and those who described their wonderful products, software, services and innovation.

Chris Isham from Sidus BioData, Chris Andreski from Ascom, Suzy Fulton from Greenway, Bernard Echiverri from Corepoint, Piers Nash from University of Chicago, Ben Bush from Orchard Software, Mark Lynch from Tyco Security Products, Huey Zoroufy from Quietyme, Matt Ward from Imprivata, Stevie Bahu from Modis Health IT, Michael Hultner from Lockheed Martin, Sungsoo Kang from Samsung.


Amazon AWS at HIMSS 2015

Concrete Interactive is available for meetings at HIMSS 2015, the healthcare IT conference in Chicago this April 12-16.

And I know you’ll be almost as excited to learn that, for the first time, Amazon will be making a full-fledged appearance at HIMSS this year. What’s even more remarkable is that some of the leaders of the AWS HIPAA compliance team will be present and accepting meetings: Chris Crosbie, HIPAA Solutions Architect; Jessie Beegle, Business Development Manager for the Healthcare Industry; and Kenzie Kepper of the AWS Healthcare Marketing Team.

You can request a meeting if interested in learning more about hosting HIPAA applications on AWS. Here’s the signup link:

In my experience with the Amazon Popup Loft in San Francisco, the AWS team is very giving of their time and expertise. These aren’t your typical Apple “Genius” types who fall into a prescribed script about fixing your iPhone. The solution architects and technical team members who are available at the Popup Loft are the actual people with inside technical knowledge of the AWS service, and they have been happy to dive into our application details.

So, how does one implement a HIPAA-compliant software application on Amazon Web Services? Back when Concrete Interactive built our first HIPAA app in 2012, assigning responsibility across the network infrastructure was quite a challenge. Nowadays, Amazon has drawn a bright line at the hypervisor, the virtualization software that manages a particular application’s server. Their shared responsibility model ensures that from the hypervisor outward, throughout the rest of the AWS network, it is Amazon’s responsibility to secure PHI.

AWS shares responsibility for PHI with BAA signatories like Concrete Interactive


AWS specifically supports HIPAA compliant infrastructure through six of their services today: Amazon EC2, Amazon EBS, Amazon S3, Amazon Redshift, Amazon Glacier, and Amazon Elastic Load Balancer.

Specifically on EC2, you must use a dedicated instance. This comes with a higher monthly fee, but it’s peanuts compared with building your own compliant datacenter.

According to Amazon’s HIPAA compliance video, over 600 companies have signed their Business Associate Agreement (including us!). This agreement allows our HIPAA-compliant apps to be validated, and it defines where responsibility for PHI lies, depending on which side of the hypervisor line it is used, stored, or transferred.

If you are interested in meeting with Concrete Interactive at HIMSS 2015, please drop us a line. In partnership with Amazon AWS, and FDA Compliance Advisor David Nettleton, we hope to shed light on any of your HIPAA, healthcare, web or mobile app development questions.

Apple’s ResearchKit Puts Clinical Trials in Your Pocket

Building HIPAA-compliant software has never been easy. Modern apps, served from the cloud and enabled for mobile devices, present even greater challenges. But imagine the potential for medical research, given the hundreds of millions of smartphones deployed globally, each equipped with dozens of sensors.

Last year, when Apple introduced HealthKit for developers, the iPhone leapt suddenly into the ranks of integrated health trackers, along the lines of Fitbit and Jawbone activity trackers. But the iPhone has one major advantage over most other health tracking devices: built-in internet connectivity.

Whereas with Fitbit, Jawbone, Nike+, wifi-enabled scales, blood pressure monitors, and similar devices, users need to complete a multi-step setup process, the iPhone is ready to send useful data about steps walked or run, flights climbed, and many other sensor events straight to the cloud.

The Fitbit Ultra requires additional software installation.


By providing the iOS Health app for free as part of iOS 8, Apple has given consumers a powerful new toolkit for tracking health data. The only problem is, this data is unavailable to researchers. There has been no way for researchers, doctors, hospitals or health administrators to access health data collected via HealthKit, even if a patient were willing to give consent. Until now…

The iOS Health App

ResearchKit, officially launching next month, provides a simplified, streamlined user interface framework for health apps to perform HIPAA-compliant clinical trial consent. According to Apple’s ResearchKit website, “With a user’s consent, ResearchKit can seamlessly tap into the pool of useful data generated by HealthKit — like daily step counts, calorie use, and heart rates — making it accessible to medical researchers.”

Apple has partnered with some impressive names in medical research, listing these on its website: The American Heart Association, Army of Women, Avon Foundation for Women, Dana-Farber Cancer Institute, Massachusetts General Hospital, Michael J. Fox Foundation for Parkinson’s Research, Icahn School of Medicine at Mount Sinai, Penn Medicine, University of Oxford, University of Rochester Medical School, Sage Bionetworks, Stanford Medicine, Susan G. Komen, UCLA Jonsson Comprehensive Cancer Center, Weill Cornell Medical College, and Xuanwu Hospital Capital Medical University.

So what can ResearchKit do for the researcher? The ResearchKit developer framework is divided into three primary modules: Surveys, Informed Consent, and Active Tasks. A touch-based signature panel allows an app user to perform informed consent right on their mobile device. The Surveys module provides a builder tool to specify types of questions and answers, akin to SurveyMonkey, Google Forms, or Wufoo. The Active Tasks module is where active data collection begins.

ResearchKit Signature Panel and Activity Completion

With an active task, ResearchKit allows the user to complete a physical task while the iPhone’s sensors perform active data collection. This data can then be securely transmitted to the cloud for inclusion in the study. For example, Stanford’s MyHeart Counts app has enrolled tens of thousands of participants in just the short time since its launch in March, a recruitment pace unheard of in traditional clinical trials.

This is just the beginning. Data collection will not be limited to the sensors native to the iPhone. External devices, communicating over Bluetooth for example, can provide more data, such as heart rate, temperature, and weight.

According to VentureBeat, “Google also announced last year that it is developing a contact lens that can measure glucose levels in a person’s tears and transmit these data via an antenna thinner than a human hair.” The New York Times also reports this device is being developed by Google in partnership with Novartis.

Glucose Monitoring Smart Contact Lens


Machine Learning: A New Tool for Humanity

Machine learning will have a profound impact on our lives. You may have heard the hype, or the fear mongering. Let’s take a closer look at what this technology has to offer, and whether there is really anything to fear.

First of all, machine learning isn’t just one thing, but a broad set of algorithms, tools, and techniques, combined with advances in computer processing and refined (human) expertise in making decisions based on available data.

There is more data available now than ever before, because modern sensor technology has rapidly decreased in price, size, and power consumption (witness everything from the iPhone to your car to your washing machine). The past two decades have also seen revolutionary developments in 3D graphics processors, called Graphics Processing Units (GPUs), that make video games and movies more realistic. Interestingly, the same mathematics these GPUs accelerate (matrix operations) is also at the heart of machine learning.
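To make that connection concrete, here is a toy sketch in pure Python (illustrative only; real systems use GPU-backed libraries) showing that one layer of a neural network is just a matrix multiplication followed by a nonlinearity, exactly the operation GPUs accelerate:

```python
# Matrix multiplication is the core operation of both 3D graphics and
# neural networks. A toy pure-Python version, then a single "layer."

def matmul(A, B):
    """Multiply matrix A (m x n) by matrix B (n x p)."""
    n = len(B)
    p = len(B[0])
    return [[sum(row[k] * B[k][j] for k in range(n)) for j in range(p)]
            for row in A]

# One neural network layer: inputs times weights, then a nonlinearity.
inputs = [[1.0, 2.0]]          # a single example with two features
weights = [[0.5, -1.0, 0.25],  # 2 inputs -> 3 hidden units
           [0.1,  0.3, -0.5]]
activations = [[max(0.0, v) for v in row]  # ReLU nonlinearity
               for row in matmul(inputs, weights)]
print(activations)
```

A GPU performs thousands of these multiply-accumulate operations in parallel, which is why the same chips that render games also train neural networks.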

Finally, today’s learning algorithms, including deep neural networks and support vector machines, are more advanced and easier to use than ever. Together, the algorithms, the GPUs, and the data allow a kind of pattern recognition and inference we call machine learning. Another broad term for the use of this technology is “data science.” In short, machine learning is a new tool for humanity to gain insight into patterns that exist everywhere around us. So what is it good for?

Convolutional neural networks can infer a store’s sales revenue just by examining images of its parking lot. Other algorithms can find patterns of fraud in credit card purchase data, or detect intruders in security camera footage. Fund managers can get a jump on the market, knowing which day to sell huge numbers of shares by predicting trading volume at market open. Insurance companies can decide which customers are a better risk by analyzing driving records, offering discounts to some while raising rates on others. And of course, self-driving cars, then computers that talk and understand, followed by robots that attack us (or will they)?

The human brain is a master of pattern recognition. Imagine how complex the tiny air movements we call sound must be, and yet speaking and understanding our native tongue feels remarkably simple. How could a machine learn such a thing? Yet today, tools like Siri, Google Voice, and Nuance can convert speech to text, even if true translation and understanding are still out of reach.

The power of machine learning lies in algorithmic ability to find patterns in data, in much the same way that we find patterns in images we see, sounds we hear, behaviors we notice. These tools will touch every area of our lives, much the way the invention of the microscope gave us new insights that changed our view of the world. Insight. Whether used for good or for ill, machine learning algorithms are tools that provide insight.

Artificial intelligence and robots taking over the world are concerns quite a few steps removed from the kind of data analysis machine learning algorithms provide. Let’s look more deeply at a simple machine learning problem to understand why. It’s a classic: identifying three species of the iris flower. The three common species, pictured below, are Iris Versicolor, Iris Virginica, and Iris Setosa.

We can learn to identify these species fairly reliably, and so can a machine learning algorithm. We don’t even need photographs, just a ruler. We measure a few characteristics, such as petal length and sepal length (the sepal is the flower’s enclosure). Voila, we have data! Here is a link to an actual iris data set.

Looking at images of a single iris of each species, it’s fairly easy to see that one of these flowers is not like the others (the Setosa). And while the Versicolor and Virginica may look more similar, a quick graph shows that as groups they too are different enough to separate, with the Setosa separated even further.

What is learning? Differentiating like from other. Identifying new examples as similar to what we know. We learn language by separating the sounds we hear into vowels, consonants, phonemes, words, phrases, and meanings. We learn the laws of physics (at first) by experimenting with water, blocks, and the ground. We differentiate a nice full water glass from a spill, a stack of blocks from a mess, and a stroll from, again, a spill. Differentiation is a kind of learning.
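This species-separation idea can be sketched as a tiny nearest-centroid classifier. The handful of (petal length, sepal length) measurements below, in centimeters, are illustrative values typical of the classic iris data set, not the full 150-sample file:

```python
# "Differentiation as learning": average each species' measurements into a
# centroid, then label a new flower by its closest centroid.

training = {
    "setosa":     [(1.4, 5.1), (1.3, 4.7), (1.5, 5.0)],
    "versicolor": [(4.5, 6.4), (4.0, 5.5), (4.7, 7.0)],
    "virginica":  [(6.0, 6.3), (5.5, 6.5), (5.8, 7.2)],
}

def centroid(points):
    """Mean point of a list of (petal, sepal) measurements."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

centroids = {species: centroid(pts) for species, pts in training.items()}

def classify(petal, sepal):
    """Label a new flower by its closest species centroid."""
    return min(centroids,
               key=lambda s: (centroids[s][0] - petal) ** 2 +
                             (centroids[s][1] - sepal) ** 2)

print(classify(1.5, 5.2))  # small petals: setosa
print(classify(5.9, 6.8))  # large petals: virginica
```

That is the entire trick: measure, group, and compare distances. More sophisticated algorithms draw more flexible boundaries, but they are still just separating like from other.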

It is just that kind of learning that machine learning algorithms perform. Not thinking, just the ability to use the data an algorithm has seen to make predictions on examples it hasn’t yet seen. Obviously, there’s a lot more to it than that. Stay tuned for more posts, where I will argue both that machine learning will be an incredible tool for humanity, and that it won’t lead to a robot president.

Why Mobile Apps Fail

Adopt an App

When beginning a mobile, web, or other software project, keep in mind it’s more like adopting a pet than building a product. Software needs continual care, maintenance, and feature development. Users expect updates, whether to take advantage of the latest mobile operating system release, to fix a bug that somehow slipped through quality assurance, or simply to add features. Building an app is not a “one time and you’re done” operation.

Product / Market Fit

Consult the experts: whether Marc Andreessen (co-creator of the Mosaic web browser), Steve Blank of Stanford, Paul Graham of Y Combinator, or Sean Ellis of LogMeIn, they all agree it’s about getting the right product to the right people. Your app’s features should depend heavily on who the app is marketed to. Achieving this takes a lot of the “get out of the building” thinking promoted by Eric Ries in his book The Lean Startup. Interacting with your market as soon as possible is paramount.

Agile software development process, user-centric design, and Lean thinking can all help you discover what features to build, but all the theory in the world won’t help unless you learn from your market, measure feedback, and build the features that users desire: you must go through this “build, measure, learn” cycle a few times to get it right.

The Build, Measure, Learn Cycle

Accrue Technical Debt Wisely

You may be initially wowed by a software team that can deliver features fast and furiously, especially when the features look cool and progress comes swiftly, but in just a few short months a mobile app project can grind to a halt. Why? At the beginning of a project, software developers often build features without building the surrounding infrastructure to support them. It’s like building a glorious bathroom, complete with steam shower, in a house with no plumbing.

The industry term for this is “technical debt.”  While this form of indebtedness can get you a quick jolt of progress, it can also come back to bite you. For quick experiments, technical debt may be the right option, but for meaningful, high-quality app development, building it right means a robust software architecture, infrastructure to support scaling to a massive audience, and putting in place the security necessary to protect both your users and your investment.

Beware Schedule

There is a reason that, on average, large IT projects run 45 percent over budget and 7 percent over schedule, while delivering 56 percent less value than predicted. In a survey of 600 people closely involved with software projects, 78% of respondents reported that the business is “usually or always out of sync with project requirements.” It is extremely difficult to correctly estimate large software projects. So, smart teams have stopped trying.

But without an estimate, how will your app hit deadlines and a budget? The secret is, once again, in the agile process: you can correctly estimate software deliverables over the short term. The agile software development process promotes short “sprints,” and we suggest a one-week sprint. This way, your team releases a fully functional product every week! And since you are learning from your users, what you do over the next few months will be in direct response to their usage and feedback. Think of a product initiative as an experiment, where the goal is to learn what a market wants, and deliver it.

Beware Users

Asking your users directly what they want (or don’t want) is a pitfall to avoid. You are the innovator, and you understand the possibilities for your product’s future better than anyone, including your users, so asking them is asking for trouble. Instead, simply observe them. Focus groups are notorious not only for their expense but also for their “false sense of science.” Studying users behind a one-way mirror may have worked well in that Mad Men episode, but for software, the “Starbucks Method” is about a million times cheaper and much more insightful: get into the cafe, hand out some $20 bills, or buy people lunch in exchange for watching them use your product.

There is no harm in providing a few in-app survey questions. Take a look at Qualaroo for how to do this well. For building community, GetSatisfaction is a good bet. Bottom line: target your questions to the specific user experience, not the overall product.

Beware Apple

As the recent “Downpocalypse” of the Apple Developer Center demonstrated (no new iPhone apps could be created for over a week!), hitching your app to a single horse is a dangerous move. Though an initial iOS release makes sense in many cases, building with cross-platform mobile technologies like HTML5, Adobe Phonegap, Ludei, or Unity lets your app diversify its bets, placing expensive native features only where they’re needed. This way you can release on iOS, Android, and the web, even PC, Mac, and gaming consoles.

Choose the Right Team

Select a development team that will maximize your budget and deliver the best value. Sometimes the cheapest developer on Craigslist, oDesk, Elance, or RentACoder is the right way to go — such as for one-off experiments, or when a quick-and-dirty initial draft built on a shoestring budget is expected to be tossed and rewritten from scratch — but for most projects, it makes sense to proceed with a team that can provide end-to-end services, engage with your users, and help guide you strategically to that nirvana of mass adoption.

5 Industries that Need More Mobile Apps

Over 120 million Americans now have smartphones. That’s over 40% of the US population. And almost every one of them knows that email, Facebook, and Gumulon (and the other fun games of the moment) are things they can do. But consider that today’s smartphones have more compute power than all of NASA used to send astronauts to the moon, and can sense location, proximity, acceleration, and compass heading, plus capture images with two high-resolution cameras. It’s time to start thinking of these devices in new ways that can benefit a wide range of industries. Here are 5 industries we think stand to benefit from these amazing devices, and applications they might employ.

Home Furnishings

Augmented reality is a fancy way of saying that a computer-generated image is displayed over live video from the camera. When shopping for home furniture, whether at Ikea or Crate and Barrel, or wondering how that Eames Lounger will look in your living room, a mobile app can show you what the piece will look like in your home or office. The amazing 3D graphics of Hollywood movies are now available in handheld form. Just imagine seeing how new carpet, flooring, cabinets, appliances, or art will look in this spot — no, over there, a bit to the left. Think this is just something of the future? Check out these augmented reality apps and see for yourself.

Restaurants


You already use your mobile phone with OpenTable, Yelp, Urban Spoon, or maybe just Google Maps, but there’s more to come from the mobile world that will transform not just the restaurant discovery experience, but dining itself, not to mention restaurant ownership and operations.
Soon you will review nearby listings and be greeted by a local restaurant’s maître d’: “The special tonight is a fresh seared ahi, and the chef has a table near the window we think you’d enjoy. Come on down and we’ll send over a free glass of wine and an appetizer. Oh, and by the way, those peanut allergies will not be a problem with any of the items we recommend for you.”
A restaurant owner will soon be able to take a snapshot of their menu, and OCR software will instantly update their mobile site, so users walking by can know exactly what’s hot (literally) at this spot.

Healthcare


From electronic health records (EHR) to checking for drug interactions to refilling prescriptions, both doctors and patients already carry mobile apps in their arsenal, but prepare for future shock when remote diagnosis, doctor-patient video chat, social-network support groups, and even health equipment monitoring connect to smartphones and tablets. This is all made possible by recent software advances in HIPAA compliance to protect patient privacy, and by digital communication standards such as Health Level 7 (HL7) that allow a wide range of medical devices to talk with each other and with external systems.

Industrial Control Systems

Industrial process control is a set of devices and software tools that lets factory managers monitor and control manufacturing or industrial production equipment. A new generation of wireless sensor technology called Zigbee allows industries to create mesh networks of sensors, so the next time a pressure gauge reads a bit too high, or a silo level runs a bit too low, you’re notified instantly, in your pocket or on your tablet.
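The alerting side of such a system can be sketched in a few lines. This is a minimal illustration of threshold-based monitoring, not any real Zigbee API: the sensor names, acceptable ranges, and the notify function are all hypothetical.

```python
# Minimal sketch of threshold-based alerting on sensor readings.
# Sensor names, limits, and notify() are hypothetical examples.

LIMITS = {
    "pressure_psi": (20.0, 80.0),    # (low, high) acceptable range
    "silo_level_pct": (15.0, 95.0),
}

def notify(sensor, value, bound, kind):
    # A real system would push this to a phone or tablet.
    print(f"ALERT: {sensor}={value} is {kind} limit {bound}")

def check_reading(sensor, value):
    """Return True if the reading is in range; otherwise alert."""
    low, high = LIMITS[sensor]
    if value > high:
        notify(sensor, value, high, "above")
        return False
    if value < low:
        notify(sensor, value, low, "below")
        return False
    return True

# A pressure gauge reading a bit too high triggers an alert.
check_reading("pressure_psi", 85.2)
check_reading("silo_level_pct", 50.0)  # within range, no alert
```

In practice the readings would arrive over the mesh network and the alert would go out as a push notification, but the core logic is just this comparison against configured limits.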

Customer Support

Making a phone call to customer support is about as fun as making an appointment for a root canal. Yet what if the phone call wasn’t a call at all? Businesses are deploying mobile technologies to make customer support communication fast, but the next step is all about eliminating customer support in favor of customer service. Right now, just send an @reply on Twitter and many top brands will respond very rapidly (with no hold music). And when companies treat their customers more like partners, your connection to the folks who make, manage, and distribute consumer products transforms the whole experience of using them.