Machine learning applies to so many things. When Terry Gross spends an hour talking about artificial intelligence on Fresh Air, you know robots and machine learning are top of mind in the public psyche. In her interview with John Markoff, author of “Machines of Loving Grace,” he cites a broad definition of AI: “A robot can be … a machine that can walk around, or it can be software that is a personal assistant, something like Siri or Cortana or Google Now.”
Modern artificial intelligence and machine learning techniques apply so broadly that they will touch every aspect of our everyday lives (and some already do). At Concrete Interactive, we have chosen to focus on human motion learning—to capture, characterize, and make recommendations on how to improve any movement a human can perform. And just think of how many movements a person can do!
Drawing on over a decade of experience with sensors, data acquisition, and control systems, we have specialized our machine learning techniques to detect nuanced motion, to see through the noisy signals that sensors produce, and to identify, count, and judge accuracy on thousands of different motions. Weightlifting form, a beautifully aligned yoga pose, the perfect golf swing, and medical-grade fall prediction are all examples of how machine learning can help people perform better, improve faster, and reduce injury.
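As a tiny illustration of what “seeing through the noise” can mean at the lowest level, here is a simple moving-average filter of the kind often applied to raw accelerometer samples before any motion recognition happens. The sample values are made up for illustration; our actual techniques are of course far more involved.

```python
def moving_average(samples, window=3):
    """Smooth a noisy 1-D signal with a simple moving average."""
    smoothed = []
    for i in range(len(samples) - window + 1):
        smoothed.append(sum(samples[i:i + window]) / window)
    return smoothed

# Illustrative accelerometer readings (made up), with a spike of sensor noise
raw = [0.0, 0.1, 2.5, 0.1, 0.0, 0.1]
print(moving_average(raw))
```

Real motion pipelines use more sophisticated filters, but the idea is the same: tame the raw signal before trying to recognize anything in it.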
Beyond Steps & Sleep
What sensors can we use? To start, the accelerometers already in the smartphone in your pocket or purse! Then the wearable on your wrist, the one clipped to your shoe, and the one coming that none of us even knows about yet.
Our platform is designed to integrate with many wearables.
Think of Photoshop, where you can edit images captured with many different cameras, at many different resolutions, in many formats. Similarly, wearables may have different numbers of sensors, they may be worn on different parts of the body, but they are all measuring the same motions we wish to track.
“Even rocks compute,” a friend once remarked. I won’t comment on either of our sobriety, but at the time I just shot him a knowing look and nodded contemplatively. But what did he mean?
Maybe he meant that the reflections and refractions off rocks, or even better, crystals, perform a sort of mapping. The light waves come in from one direction, then reflect, refract, scatter, and project on to surrounding surfaces. Sometimes these light projections are beautiful, regular, even useful.
Light, being a vibration of electromagnetic radiation in the optical part of the spectrum, has a higher frequency than sound waves, but like light, sound is vibrational energy that imprints change in the resonance of objects around it. Taken this way, the quandary of the tree falling in the forest resolves in the affirmative, because someone is always there to witness it, namely the tree and the forest itself.
The field of archeoacoustics explores the innate acoustical properties of artifacts via audio analysis—a study of the essential vibrational qualities of artifacts and environments. Examining a 1969 claim by Richard G. Woodbridge III, the MythBusters report that it is in fact not possible to resurrect ancient Roman voices from the grooves carved by their tools as they spun around a potter’s wheel 6,500 years ago. Maybe not yet, anyway.
The vibrations from our voices to our footsteps are like tiny tremors impacting the matter around us. And whether this matter is capable of recording them may have more to do with our playback technology than with the indisputable fact that what we do influences the things around us.
As early as circa 1902, mathematician Charles Sanders Peirce wrote, “Give science only a hundred more centuries of increase in geometrical progression, and she may be expected to find that the sound waves of Aristotle’s voice have somehow recorded themselves.”
A rock, a tree, the earth—animals have used these as computational devices via echolocation since long before humans evolved. Zoologists Roger Payne and Douglas Webb calculated that before ship traffic noise permeated the oceans, tones emitted by fin whales could have traveled as far as four thousand miles and still been heard against the normal background noise of the sea. Whales, bats, dolphins, and recently even some blind humans use echolocation to “see” objects by listening to reflections of sounds they themselves emit. The computation is performed by the reflecting objects, which transform the emitted sound energy by shifting its frequency and waveform, imprinting it with characteristics of the objects that influenced it: distance, size, hardness, maybe even “tastiness”?
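The distance part of that computation is simple enough to write down: the echo’s round-trip delay, times the speed of sound, halved. A minimal sketch:

```python
def echo_distance(delay_seconds, speed_of_sound=343.0):
    """Distance to a reflecting object: sound travels out and back,
    so the one-way distance is half the round-trip time times speed.
    The default is the approximate speed of sound in air (m/s)."""
    return speed_of_sound * delay_seconds / 2.0

# A bat hearing an echo 0.02 s after its call is about 3.43 m from the object
print(echo_distance(0.02))
# A fin whale's tone in seawater (~1500 m/s) returning after a full second
print(echo_distance(1.0, speed_of_sound=1500.0))
```

Frequency and waveform analysis is where the real richness lies, but even this one formula shows how a reflecting object “computes” something useful for the listener.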
Is there an essential vibrational signature to all things that can be elucidated through computation? Yes, there is. And the next part of this series will introduce the modern machine learning techniques and sensor technologies being employed to further illuminate the useful (and perhaps mystical) properties in everything around us.
Brett Bond is President of Concrete Interactive, a software development and machine learning firm based in San Francisco and Santa Monica. When not writing software, Brett enjoys practicing yoga, preparing the nursery for his soon-to-arrive daughter, and building large-scale fire displays.
Now that the 2015 HIMSS conference in Chicago has wrapped up, I will try to summarize the trends I observed and how Concrete Interactive fits in. It is clear that secure text messaging is a much-needed feature in healthcare. There are at least two established companies vehemently pursuing it, TigerText and Imprivata (via its Cortext feature), and several startups presenting at HIMSS: Diagnotes, MyCareText, and Cotap.
As we know, TigerText just closed a $21M VC round. They claim to have 300 enterprise customers, mostly in healthcare, including 4 of the largest for-profit hospital chains.
What isn’t clear to me is whether secure messaging is a separate app, or really a feature to be used with the EHR (Electronic Health Record) apps that health companies already have. For example, Imprivata’s Cortext is positioned more as a feature of their larger system.
However, as a separate app, secure texting following the BYOD (bring your own device) model (yes, this is literally the way they talk about it) is a very attractive feature that many people want, and it could provide a solid scenario for deeper involvement or integration at a custom development level.
Another clear area of expansion is medical device connectivity. For example, Qualcomm Life bought Healthy Circles, a deal supposedly in the $375M range. It includes an iPhone app for continuous care. The patient goes home and plugs a local Bluetooth/3G router into a wall outlet. All the continuous care devices then connect through it: Class 2 FDA-approved medical devices such as blood pressure monitors, step counters, pulse, temperature, and glucose monitors, even home ventilators and other Class 3 (high-risk) devices.
The physician gets a portal. The patient can view and augment the data on the iPhone, though the app isn’t even required. This pattern is repeated over and over by other companies: device, connectivity, app, cloud-based portal.
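The pattern is simple enough to sketch. Below is a hypothetical JSON reading that a home monitoring device might post to its cloud portal; the field names and values are illustrative assumptions on my part, not any vendor’s actual schema.

```python
import json

# Hypothetical reading a home blood-pressure cuff might send to a cloud
# portal. All field names and values here are illustrative assumptions,
# not any real device's schema.
reading = {
    "device_id": "bp-cuff-001",
    "type": "blood_pressure",
    "systolic_mmhg": 118,
    "diastolic_mmhg": 76,
    "recorded_at": "2015-04-14T09:30:00Z",
}
payload = json.dumps(reading)
print(payload)
```

The device sends, the cloud stores, the physician’s portal and the patient’s app both read the same record.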
The industry is only just awakening to the fact that data science will play a big role. Channels of information such as medical devices and apps are beginning to provide the big data they will use. I did make a nice connection at Wolters Kluwer. They are already doing rule-based processing to de-dupe health data. So if a doctor writes COD, they expand that to codeine. But they want to improve their systems via natural language processing (NLP).
I also met with Piers Nash from the University of Chicago at a Genomics SIG. He’s working with the NCI and already has 6 petabytes (PB) of genomic data from more than 10,000 patients online and available to the public (after a straightforward application process). He’s looking to host algorithms next and run compute cycles from virtual machines (PaaS-style, like AWS). One basic problem they are trying to improve is known as single nucleotide variant calling (SNV calling). Each person’s DNA is slightly different simply because we are different people; the trick is to identify which nucleotides (DNA letters) differ because of normal genetic variation between people, versus mutations that cause cancer. One interesting aspect of this problem is that as algorithms improve, past recommendations may become invalid, and there may be a liability aspect at work. Samsung Genomics was also in attendance at this meeting. They are launching an initiative to sequence tumors and make recommendations, but it’s similar to others already out there, such as Paradigm.
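At its core, SNV calling starts with a comparison against a reference sequence. Here is a deliberately naive sketch using toy sequences; real callers also weigh read depth, base quality, and known population variation, which is exactly where the hard algorithmic work lies.

```python
def call_snvs(reference, sample):
    """Naive single-nucleotide variant calling: report each position
    where the sample base differs from the reference base. Real callers
    also weigh read depth, base quality, and known population variation;
    this shows only the core comparison."""
    return [(i, ref, alt)
            for i, (ref, alt) in enumerate(zip(reference, sample))
            if ref != alt]

# Toy sequences for illustration (not real genomic data)
print(call_snvs("ACGTACGT", "ACGAACGT"))  # [(3, 'T', 'A')]
```

Distinguishing which of those reported positions are benign variation versus cancer-driving mutations is the part that improving algorithms keep revising.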
Also at the genomics meeting was Michael Hultner, the Chief Scientist for Lockheed Martin’s health and life sciences division. They are bidding as are many others for the UK’s 100,000 genomes project. He says their expertise lies in the integration of many technologies and thinks they are well positioned in the health space (not just outer space). So it’s fascinating to see the kinds of companies entering or expanding in this market.
Big Picture Strategy
The healthcare IT space is rapidly expanding as healthcare laws such as Meaningful Use Stage II come into effect, increasing incentives to leverage advances in technology. There are many land grabs playing out. Any space worth entering will have competition, but based on my assessment of the overall quality in the space, I believe Concrete Interactive is well positioned to innovate and to stand up great apps against much larger players than ourselves.
This year’s Healthcare Information and Management Systems Society conference in Chicago is a veritable candy store of high-tech healthcare. Yes the smart hospital beds and baby monitoring bracelets are fascinating. But perhaps the highest impact, most impressive technology on offer is what you can’t see—the software. Though it has about as much shazam as a bed pan, the coming health communication infrastructure known as HL7 FHIR (pronounced like “fire”) will allow access to the coveted Electronic Health Record (EHR) via many new applications and devices.
Also very impressive, and a bit more visible, were beautiful mobile workflow apps like NextGen’s “Go for iPad.” What I like about this electronic health record and dictation recording tool is that it does not do everything. The heavy lifting of setting up records is done on the desktop (“templating,” in healthcare parlance), while on-the-go actions such as dictation and prescription refills can be executed in short order on the iPad.
I also learned that Greenway, a software provider of Practice Management (PM) and EHR tools, has an app marketplace (think iTunes). Topping their offering is Phreesia, a check-in app for iPad that can replace all that form filling in the doctor’s office with a few taps of a touchscreen.
The Internet of Things (IoT) was also present, from Tyco’s tracking bracelets for babies and elders, to decibel-logging sensors that monitor noise levels. Quietyme, a HealthBox and Gener8tor accelerator graduate, establishes a mesh network of small volume monitors in each hospital room, corridor, nurses’ station, etc. They perform some fancy data analytics (in partnership with Miosoft and Zero Locus). CEO John Bialk says that by comparing noise levels in patient rooms with patient surveys, they can document and predict which noisy areas are having a negative impact on healing. And from Ascom, voice over internet protocol (VoIP) portable devices are like little cordless phones that nurses can use on the local area network (LAN). Their Android device even supports internet instant messaging.
Thank you to all those who visited with Concrete Interactive, and those who described their wonderful products, software, services and innovation.
Concrete Interactive is available for meetings at HIMSS 2015, the healthcare IT conference in Chicago this April 12-16.
And I know you’ll be almost as excited to learn that, for the first time this year, Amazon will be making a full-fledged appearance at HIMSS. What’s even more remarkable is that leaders of the AWS HIPAA compliance team, such as Chris Crosbie, HIPAA Solutions Architect; Jessie Beegle, Business Development Manager for the Healthcare Industry; and Kenzie Kepper of the AWS Healthcare Marketing Team, will be present and accepting meetings.
In my experience with the Amazon Popup Loft in San Francisco, the AWS team is very giving of their time and expertise. These aren’t your typical Apple “Genius” types who fall into a prescribed script about fixing your iPhone. The solution architects and technical team members who are available at the Popup Loft are the actual people with inside technical knowledge of the AWS service, and they have been happy to dive into our application details.
So, how does one implement a HIPAA-compliant software application on Amazon Web Services? Back when Concrete Interactive built our first HIPAA app in 2012, assigning responsibility across the network infrastructure was quite a challenge. Nowadays, Amazon has drawn a bright line at the hypervisor, the piece of virtualization software that manages a particular application’s server. Their shared responsibility model ensures that from the hypervisor outward, throughout the rest of the AWS network, securing PHI is Amazon’s responsibility.
Specifically on EC2, you must use a dedicated instance. This comes with a higher monthly fee, but it’s peanuts compared with building your own compliant datacenter.
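As a sketch of what this looks like in practice with boto3 (the AWS SDK for Python), the key detail is requesting dedicated tenancy at launch. The AMI ID and instance type below are placeholders, and this illustrates only the tenancy setting, not a complete compliant deployment.

```python
def dedicated_instance_params(ami_id, instance_type="m3.medium"):
    """Build run_instances parameters for a dedicated-tenancy EC2
    instance. Dedicated tenancy keeps the workload off shared hardware,
    one requirement for PHI workloads under AWS's BAA. The AMI ID and
    instance type are placeholders, not recommendations."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "Placement": {"Tenancy": "dedicated"},
    }

params = dedicated_instance_params("ami-00000000")
print(params["Placement"])
# With AWS credentials configured, you would launch via:
#   boto3.client("ec2").run_instances(**params)
```

Everything else about the deployment (encryption, access control, audit logging) still falls on your side of the hypervisor line.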
According to Amazon’s HIPAA compliance video, over 600 companies have signed their Business Associate Agreement (including us!). This agreement allows our HIPAA-compliant apps to be validated, and it spells out where responsibility for PHI lies depending on which side of the hypervisor line it is used, stored, or transferred on.
If you are interested in meeting with Concrete Interactive at HIMSS 2015, please drop us a line. In partnership with Amazon AWS, and FDA Compliance Advisor David Nettleton, we hope to shed light on any of your HIPAA, healthcare, web or mobile app development questions.
Machine learning will have intense and amazing impacts on our lives. You may have heard the hype, or the fear mongering. Now let’s take a closer look at what this technology has to offer, and if there is really anything to fear.
First of all, machine learning isn’t just one thing, but a broad set of algorithms, tools, and techniques, combined with advances in computer processing and refined (human) expertise in making decisions based on available data.
There is more data available now than ever before, because modern sensor technology has rapidly decreased in price, size, and power consumption (witness everything from the iPhone to your car to your washing machine). Revolutionary developments of the past two decades in 3D graphics processors, called Graphics Processing Units (GPUs), make video games and movies more realistic. Interestingly, the same mathematics these GPUs accelerate, matrix operations, also applies to machine learning.
Finally, today’s learning algorithms, including deep neural networks and support vector machines, are more advanced and easier to use than ever. Together, the algorithms, the GPUs, and the data allow a kind of pattern recognition and inference we call machine learning. Another broad term for the use of this technology is “data science.” In short, machine learning is a new tool for humanity to gain insight into patterns that exist everywhere around us. So what is it good for?
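To make the GPU connection concrete, here is a minimal sketch of the core computation in a neural network: one fully connected layer, which is just a matrix-vector multiply plus a bias. The numbers are made up for illustration; in practice a GPU performs millions of these operations in parallel.

```python
def dense_layer(inputs, weights, biases):
    """One fully connected neural-network layer: a matrix-vector
    multiply plus a bias term -- exactly the kind of operation
    GPUs were built to accelerate."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Tiny made-up example: 3 inputs mapped to 2 outputs
inputs = [1.0, 2.0, 3.0]
weights = [[0.1, 0.2, 0.3],   # weights feeding output neuron 1
           [0.4, 0.5, 0.6]]   # weights feeding output neuron 2
biases = [0.0, 1.0]
print(dense_layer(inputs, weights, biases))
```

A deep network is many such layers stacked, with a nonlinearity between them; the arithmetic stays the same, which is why graphics hardware turned out to be such a good fit.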
The human brain is a master of pattern recognition. Imagine how complex the tiny air movements we call sound must be, and yet speaking and understanding our native tongue feels remarkably simple. How could a machine learn such a thing? Yet today, tools like Siri, Google Voice, and Nuance can convert speech to text. Translation and understanding are still out of reach.
The power of machine learning lies in algorithmic ability to find patterns in data, in much the same way that we find patterns in images we see, sounds we hear, behaviors we notice. These tools will touch every area of our lives, much the way the invention of the microscope gave us new insights that changed our view of the world. Insight. Whether used for good or for ill, machine learning algorithms are tools that provide insight.
Artificial intelligence and robots taking over the world are concerns quite a few steps removed from the kind of data analysis machine learning algorithms provide. Let’s look more deeply at a simple machine learning problem to understand why. It’s a classic: identifying species of the iris flower. There are three common species: Iris Versicolor, Iris Virginica, and Iris Setosa.
We can learn to identify these species fairly reliably, and so can a machine learning algorithm. We don’t even need photographs, just a ruler. We measure a few characteristics, such as petal length and sepal length (the sepal is the flower’s enclosure). Voila, we have data! Here is a link to an actual iris data set.
Looking at just a single iris, it’s fairly easy to see that one of these flowers is not like the others (the Setosa). And while the Versicolor and Virginica may look more similar, a quick graph shows that, as groups, they are different enough to separate as well; the Setosa is separated even further. What is learning? Differentiating like from unlike. Identifying new examples as similar to what we know. We learn language by separating the sounds we hear into vowels, consonants, phonemes, words, phrases, and meanings. We learn the laws of physics (at first) by experimenting with water, blocks, and the ground. We learn to differentiate a nice full water glass from a spill, a stack of blocks from a mess, and a stroll from, again, a spill. Differentiation is a kind of learning.
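As a rough sketch of how little machinery this kind of differentiation needs, here is a nearest-centroid classifier over two petal measurements. The centroid values approximate the per-species means in the classic Fisher iris data set; a real model would learn them from the data rather than hard-coding them.

```python
def classify_iris(petal_length_cm, petal_width_cm):
    """Nearest-centroid classification on two petal measurements.
    Centroid values approximate the per-species means of the classic
    Fisher iris data set; a fitted model would learn these from data."""
    centroids = {
        "Iris Setosa": (1.5, 0.2),
        "Iris Versicolor": (4.3, 1.3),
        "Iris Virginica": (5.6, 2.0),
    }

    def dist2(species):
        cl, cw = centroids[species]
        return (petal_length_cm - cl) ** 2 + (petal_width_cm - cw) ** 2

    return min(centroids, key=dist2)

print(classify_iris(1.4, 0.2))  # Iris Setosa
print(classify_iris(5.9, 2.1))  # Iris Virginica
```

Measure, compare, pick the closest group: that is differentiation, and it is all this “learner” does.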
It is just that kind of learning that machine learning algorithms perform. Not thinking, just the ability to interpret the data an algorithm has seen to make predictions on examples the algorithm hasn’t yet seen. Obviously, there’s a lot more to it than that. Stay tuned for more posts where I will argue both that machine learning will be an incredible tool for humanity, and that it won’t lead to a robot president.
When beginning a mobile, web or other app software project, keep in mind it’s more like adopting a pet than building a product. Software needs continual care, maintenance and feature development. Users expect updates — whether to take advantage of the latest mobile Operating System release, to fix a bug that somehow slipped through Quality Assurance, or simply to add features. Building an app is not a “one-time and you’re done” operation.
Product / Market Fit
Consult the experts: whether Marc Andreessen (co-creator of the first web browser), Steve Blank of Stanford, Paul Graham of Y Combinator, or Sean Ellis of LogMeIn, they all agree it’s about getting the right product to the right people. That said, your app’s features should depend heavily on who the app is marketed to. Achieving this takes a lot of the “get out of the building” thinking promoted by Eric Ries in his book The Lean Startup. Interacting with your market as soon as possible is paramount.
Agile software development process, user-centric design, and Lean thinking can all help you discover what features to build, but all the theory in the world won’t help unless you learn from your market, measure feedback, and build the features that users desire: you must go through this “build, measure, learn” cycle a few times to get it right.
Accrue Technical Debt Wisely
You may initially be wowed by a software team that delivers features fast and furiously, especially when the features look cool and progress comes swiftly. But in just a few short months, a mobile app project can grind to a halt. Why? At the beginning of a project, software developers often build features without building the surrounding infrastructure to support them. It’s like building a glorious bathroom, complete with steam shower, in a house with no plumbing.
The industry term for this is “technical debt.” While this form of indebtedness can get you a quick jolt of progress, it can also come back to bite you. For quick experiments, technical debt may be the right option, but for meaningful, high-quality app development, building it right means a robust software architecture, infrastructure to support scaling to a massive audience, and putting in place the security necessary to protect both your users and your investment.
There is a reason that, on average, large IT projects run 45 percent over budget and 7 percent over time while delivering 56 percent less value than predicted. In one survey of 600 people closely involved with software projects, 78 percent of respondents reported that the “business is usually or always out of sync with project requirements.” It is extremely difficult to correctly estimate large software projects. So, smart teams have stopped trying.
But without an estimate, how will your app hit deadlines and a budget? The secret is, once again, in the agile process: you can correctly estimate software deliverables over the short term. The agile software development process promotes short “sprints,” and we suggest a one-week period. This way, your team releases a fully functional, complete product every week! And since you are learning from your users, what you do over the next few months will be in direct response to their usage and feedback. Think of a product initiative as an experiment, where the goal is to learn what a market wants, and deliver it.
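A sketch of why short-term estimation works: you forecast from measured velocity rather than from guesses. The point totals below are illustrative numbers, not from any real project.

```python
import math

def sprints_remaining(backlog_points, recent_velocities):
    """Forecast sprints left from recent velocity (points completed
    per sprint). Short-term estimates like this stay honest because
    velocity is measured from actual delivery, not guessed up front."""
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    return math.ceil(backlog_points / avg_velocity)

# Illustrative: 60 points of backlog, last three one-week sprints
print(sprints_remaining(60, [18, 22, 20]))  # 3
```

The forecast updates itself every week as new velocity data arrives, which is exactly the build-measure-learn loop applied to scheduling.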
Asking your users directly what they want (or don’t want) is a pitfall to avoid. You are the innovator, and you understand the possibilities for your product’s future directions better than anyone, including your users, so asking them is asking for trouble. Instead, simply observe them. Focus groups are notorious not only for their expense, but also for their “false sense of science.” Studying users behind a one-way mirror may have worked well in that Mad Men episode, but for software, the “Starbucks Method” is about a million times cheaper and much more insightful. Get into the cafe, hand out some twenties, or buy people lunch in exchange for watching them use your product.
There is no harm in providing a few in-app survey questions. Take a look at Qualaroo for how to do this well. For building community, GetSatisfaction is a good bet. Bottom line: Target your questions to the specific user experience, not the overall product.
As the recent “Downpocalypse” of the Apple Developer Center demonstrated (no new iPhone apps could be created for over a week!), hitching your app to a single horse is a dangerous move. Though an initial iOS release makes sense in many cases, building with cross-platform mobile technologies like HTML5, Adobe PhoneGap, Ludei, or Unity lets your app diversify its bets, placing expensive native features only where they’re needed. This way you can release on iOS, Android, the web, and even PC, Mac, and gaming consoles.
Choose the Right Team
Select a development team that will maximize your budget and deliver the best value. Sometimes the cheapest contractor on Craigslist, oDesk, Elance, or RentACoder is the right way to go, such as for one-off experiments, or when a quick-and-dirty first draft of an app on a shoestring budget is expected to be tossed and rewritten from scratch. But for most projects, it makes sense to proceed with a team that can provide end-to-end services, engage with your users, and help guide you strategically toward that nirvana of mass adoption.
Look around you and you will see thousands of “things” all within your immediate vicinity. Your keychain. Your desk chair. Your favorite coffee mug filled with Italian Roast coffee. Your dying ficus plant. With today’s technology, there is no reason why these things cannot communicate with you, in-real time.
Your plant should tell you not only that it needs water, but how much, and where to position it during the day
Your coffee mug should know what kind of roast you want to drink today
You should be able to find your keys at a moment’s notice because you have a bad habit of misplacing them the moment you are about to go somewhere
Your desk chair should automatically adjust itself when it detects you are sitting with poor posture (reminder: stop slouching)
As luck would have it, there are technologies for each one of these things, being built. Right. Now. (See for yourself: Plant | Coffee | Keys | Chair).
The Hardware (R)evolution.
While cliché, the world we live in is becoming increasingly connected. Technologies like RFID, NFC, and Zigbee, which were in research and development only a few years ago, are enabling the next generation of connected devices in a cost- and energy-efficient way. In fact, consumer goods that weren’t connected 18 months ago are now online. Recent examples include:
Connected lightbulbs let you change their hue and control the lighting ambiance of your room via a mobile app
As enabling technologies become cheaper and smaller, companies will be forced to innovate and think about how their offline products can get online.
Getting offline products into the 21st century is only the tip of the iceberg. Enhancing these products with connected technologies has to transform the product experience, be personal, and have utility. The bar for product experiences is so high that failing to execute against these objectives will result in a gimmicky failure of an experience.
For example, a shoe company may want to create a running shoe with a GPS Dot. These “online shoes” should not only track where (and how long) the user was running, but also provide actionable insights based on what the shoe company already knows about you: recommend running trails based on your running style and preferences, alert you when your friends are close by, give you a discount when you walk by their store, and let you know how hard to run based on your body fat and weight goals.
Utility vs. Privacy
Privacy is generally a topic of concern as more devices come online and become “all knowing.” As we’ve seen from the internet and media today, and in light of recent NSA privacy concerns, users are willing to give up certain liberties to connect with friends (Facebook), share their thoughts (Twitter), use free email (Google), or make free international calls over the internet (Microsoft/Skype). We believe it’s important for companies contemplating an online product strategy to understand these implications and to balance the utility of an online product with the user’s privacy and the company’s ethics and values.
Lately, retailers both large and small seem focused on their omni-channel strategy: leveraging social channels to drive traffic, traditional mediums to promote cross-channel awareness, and e-commerce to streamline the transaction process.
But what about the mobile touch point of the customer experience? These days, many retailers have a mobile app: but is it the right mobile app for you and your customer? In an age where nearly 58% of customers conduct online/social research prior to purchasing an item at a brick-and-mortar retail store, retailers should be thinking about how their app can (a) enhance the customer experience and (b) streamline the path to purchase. Sometimes these objectives are one and the same. Here are some questions to ask yourself when developing a mobile strategy for your retail environment.
What are your customers’ pain points? Every retailer is different: different store layouts, different SKUs, different check-out processes. As a retailer, you should ask your customers what their biggest pain points are when shopping in your brick-and-mortar store. By the same token, you should also ask yourself how you can solve those pain points with the customer’s mobile device. Some scenarios to think about:
Are your checkout lines too long?
Do they want to know what is on sale?
Do your customers need help with an item?
Do your customers need help navigating your store to find a category or SKU?
Would your customers prefer a ship-to-home option rather than hauling the item in their car?
Do your customers want to know the price of an item?
Why should your customer use your app? Once you’ve figured out your customer’s pain points, you should ask yourself why a customer should (a) download and (b) use your app. With thousands of apps on the market, and room for ~20 apps on the user’s Home screen, a better question may be: Why will the customer want to use your app more than once?
Does your app solve the problem (above) in a way that enhances the customer experience?
Do customers who use your app have a significant advantage over customers who don’t use it?
Does the customer receive value in the form of discounts or loyalty rewards?
Does the app enhance the offline and cross-channel customer experience?
Operationalizing the mobile experience to wow your customers
In some, or maybe all, of these scenarios the retailer may need to operationalize the experience around the mobile app. For example:
How do you handle loss prevention if you implement mobile check-out?
How do you greet loyal customers who enter your store?
How do you redeem loyalty rewards via the mobile app for a customer who is ready to check-out?
Do you offer flash-sales for customers who scan a SKU using their mobile phone, based on their purchasing history?
We believe the best mobile experiences are the ones that “start with the end,” and in the case of retail, we believe starting with the desired customer experience in the context of mobile will help bring brick-and-mortar retailing into the 21st century.
Over 120 million Americans now have smartphones. That’s over 40% of the US population. And almost every one of them knows these devices can do email, Facebook, and Gumulon (and the other fun games of the moment). But when you consider that today’s smartphones have more computing power than all of NASA used to send astronauts to the moon, plus sensors for location, proximity, acceleration, and compass heading, and two high-resolution cameras, it’s time to start thinking of these devices in new ways that can benefit a wide range of industries. Here are 5 industries we think stand to benefit from these amazing devices, and applications they might employ.
Augmented reality is a fancy way of saying that a computer-generated image is displayed over live video from the camera. When shopping for home furniture, whether at Ikea or Crate and Barrel, or wondering how that Eames Lounger will look in your living room, a mobile app can show you what your furniture will look like in your home or office. The amazing 3D graphics of Hollywood movies are now available in handheld form. Just imagine seeing how new carpet, flooring, cabinets, appliances, or art will look in this spot... no, over there, a bit to the left. Think this is just something of the future? Check out these augmented reality apps and see for yourself.
Soon you will review nearby listings, and be greeted by a local restaurant’s maitre d’. The special tonight is a fresh seared Ahi, and the chef has a table near the window, which we think you’d enjoy. Come on down and we’ll send you a free glass of wine and an appetizer. Oh and by the way, those peanut allergies will not be a problem with any of the items we recommend for you.
A restaurant owner will soon be able to take a snapshot of their menu, and OCR software will instantly update their mobile site, so users walking by can know exactly what’s hot (literally) at this spot.
From electronic health records (EHR) to checking for drug interactions, to refilling prescriptions, both doctors and patients already tote mobile apps in their arsenal, but prepare for future shock when remote diagnosis, doctor-patient video chat, social network support groups, and even health equipment monitoring connects to smart phones and tablets. This is all made possible by recent software technology advances for HIPAA-compliance to protect patient privacy, and digital communication standards such as Health Level 7 (HL7) that allow a wide range of medical devices to talk with each other and external devices.
Industrial Control Systems
Industrial process control is a set of devices and software tools that let factory managers monitor and control the operation of manufacturing or industrial production equipment. A new generation of wireless sensor technology, called Zigbee, allows industries to create mesh networks of sensors, so the next time that pressure gauge reads a bit too high, or a silo level runs a bit too low, you’re notified instantly, in your pocket or on your tablet.
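A minimal sketch of the alerting logic such a system might run once sensor readings arrive over the mesh network; the sensor names and limit values are made-up illustrations, not values from any real plant.

```python
def check_readings(readings, limits):
    """Return alert messages for any sensor outside its (low, high)
    limits. Sensor names and limit values here are illustrative only."""
    alerts = []
    for name, value in readings.items():
        low, high = limits[name]
        if value > high:
            alerts.append(f"{name} high: {value} > {high}")
        elif value < low:
            alerts.append(f"{name} low: {value} < {low}")
    return alerts

# Made-up readings: pressure too high, silo level too low
readings = {"pressure_psi": 112.0, "silo_level_pct": 12.0}
limits = {"pressure_psi": (30.0, 100.0), "silo_level_pct": (20.0, 95.0)}
for alert in check_readings(readings, limits):
    print(alert)
```

In a real deployment the alert would become a push notification to that phone in your pocket, but the threshold check at the core looks much like this.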
Making that phone call to customer support is about as fun as making an appointment for a root canal. But what if the phone call wasn’t a call at all? Businesses are deploying mobile technologies to make customer support communication fast, but the next step is all about eliminating customer support in favor of customer service. Right now, just send an @reply on Twitter and many top brands will respond very rapidly (with no hold music). And when companies think of their customers more like partners, your connection to the folks who make, manage, and distribute consumer products changes your whole product experience.