Amazon AWS at HIMSS 2015

Concrete Interactive is available for meetings at HIMSS 2015, the healthcare IT conference in Chicago this April 12-16.

And I know you’ll be almost as excited to learn that this year, for the first time, Amazon will make a full-fledged appearance at HIMSS. Even more remarkable, leaders of the AWS HIPAA compliance team will be present and accepting meetings, including Chris Crosbie, HIPAA Solutions Architect; Jessie Beegle, Business Development Manager for the Healthcare Industry; and Kenzie Kepper of the AWS Healthcare Marketing Team.

You can request a meeting if you’re interested in learning more about hosting HIPAA applications on AWS. Here’s the signup link:

In my experience with the Amazon Popup Loft in San Francisco, the AWS team is very giving of their time and expertise. These aren’t your typical Apple “Genius” types who fall into a prescribed script about fixing your iPhone. The solution architects and technical team members who are available at the Popup Loft are the actual people with inside technical knowledge of the AWS service, and they have been happy to dive into our application details.

So, how does one implement a HIPAA compliant software application on Amazon Web Services? Back when Concrete Interactive built our first HIPAA app in 2012, assigning responsibility across the network infrastructure was quite a challenge. Nowadays, Amazon has drawn a bright line at the hypervisor, the piece of network virtualization software that manages a particular application’s server. Their shared responsibility model ensures that from the hypervisor outward, throughout the rest of the AWS network, it is Amazon’s responsibility to secure PHI.

AWS shares responsibility for PHI with BAA signatories like Concrete Interactive


AWS specifically supports HIPAA compliant infrastructure through six of their services today: Amazon EC2, Amazon EBS, Amazon S3, Amazon Redshift, Amazon Glacier, and Amazon Elastic Load Balancer.

Specifically on EC2, you must use a dedicated instance. This comes with a higher monthly fee, but it’s peanuts compared with building your own compliant datacenter.

According to Amazon’s HIPAA compliance video, over 600 companies (including us!) have signed their Business Associate Agreement. This agreement allows our HIPAA compliant apps to be validated, and makes clear where PHI responsibility lies depending on which side of the hypervisor line it is used, stored, or transferred.

If you are interested in meeting with Concrete Interactive at HIMSS 2015, please drop us a line. In partnership with Amazon AWS, and FDA Compliance Advisor David Nettleton, we hope to shed light on any of your HIPAA, healthcare, web or mobile app development questions.

Apple’s ResearchKit Puts Clinical Trials in Your Pocket

Building HIPAA compliant software has never been easy. Modern apps served from the cloud and enabled for mobile devices present even greater challenges. But imagine the potential for medical research, given the hundreds of millions of smartphones deployed globally, each equipped with dozens of sensors.

Last year when Apple introduced HealthKit for developers, the iPhone leapt suddenly into the ranks of integrated health trackers, along the lines of Fitbit and Jawbone activity trackers. But the iPhone has one major advantage over most other health tracking devices: built-in internet connectivity.

With Fitbit, Jawbone, Nike Plus, wifi-enabled scales, blood pressure monitors, and similar devices, users need to complete a multi-step setup process. The iPhone, by contrast, is ready to send useful data about steps walked or run, flights climbed, and many other sensor events straight to the cloud.

The FitBit Ultra requires additional software installation.


By providing the iOS Health app for free as part of iOS 8, Apple has given consumers a powerful new toolkit for tracking health data. The only problem is, this data is unavailable to researchers. There has been no way for researchers, doctors, hospitals or health administrators to access health data collected via HealthKit, even if a patient were willing to give consent. Until now…

The iOS Health App

ResearchKit, officially launching next month, provides a simplified, streamlined user interface framework for health apps to perform HIPAA-compliant clinical trial consent. According to Apple’s ResearchKit website, “With a user’s consent, ResearchKit can seamlessly tap into the pool of useful data generated by HealthKit — like daily step counts, calorie use, and heart rates — making it accessible to medical researchers.”

Apple has partnered with some impressive names in medical research, listing these on its website: The American Heart Association, Army of Women, Avon Foundation for Women, Dana-Farber Cancer Institute, Massachusetts General Hospital, Michael J. Fox Foundation for Parkinson’s Research, Icahn School of Medicine at Mount Sinai, Penn Medicine, University of Oxford, University of Rochester Medical School, Sage Bionetworks, Stanford Medicine, Susan G. Komen, UCLA Jonsson Comprehensive Cancer Center, Weill Cornell Medical College and Xuanwu Hospital Capital Medical University.

So what can ResearchKit do for the researcher? The ResearchKit developer framework is divided into three primary modules: Surveys, Informed Consent, and Active Tasks. A touch-based signature panel allows an app user to give informed consent right on their mobile device. The survey module provides a builder tool to specify types of questions and answers, akin to SurveyMonkey, Google Forms, or Wufoo. The Active Tasks module is where active data collection begins.

ResearchKit Signature Panel and Activity Completion

With an active task, ResearchKit allows the user to complete a physical task while the iPhone’s sensors perform active data collection. This data can then be securely transmitted to the cloud for inclusion in the study. For example, Stanford’s MyHeart Counts app has already had tens of thousands of enrollees in just the short time since its launch in March, a feat unequaled by any clinical trial.

This is just the beginning. Data collection will not be limited to the sensors native to the iPhone. External devices, communicating over bluetooth for example, can provide more data such as heart rate, temperature, and weight.

According to VentureBeat, “Google also announced last year that it is developing a contact lens that can measure glucose levels in a person’s tears and transmit these data via an antenna thinner than a human hair.” The New York Times also reports this device is being developed by Google in partnership with Novartis.

Glucose Monitoring Smart Contact Lens


Machine Learning: A New Tool for Humanity

Machine learning will have a profound impact on our lives. You may have heard the hype, or the fear mongering. Let’s take a closer look at what this technology has to offer, and whether there is really anything to fear.

First of all, machine learning isn’t just one thing, but a broad set of algorithms, tools, and techniques, combined with advances in computer processing and refined (human) expertise in making decisions based on available data.

There is more data available now than ever before because modern sensor technology has rapidly decreased in price, size and power consumption (witness everything from the iPhone to your car to your washing machine). Revolutionary developments of the past two decades in 3D graphics processors called Graphics Processing Units (or GPUs) make video games and movies more realistic. Interestingly the same mathematics that these GPUs accelerate are also applicable to machine learning (matrix operations).

Finally, today’s learning algorithms including deep neural networks and support vector machines are more advanced than ever and easier to use than ever. Together, the algorithms, the GPUs and the data, allow a kind of pattern recognition and inferencing we call machine learning. Another broad term for the use of this technology is “data science.” In short, machine learning is a new tool for humanity to gain insight into patterns that exist everywhere around us. So what is it good for?

Convolutional neural networks can infer sales revenue figures just by examining images of a store’s parking lot. Other algorithms can find patterns of fraud in credit card purchase data, or detect intruders in security camera footage. Fund managers can get a jump on the market, knowing which day to sell huge numbers of shares by making predictions about trading volume at market open. Insurance companies can decide which customers are a better risk by analyzing driving records, offering discounts to some while raising rates on others. And of course: self-driving cars, then computers that talk and understand, followed by robots that attack us (or will they)?

The human brain is a master of pattern recognition. Imagine how complex the tiny air movements we call sound must be, and yet speaking and understanding our native tongue is remarkably simple. How could a machine learn such a thing? Yet today, tools like Siri, Google Voice, and Nuance can convert speech to text. Translation and understanding are still out of reach.

The power of machine learning lies in algorithmic ability to find patterns in data, in much the same way that we find patterns in images we see, sounds we hear, behaviors we notice. These tools will touch every area of our lives, much the way the invention of the microscope gave us new insights that changed our view of the world. Insight. Whether used for good or for ill, machine learning algorithms are tools that provide insight.

Artificial intelligence, robots taking over the world: these are concerns that are quite a few steps away from the kind of data analysis machine learning algorithms provide. Let’s look more deeply at a simple machine learning problem to understand why. It’s a classic: identifying species of the iris flower. There are three common species, as pictured below: Iris Versicolor, Iris Virginica, and Iris Setosa.

We can learn to identify these species fairly reliably, and so can a machine learning algorithm. We don’t even need photographs, just a ruler. We measure a few characteristics, such as petal length and sepal length (the sepal is the flower’s outer enclosure). Voila, we have data! Here is a link to an actual iris data set.

Looking at the images of just a single iris, it’s fairly easy to see one of these flowers is not like the others (the Setosa). And while the Versicolor and Virginica may look more similar, a quick graph shows that as groups they are different enough to separate as well. Note that the Setosa is even further separated.

What is learning? Differentiating like from unlike. Identifying new examples as similar to what we know. We learn language by separating the sounds we hear into vowels, consonants, phonemes, words, phrases, and meanings. We learn the laws of physics (at first) by experimenting with water, blocks, and the ground. We differentiate a nice full water glass from a spill, a stack of blocks from a mess, and a stroll from, again, a spill. Differentiation is a kind of learning.

It is just that kind of learning that machine learning algorithms perform. Not thinking, just the ability to interpret the data an algorithm has seen to make predictions on examples the algorithm hasn’t yet seen. Obviously, there’s a lot more to it than that. Stay tuned for more posts where I will argue both that machine learning will be an incredible tool for humanity, and that it won’t lead to a robot president.

5 Industries that Need More Mobile Apps

Over 120 million Americans now have smartphones. That’s over 40% of the US population. And almost every one of them knows about email, Facebook, and Gumulon (and the other fun games of the moment). But when you consider that today’s smartphones have more compute power than all of NASA used to send astronauts to the moon, plus sensors for location, proximity, acceleration, and compass heading, and two high-resolution cameras, it’s time to start thinking of these devices in new ways that can benefit a wide range of industries. Here are 5 industries we think stand to benefit from these amazing devices, and applications they might employ.

Home Furnishings

Augmented reality is a fancy way of saying that a computer generated image is displayed over live video from the camera. When shopping for home furnishings, whether at Ikea or Crate and Barrel, or wondering how that Eames Lounger will look in your living room, a mobile app can show you what your furniture will look like in your home or office. The amazing 3D graphics of Hollywood movies are now available in handheld form; just imagine seeing how new carpet, flooring, cabinets, appliances, or art will look in this spot (no, over there, a bit to the left). Think this is just something of the future? Check out these augmented reality apps and see for yourself.


Restaurants

You already use your mobile phone with OpenTable, Yelp, Urban Spoon, or maybe just Google Maps, but there’s more to come from the mobile world that will transform not just the restaurant discovery experience, but dining itself, not to mention restaurant ownership and operations.
Soon you will review nearby listings and be greeted by a local restaurant’s maître d’: the special tonight is a fresh seared Ahi, and the chef has a table near the window we think you’d enjoy. Come on down and we’ll send you a free glass of wine and an appetizer. Oh, and by the way, those peanut allergies will not be a problem with any of the items we recommend for you.
A restaurant owner will soon be able to take a snapshot of their menu, and OCR software will instantly update their mobile site, so users walking by can know exactly what’s hot (literally) at this spot.


Healthcare

From electronic health records (EHR) to checking for drug interactions, to refilling prescriptions, both doctors and patients already carry mobile apps in their arsenal, but prepare for future shock when remote diagnosis, doctor-patient video chat, social network support groups, and even health equipment monitoring connect to smartphones and tablets. This is all made possible by recent software technology advances for HIPAA compliance to protect patient privacy, and digital communication standards such as Health Level 7 (HL7) that allow a wide range of medical devices to talk with each other and with external devices.

Industrial Control Systems

Industrial Process Control is a set of devices and software tools that allow factory managers to monitor and control the operation of manufacturing or industrial production equipment. A new generation of wireless sensor technology, called Zigbee, allows industries to create mesh networks of sensors, so the next time that pressure gauge is reading a bit too high, or a silo level is a bit too low, you’re notified instantly, in your pocket or on your tablet.

Customer Support

Making that phone call to customer support is about as fun as making an appointment for a root canal. Yet, what if the phone call wasn’t a call at all? Mobile technologies are being deployed by businesses to make customer support communication fast, but the next step is all about eliminating customer support in favor of customer service. Right now, just send an @reply on Twitter and many top brands will respond very rapidly (with no hold music). And when companies think of their customers more the way they think of partners, your connection to the folks who make, manage, and distribute consumer products changes your whole product usage experience.

How Social Media Makes Us More Polite

Spam. Viruses. Flame wars. Trolls. Hackers. The internet sure seems like a rough and tumble place. But is it possible that using social media makes us more respectful of each other and more polite? Being polite, whether online or off, is strongly influenced by your relationship with the person on the other side of the conversation. If there’s no chance of seeing that person in real life, doing business with them in the future, or understanding much about them, there’s a much higher incidence of an impolite interaction: anonymity breeds rudeness.

And the more anonymous, the more rude. So, online forums that allow anonymous posting or do little user validation are breeding grounds for bile and the vile (see 4chan… if you dare). Even Reddit, which requires that its users reveal zero personal information, still has a reputation system that rewards constructive participation.

By contrast, social networks that promote authenticity and personal identification encourage politeness. Be yourself, and you have to stand by your rep. LinkedIn, AirBnB, even eBay are great examples, though more private, invite-only social networks like A Small World demonstrate the concept as well.

Is it possible that the era when the pushiest jerk gets ahead is coming to a close? Maybe not, but the revenge of the nerds has certainly proven its might, and those who participate in the global conversation with impeccable ethics may just see rewards beyond those knowing smiles.

How to Deploy a Rails Application Anywhere with Chef

What Can You Do with Chef?

Here are a few things we did (and you can do) with Chef:

  • Deploy a Rails application to EC2
  • Quickly set up a Jenkins Continuous Integration Server
  • Allow all developers on your team to work with the same virtual machine for development and testing
In this article we will show how to create your first Chef cookbook and use it to deploy a Ruby on Rails application.

What is Chef?

Chef is a configuration management tool. It is used to automate machine configuration and integration into your infrastructure. You define your infrastructure in configuration files, and Chef takes care of setting up individual machines and linking them together. You can read more about what Chef is here.

Chef architecture

Hosted/Private Chef

Hosted/Private Chef architecture consists of several parts:

  • Chef repository contains all your Chef artifacts. It’s recommended to have it in your version control system.
  • Developer machine issues knife commands. knife allows you to push Chef artifacts to Chef server or query information about your infrastructure from Chef server. You can also use knife to manually execute commands on nodes in your infrastructure.
  • Chef server is a central point of Chef architecture. It has all your cookbooks and settings. It tracks information about all nodes in your infrastructure.
  • Nodes are machines managed by Chef. Nodes pull cookbooks and configuration from Chef server.

Chef Solo

Chef Solo is a lightweight Chef solution that doesn’t require a Chef server. However, it’s not designed to manage multiple machines. In Chef Solo mode you need to have your Chef repository on the node you’re going to set up. Later in this article we show you how to use Chef Solo to test your Chef recipes and set up a Vagrant virtual machine. You can also use Chef Solo to set up a development environment on your own computer.

Chef artifacts


Cookbooks

Cookbooks are the most important Chef artifacts. They contain default configuration, configuration file templates, resource providers, helper scripts, files, and recipes. The most interesting part of a cookbook is its recipes. Recipes are sets of instructions that perform some kind of procedure; usually they install and configure some service, but not necessarily.
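As a tiny illustrative sketch (not part of our example application), a recipe that installs nginx, renders its config from a template shipped in the cookbook, and keeps the service running might look like this; the template name and file paths here are hypothetical:

```ruby
# Install the nginx package from the platform's package manager.
package 'nginx'

# Render the config from a template in this cookbook's templates folder,
# and reload nginx whenever the rendered file changes.
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner  'root'
  group  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]'
end

# Make sure the service starts now and on boot.
service 'nginx' do
  action [:enable, :start]
end
```

Each block is a resource declaration: you describe the desired state, and Chef figures out what commands to run to get there.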

Data Bags

Imagine you were setting up a brand new Linux instance by hand. You’d probably create some user accounts – you could store the usernames and passwords in your Chef Data Bag. You’d probably pull your application code from a git repository – you could store the git repository url in Chef Data Bag. You might add an SSH private key – you could store the SSH private key in your Chef Data Bag. Think of the Data Bag as a key-value store containing parameters needed when you set up your new machine, or its applications. If you’re familiar with Heroku, anything that goes into your Heroku configuration is a likely candidate for your Chef Data Bag.

In practice, Chef Data Bags are saved on the Chef server and available for all nodes to access. Chef also provides encrypted data bags, so your passwords and access keys are secure.
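As a sketch, here is how a recipe might read such values with Chef’s data_bag_item helper; the bag name ‘deploy’, item ‘app’, and keys are hypothetical:

```ruby
# Fetch the (hypothetical) 'app' item from the 'deploy' data bag.
# On a Chef server the item would be uploaded from a JSON file with knife.
item = data_bag_item('deploy', 'app')

repo_url    = item['repository']  # e.g. the git URL to deploy from
deploy_user = item['username']    # the account that runs the app
```

The recipe stays generic; the environment-specific values live in the bag, much like a Heroku config.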


Roles

Chef roles define types of nodes in your infrastructure. They usually correspond to a service the node is running. You can use roles to group nodes, and a single node can be in multiple roles. A typical Rails application deployment infrastructure consists of the following roles:

  • Database server
  • Memcache/Redis server
  • Application server
  • Load balancer
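A role like the application server above might be defined in a hypothetical roles/app_server.rb like this; the role name, run list entries, and attribute keys are illustrative:

```ruby
# roles/app_server.rb (hypothetical): groups the recipes and settings
# that make a node an application server.
name 'app_server'
description 'Runs the Rails application under unicorn'

# Recipes applied, in order, to every node in this role.
run_list(
  'recipe[apt]',
  'recipe[example_application]'
)

# Role-level defaults that recipes can read as node attributes.
default_attributes(
  'example_application' => { 'rails_env' => 'production' }
)
```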

Prepare your kitchen

Before we begin you should have git and Ruby set up on your machine.

First of all, clone the Opscode Chef repository:

  git clone git://

We use a tool called Librarian-Chef. If you’re familiar with Ruby development, it’s a kind of Bundler for Chef cookbooks. It downloads and manages the community cookbooks that you specify in a Cheffile.

Install Librarian-Chef by running (you might need sudo if you use the system Ruby):

  gem install librarian-chef

Then initialize Librarian in your Chef repository with

  librarian-chef init

This command will create a Cheffile in your Chef repository. We are going to specify all our dependencies in that file. To deploy our example application, your Cheffile should look like this:

  site ''
  cookbook 'application_ruby'
  cookbook 'apt'
  cookbook 'user'
  cookbook 'ruby_build'

Now pull these community cookbooks with Librarian

  librarian-chef install

Librarian will pull the specified cookbooks and their dependencies into the cookbooks folder and create a Cheffile.lock file. You should commit both Cheffile and Cheffile.lock to your repository. You don’t need to commit the cookbooks folder, because you can issue the install command again and Librarian will pull the exact same cookbook versions. You should not touch the cookbooks folder; let Librarian manage it for you. Librarian will overwrite any changes you make inside that folder.

Create a folder, for example ‘my-cookbooks’, for new cookbooks.

The Application Cookbook

Now we are going to create our first cookbook. Let’s start by creating a new folder called ‘example_application’ inside the ‘my-cookbooks’ folder. We only need a recipes folder inside it. We are going to create a really simple cookbook with just a single recipe.

Inside our new cookbook we create a recipe to deploy our Ruby on Rails application. We will use the community cookbooks application and application_ruby, which define a lot of useful resources. It’s worth taking a look at the source code of those two cookbooks to get a better sense of how Chef deploys applications.

There are a lot of different ways to install Ruby on your server, and each of them has pros and cons. In this example we choose to build Ruby from source, but here are some options and our opinions on their merits:

  • RVM/rbenv, system-wide or per-user. While these tools are great for development environments because they let you manage multiple Ruby installations, they are not as good on the server side. Both RVM and rbenv require you to set up your shell environment so they can manage Rubies for you. That’s simple on your development machine: just customize the shell you use. On a server, however, processes run as different users and can be invoked from different scripts. For example, you may have a monit/runit or init.d script that runs your app server (usually started as root), tasks invoked by cron as a different user, and a deploy user running tasks like db:migrate. You need to make sure everything is RVM/rbenv aware.
  • apt/yum package: installs Ruby system-wide. Packages are usually pretty old, so you cannot use the latest Ruby versions. It’s possible to have multiple Rubies installed, but you can only change the Ruby version system-wide.
  • build from source yourself: this option has the same problems as package installation, but lets you install the latest Ruby versions.

We chose to build Ruby from source code. This approach keeps things simple to set up, saves time, and avoids common errors. We might need to switch to a different approach when we need to host multiple applications on the same server.

Put this recipe in a default.rb file under the recipes folder:

  # ensure the local APT package cache is up to date
  include_recipe 'apt'

  # install the ruby_build tool which we will use to build Ruby
  include_recipe 'ruby_build'

  ruby_build_ruby '1.9.3-p362' do
    prefix_path '/usr/local/'
    environment 'CFLAGS' => '-g -O2'
    action :install
  end

  gem_package 'bundler' do
    version '1.2.3'
    gem_binary '/usr/local/bin/gem'
    options '--no-ri --no-rdoc'
  end

  # create a new user that will run our application server
  user_account 'deployer' do
    create_group true
    ssh_keygen false
  end

  # define our application using the application resource
  # provided by the application cookbook
  application 'app' do
    owner 'deployer'
    group 'deployer'
    path '/home/deployer/app'
    revision 'chef_demo'
    repository 'git://'
    rails do
      bundler true
    end
    unicorn do
      worker_processes 2
    end
  end

There are various resources provided by the application_ruby cookbook that we can configure for our application. The application definition is pretty much self-explanatory at a high level. Dive into the application and application_ruby source code if you’re interested in how they work.

Deploy to a Vagrant virtual machine

Vagrant is a wrapper for Oracle’s VirtualBox that allows you to create and manage headless virtual machines. Its main purpose is to help you create reproducible development environments. For that purpose Vagrant lets you use various provisioning tools like Puppet, Chef Solo, or Chef Client. It’s also a great way to test your Chef cookbooks. You can install Vagrant from here.

Once you have Vagrant installed, add a base image by running the “vagrant box add <name> <image url>” command. It will download and register a new base image. In this case we use an Ubuntu 12.04 image named precise64.

  vagrant box add precise64

Create a new Vagrant environment by running “vagrant init <image name>”. This will create initial Vagrantfile with default environment settings.

  vagrant init precise64

Configure Chef Solo for your virtual machine like this:

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ["cookbooks", "my-cookbooks"]
    chef.roles_path = "roles"
    chef.data_bags_path = "data_bags"
    chef.add_recipe "example_application"
  end

Start your virtual machine by running

  vagrant up

When this command finishes we will have a virtual machine with our application deployed on it.

Deploy to your local machine using Chef Solo

I recommend using Vagrant to test your Chef recipes. However, it’s good to know how to use Chef Solo to run recipes on a local machine. If you don’t have a hosted/private Chef setup, you can use this method to set up remote servers too. I use Chef Solo to set up my own machine.

You need to create a Chef configuration file, let’s call it solo.rb:

  root = File.absolute_path(File.dirname(__FILE__))
  file_cache_path root
  cookbook_path [root + '/cookbooks', root + '/my-cookbooks']
  role_path root + '/roles'

You need another file with configuration JSON, let’s call it solo.json:

  { "run_list": ["recipe[example_application]"] }

Now you can run Chef with a command:

  sudo chef-solo -j solo.json -c solo.rb

Deploy to a new Amazon EC2 instance

(You need to set up hosted/private Chef and knife tool for this)

We found the Knife EC2 plugin to be really useful for setting up new Amazon EC2 instances. It’s a Ruby gem that you can install by running

  gem install knife-ec2

Now it’s just a matter of a single command to create and set up a new Amazon EC2 instance with our application:

  knife ec2 server create \
    -A 'Your AWS Access Key ID' \
    -K 'Your AWS Secret Access Key' \
    -S 'The AWS SSH key id' \
    -I ami-bb4f69fe \
    -x ubuntu \
    -d ubuntu12.04-gems \
    -f m1.small \
    -r 'recipe[example_application]'

Here -I is the Amazon Machine Image (AMI) id, -x the username to log in with, -d the bootstrap template for the vanilla OS, -f the EC2 instance type, and -r the run list for the new node.

Wrap up

Congratulations! If you were following along, you now have a recipe that you can use to deploy our example Ruby on Rails application on any machine. We hope it helps you create easily reproducible infrastructure for your applications, allowing you to move to a different cloud provider or your own hardware at any time.

Concrete Interactive Selected as Early Leap Motion Developer

It’s a privilege for us to be working with the futuristic visionaries over at Leap Motion. In the coming weeks we’ll report our progress building the first apps that take advantage of their 3D hand-tracking control.

Concrete Interactive brings extensive 3D graphics and interaction experience to the world of Leap’s new motion control. But for a start, we thought we’d remake a classic carnival game, and are proud to bring you a little demo of what the Leap is capable of: Smack-A-Rat.

Smack-A-Rat Game for Leap Motion Controller from Brett Levine on Vimeo.

Code as Content

Modern software practices were brought up on computer science pedagogy that promotes clean and beautiful object-oriented (OO) architectures, and fed a healthy dose of the open source-powered movement toward behavior driven development (BDD). Not quite all grown up yet, these efforts are aimed at producing high quality software outcomes. But there is still a missing link in the chain that connects a user’s interaction with a software developer’s intention when it comes to content.

The web is full of content. And the line between content and software is becoming more difficult to detect. While many can make persuasive arguments that coding is an art, “artwork” at least has a different character than “content,” and it doesn’t nest so nicely in software architects’ object hierarchies. Content requires different standards from both an OO and a BDD point of view.

On one side of the line, for example, no one would rightly call an image file “software”. Similarly, no one would call a sorting algorithm “artwork”. A procedural flocking animation written for the HTML5 canvas, though, is tricky.

When an artist (call them a creative coder) performs that mystical creative act of bringing a piece of artistic content into the world, more and more these days that act may involve writing code, but the artist may consider that code just a means to an end–expression.

So the point isn’t about drawing a line, so to speak, between software and art; this isn’t a metaphysical discussion. The application of OO and BDD techniques to art is different, and recognizing how to architect and how to test content vs. code may mean the difference between expending resources effectively and squandering them. I’ll save the implications of object-oriented architecture for art for another day; for now I’ll focus on the BDD side.

What would behavior driven development tests of artwork look like? Remember, BDD starts with user stories in human terms: what experience do I want the person using my software to have? For example, a common BDD story would be written like this: “As a Facebook user, when I login, then I should see my news feed.” And a programmer could write software tests that perform just those steps: log in to Facebook with a valid username and password, check what page appears, and verify it is the user’s news feed. If so, PASS.

If the question turns to, “what experience do I want the person viewing my content to have,” the BDD user story might become, “As a Facebook user, when I scroll my news feed, then I want to feel like it has a physical weightiness.” Yikes, how does a programmer write a software test for this nicely articulated artistic motivation? Weightiness? Well, artist, what do you mean by that?

Effective BDD would require the artistic motivation to be reduced to purely software terms, perhaps: “As a Facebook user, when I scroll my news feed, then its scrolling motion should decelerate gradually when I release my mouse.” This is not a naturally artistic statement. The artist had in mind to make a page that feels weighty, so involving words like deceleration pollutes the user story with the “hows” instead of sticking to the “whats.”

For non-programmer product designers, as opposed to artists, BDD requires expressing in human terms what software should do. But it can only succeed if the language those product designers use to describe the software is actually testable by the engineers who write the tests.

Let’s face it, art is hard to describe, and unless its creator is also a bona fide critic, even artists themselves usually do not possess the language to describe the reaction they wish to engender in their audience. Content is, in fact, not software, and should not be tested as software.

Content by its nature, even if it involves software, does not fit a software testing model. Perhaps a new model is needed to provide a softer, squishier, more qualitative way to evaluate whether content is operating effectively.

We can test whether an image renders on the screen correctly. We can test whether an animation with easing is slowing according to a cosine or a cubic curve. But how do we test if a user thinks the Facebook news feed feels weighty?
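The second kind of test is easy to write today. A sketch, assuming a hypothetical `ease_out_cubic` animation function: rather than comparing against hard-coded positions, we can assert the properties the curve implies, namely that the motion always advances and that each step is smaller than the last.

```ruby
# Hypothetical easing function under test: cubic ease-out over t in [0, 1].
def ease_out_cubic(t)
  1.0 - (1.0 - t)**3
end

# Sample positions across the animation's duration.
positions = (0..10).map { |i| ease_out_cubic(i / 10.0) }

# The motion should advance on every frame...
deltas = positions.each_cons(2).map { |a, b| b - a }
raise "animation not advancing" unless deltas.all? { |d| d > 0 }

# ...and each step should be smaller than the last (deceleration).
raise "animation not decelerating" unless deltas.each_cons(2).all? { |a, b| b < a }

puts "easing behaves like a cubic ease-out"
```

A test like this is mechanical and objective; the question of whether the result *feels* weighty remains out of its reach.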

I think the answer is: ask them. BDD forms part of a software “immune system,” but it is only one layer, just as unit testing and human quality assurance testing are others. With all of today’s emphasis on metrics, it is time for BDD to naturally extend outward, beyond the bounds of the continuous integration servers that execute software tests with each commit, beyond the confines of the software developer’s network, and out into the world, where people can look at a software feature combined with its content, react to it, and tell you what they think: analytically, scientifically, measurably.

I imagine a BDD user story of the future tying in with an analytics system like this: “As a Facebook user, when I scroll my news feed, then I should feel as if it were weighty.” And I imagine a test report with results such as “18% feel the news feed is somewhat weighty, 48% feel the news feed is very weighty, 4% feel the news feed is too weighty.” Some metrics frameworks, such as Qualaroo, and conversion experts, such as CROmetrics, can provide this sort of data, but to use them you have to separately set up the metrics system, ask the right questions, and compare the results outside the normal design and development cycle.
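Rolling such qualitative responses into a report is itself just a small aggregation step. A sketch, with survey responses invented for illustration:

```ruby
# Invented responses to "How weighty does the news feed feel?"
responses = ["somewhat weighty"] * 9 +
            ["very weighty"]     * 24 +
            ["too weighty"]      * 2 +
            ["not weighty"]      * 15

# Tally each answer and report it as a percentage of all responses.
total = responses.size.to_f
report = responses.tally.map do |answer, count|
  [answer, (100.0 * count / total).round]
end.to_h

report.each { |answer, pct| puts "#{pct}% feel the news feed is #{answer}" }
# With this sample: 18% somewhat weighty, 48% very weighty, 4% too weighty.
```

The hard part is not the arithmetic; it is wiring the question into the product and the answer back into the test report.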

Software maturity would benefit if the distance between qualitative usage measurement (in the wild) and the design of software were shortened, in the same way that BDD shortens the distance between usage correctness and software implementation. It is time for customer experience data to integrate seamlessly with BDD toolkits, so that artistic intentions, no matter how much or how little code is involved, can be measured alongside the business and usage goals of software products.

Credits: What Is Art

How Concrete and Zencoder brought high quality video to HitRECord

Thanks to Zencoder and HitRECord for the kind words. Encoding high quality video was a snap with their Ruby on Rails cloud technology.

Reposted from:

With infinite “airtime” and consumer demand going through the roof, a consistent theme in 2011 and now for 2012 is that high quality content is essential for entertaining and engaging viewers. To fill the pipes, big companies are pouring big bucks into producing original content for online distribution. Google is investing $100M in original content for YouTube. We all rejoiced as Netflix resurrected “Arrested Development”, and Yahoo landed a Hollywood whale in Tom Hanks and his animated series.

In recognition of the importance of content in advancing the online video industry, we’re highlighting our partner Concrete Interactive and the amazing product that they’ve built for HitRECord. HitRECord, founded by actor Joseph Gordon-Levitt, not only facilitates the creation of high quality Internet video but is also unique in that it’s democratizing, and bringing real innovation to, the process of content creation.

Founded in 2005, HitRECord has established itself as a unique destination for video artists, filmmakers, writers, animators, musicians, and videographers to collaborate and interact with Gordon-Levitt and others on a wide variety of creative endeavors. HitRECord’s library has grown to over 20,000 complete videos. Almost 1,000 contributors have used HitRECord to create films and video, which are available in the “TheRecord Store”. Thousands more have contributed to films shown at HitRECord’s live shows. When a production makes money, there’s a 50/50 revenue split between HitRECord and the co-creators.

Even though we didn’t get to hobnob with the stars in Park City, we were excited to hear of the successful showcase of HitRECord’s works, which were featured last week at the prestigious Eccles Theater as part of the 2012 Sundance Film Festival. Joseph Gordon-Levitt emceed a sold-out screening, presenting films that were developed on HitRECord.

A few factors are making this explosion of Internet content possible. First, it’s getting cheaper and cheaper to create content. The cost of HD cameras and powerful editing suites have fallen rapidly, making it possible to create professional content on a small budget. On the distribution end, highly efficient software and cloud-based infrastructure make it possible to rapidly deploy Internet video applications with a relatively small upfront investment.

To build HitRECord, Concrete Interactive, the San Francisco-based boutique web application development firm, used technologies such as Ruby on Rails, Heroku, Amazon S3, jQuery, New Relic, Splunk, Airbrake, and Kissmetrics. They use Zencoder, and relied on our industry-leading performance to encode HitRECord’s library. They were able to convert tens of thousands of videos into web and mobile outputs overnight, for playback across a variety of devices.

Most importantly, the end product is a very high performance platform that facilitates the creative output of its users. Joseph Gordon-Levitt, or “RegularJoe” on HitRECord, said, “Since working with Concrete Interactive and Zencoder, HitRECord’s video upload process has been smoother, the quality is excellent and processing times are really quick.”

Software Quality

What is Software Quality?

If you speak of software quality, what do you mean?  Product managers generally mean they want their features to work as designed across all target platforms.  Project managers want software to be completed on time and on budget.  Executives want customers to pay good money for a good experience.  And software developers want to build efficient code that gets deployed to their user audience.  In discussions with my clients, that’s usually about as far as it goes, “We want high quality software.”

In this article, I discuss ways to dissect software quality into six relevant areas: ruggedness, architecture, performance, scale, security, and process.  Each of these aspects of quality can then be prioritized, and following each area I highlight actionable ways to improve (or take shortcuts) in your software project.

Download this article as a PDF

Time. Features. Quality:  Pick Two.

When asked, “What is most important to you for this project?  Time, features, or quality: pick two,” almost everyone these days will actively choose to have a software project come in on time, and be of high quality, while accepting a somewhat more limited feature set.

But maybe this software adage has become as outmoded as the waterfall model. After all, the time we apply to a software project can be shorter or longer.  The features can be many or fewer.  So how can we match software quality in its manifold aspects and degrees to the goals of a project?

First it is necessary to realize that there is no single thing called “high quality software.”  Quality, when it comes to software, is a way of saying how well a program solves its goal problem.  Let’s break that down into practical areas, since almost all software projects must face choices for budget allocation.

Ruggedness

Software ruggedness is most commonly, and mistakenly, viewed as overall software quality.  In short, rugged software is written correctly, without bugs or user interface flaws, and works well across all its target platforms.  Building rugged software is probably the most widely understood software development practice.  Developing rugged software typically involves labor-intensive approaches such as manual quality assurance testing and bug tracking systems (e.g. FogBugz, Lighthouse, Bugzilla, JIRA, Trac).

To ensure a software project becomes rugged, most software development teams “put it to the test.”  An adversarial practice is promoted between QA testers and developers.  This is often viewed as healthy, since many developers tend to label a feature as completed and move on to implementing the next ticket in their queue before engaging in the arduous task of cross-platform and stress testing.  Automated testing, using tools such as Selenium, Fake, or custom scripts, aids the manual processes, but it does not reduce the effort required; it shifts the burden of manual testing onto constructing automated tests, which may themselves contain inaccuracies.
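To illustrate what that constructed burden looks like, here is a minimal automated check in plain Ruby against a hypothetical `titlecase` helper (the helper and its cases are invented for this sketch); note that each expectation must be encoded by hand, and a wrong expectation is a wrong test:

```ruby
# Hypothetical helper under test; names are illustrative.
def titlecase(str)
  str.split.map(&:capitalize).join(" ")
end

# Each expected value was a human decision; writing these is the new burden.
cases = {
  "hello world" => "Hello World",
  "the big reckoning" => "The Big Reckoning",
  "" => ""
}

cases.each do |input, expected|
  actual = titlecase(input)
  unless actual == expected
    raise "titlecase(#{input.inspect}) => #{actual.inspect}, expected #{expected.inspect}"
  end
end
puts "all checks passed"
```

Once written, checks like these run on every build for free, which is where the shifted effort pays off.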

Architecture

The single most important aspect of producing high quality software is matching software architecture to its problem domain (read Domain Driven Design).  At times the ultimate domain is not known, such as when a start-up company tries a new concept in the marketplace.  Often the iterative nature of market feedback, combined with an agile software development process, results in patchwork architecture ill-suited to the adapted domain.  Refactoring is the process by which a new software architecture is put in place of an existing one, containing the same feature set but, with increased capacity to solve problems in a new domain.

Refactoring is often seen by product managers as a costly and time-consuming enterprise that developers want for “code cleanliness” but with dire consequences for schedule and budget.  Yet without refactoring, a “dishes in the sink” approach can become endemic to the team mindset, wherein another dish, or in this case a poorly architected feature, gets added to the pile until a major reckoning is required.

Agile methodologies reduce the need for major refactoring by dissolving it into a relatively continuous process done along with the tasks of each sprint.  By shortening milestone deadlines into sprints, say three or four weeks, and reconciling the architecture improvements needed for each milestone, a clean household is maintained, and software architecture is built up continually to match the next few milestones.

  • Enforce use of Design Patterns: have developers do brown bag lunches, put this book on every developer’s desk.
  • Code up architecture for the current release.  Draw out architecture for the next two releases.

Performance

When it comes to massive database crunching, 3D game development, or scientific computing, careful attention is normally paid to algorithmic performance (as opposed to application scaling, addressed below).  However, in the course of building web applications, 2D Flash games, or mobile apps, performance is usually examined only when it becomes an issue.  This could be when the number of sprites in a Flash game slows rendering, when mobile memory overflow causes crashes, or when web database queries become unacceptably long.  Code performance is another area where attention to quality at the right level should be built in as solutions are developed.
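When performance does become an issue, measuring before optimizing is cheap. A sketch using Ruby's standard `Benchmark` library to compare two approaches to string building (the workload is illustrative):

```ruby
require "benchmark"

n = 10_000

Benchmark.bm(10) do |bm|
  # Repeated concatenation allocates a new string on every iteration.
  bm.report("concat:") do
    s = ""
    n.times { |i| s = s + i.to_s }
  end

  # Appending in place mutates a single buffer.
  bm.report("append:") do
    s = ""
    n.times { |i| s << i.to_s }
  end
end
```

A few minutes with a benchmark like this usually settles "is it fast enough?" more reliably than intuition does.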

Scale

In contrast to performance, scale is often considered at the outset of web application projects, where eager management teams assert the need to service hundreds of thousands of concurrent users a la Twitter.  Technologies such as Ruby on Rails, Heroku, Google App Engine, virtualization, and Amazon EC2 make scaling web applications easier than ever, and there is very little development cost to throwing more hardware at many common problems.  Of course, good database design, server APIs, and application server caching schemes go a long way toward reducing hosting costs by using those resources more efficiently.
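The caching idea can be sketched in a few lines of plain Ruby: memoize an expensive query result so repeated requests skip the work. In a Rails application this role is typically played by `Rails.cache.fetch` backed by memcached or Redis; the toy in-memory hash below is illustrative only.

```ruby
# A toy in-memory cache keyed by query; a production system would use a
# shared store such as memcached or Redis behind an interface like Rails.cache.
CACHE = {}

# Stand-in for a slow database round trip.
def expensive_query(sql)
  sleep 0.01
  "result of #{sql}"
end

# Compute once, then serve subsequent identical requests from the cache.
def cached_query(sql)
  CACHE[sql] ||= expensive_query(sql)
end

cached_query("SELECT * FROM posts")   # slow: hits the database
cached_query("SELECT * FROM posts")   # fast: served from the cache
```

The design choice is the usual caching trade-off: cheaper reads in exchange for deciding when cached entries go stale.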

Security

Probably the most overlooked area of emphasis when developing a new software product or web service is security.  Security breaches are frequently recoverable, except for the damage done to customer trust (take the recent Sony debacle).  Life does (usually) go on after getting hacked, or after a Distributed Denial of Service (DDoS) attack takes down your site or application for a time, and scrambling to plug holes is the normal route.  But as with all other areas of software quality, a few simple steps at the beginning or middle of the development cycle can prevent the vast majority of infiltrations.  Spending a day or two on threat modeling, applying security patches, and reading documentation about how to defend against common attacks is equivalent to locking your door when you leave the house: not impenetrable, but likely to encourage the attacker to move along to the next property.

Process

Light enough not to be burdensome, yet sophisticated enough to fit the tasks at hand, software development processes should fit the team, the company, and the product.  Easier said than done.  Modern processes such as Agile overturn decades of doctrine involving heavy process, such as lengthy code comments and prolonged design cycles, in favor of iteration and, well, agility.  The best software processes are fun, and since development teams are human, it is easier to convince them to do fun things than arduous ones.  The danger of a methodology such as Agile, or of a goal definition such as minimum viable product, is that it can be used as an excuse for slipshod project management and coding practices.  To find balance in this realm is to focus on just what is important and to have the right toolset to support just that narrowest of procedures: daily scrum meetings, hand-drawn feature sketches, presenting paper prototypes to potential customers, continuous deployment tools like Hudson and Capistrano.

  • Inculcate Test Driven Development into your process.
  • Use fast, analog approaches where possible: whiteboard, stickies, physical calendar on the wall.
  • Draw paper prototypes by hand (or with a ruler) and iterate frequently.
  • Tell your project manager to scan each design iteration and post to a wiki page.
  • Go to a cafe and buy strangers lunch in exchange for spending 5 minutes playing with your product on a laptop or phone.


It is my hope that by developing a deeper working definition and understanding of these six aspects of software quality, software projects will avoid many quality-related pitfalls, and make better decisions about how to allocate effort.  It isn’t necessary to spend time on every one of these areas, but it is necessary to consciously decide whether or not to.